Electronics/Capacitor Construction. Electrolytic capacitor. From Wikipedia, the free encyclopedia: Electrolytic_capacitor

An electrolytic capacitor is a type of capacitor with a larger capacitance per unit volume than other types, making it valuable in relatively high-current and low-frequency electrical circuits. This is especially the case in power-supply filters, where electrolytic capacitors store the charge needed to moderate output voltage and current fluctuations, at the frequency (or twice the frequency) of the AC input power, in rectifier output, and especially in the absence of rechargeable batteries that could provide similar low-frequency current capacity.

Electrolytic capacitors are constructed from two conducting aluminium foils, one of which is coated with an insulating oxide layer, and a paper spacer soaked in electrolyte. The foil insulated by the oxide layer is the anode, while the liquid electrolyte and the second foil act as the cathode. This stack is then rolled up, fitted with pin connectors and placed in a cylindrical aluminium casing. (Two popular geometries use axial leads, or two leads or lugs in one circular face of the cylinder, respectively.)

A common modelling circuit for an electrolytic capacitor has the following schematic:

               R_leak
              _______
        o----|_______|----o
        |                 |
  (+)   |      ||         |     _______               (-)
  0-----o------||---------o----|_______|----oOoOoOo----0
               ||
               C                 R_ESR       L_ESL

where R_leak is the leakage resistance, R_ESR is the Equivalent Series Resistance, and L_ESL is the Equivalent Series Inductance (L being the conventional symbol for inductance).

R_ESR must be as small as possible, since it determines the power lost when the capacitor is used to smooth voltage. Loss power scales quadratically with the ripple current flowing through the capacitor and linearly with R_ESR. Low-ESR capacitors are imperative for high efficiency in power supplies. Note that this is only a simple model and does not include all the effects associated with real electrolytic capacitors.
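As a quick illustration of why ESR matters, the dissipation relation above can be sketched in Python. The ripple-current and ESR figures below are illustrative assumptions, not values from any particular part:

```python
# Power dissipated in a capacitor's ESR by ripple current: P = I_rms^2 * R_ESR.
# Loss grows quadratically with ripple current and linearly with ESR.

def ripple_loss_watts(i_ripple_rms_a: float, esr_ohms: float) -> float:
    return i_ripple_rms_a ** 2 * esr_ohms

# Hypothetical example: 2 A RMS of ripple through a 50 milliohm ESR.
print(ripple_loss_watts(2.0, 0.05))  # → 0.2 (watts)
```

Halving the ESR halves the loss, but halving the ripple current quarters it; this is why the ripple-current rating and ESR are usually read together on a datasheet.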
Since the electrolyte evaporates, design life is most often rated in hours at a set temperature: typically 2000 hours at 105 degrees Celsius (the highest working temperature). Design life doubles for each 10 degrees lower, reaching about 15 years at 45 degrees. Electrolytic capacitors may explode when charged with the wrong polarity, which is why they have a weak-link safety valve.

What are they made of? Aluminium electrolytic capacitors contain aluminium foil, porous paper, and an electrolyte. The electrolyte is usually boric acid or sodium borate in water, with some sugars or ethylene glycol added to retard evaporation. While you should not eat it, nor get it in your eyes, it is not very corrosive or dangerous; simply wash it off your skin after you gut the old capacitor. Wet-slug tantalum electrolytics (low voltage, and so expensive that generally only the military can afford them) do contain sulfuric acid. These are unlikely ever to be found in audio equipment, but it is important to always be careful and not take chances: safety glasses are always advised, and with chemicals, extra precaution is always worthwhile.

External link * Electrolytic Capacitors
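The doubling rule above lends itself to a one-line estimate. The sketch below assumes the rule of thumb exactly as stated (life doubles per 10 degrees below the rated temperature); real capacitor datasheets may use somewhat different models:

```python
# Rough design-life estimate: rated life doubles for every 10 °C below
# the rated temperature (the rule of thumb quoted above).

def design_life_hours(rated_hours: float, rated_temp_c: float, ambient_c: float) -> float:
    return rated_hours * 2.0 ** ((rated_temp_c - ambient_c) / 10.0)

# 2000 h rated at 105 °C, operated at 45 °C: six doublings.
hours = design_life_hours(2000.0, 105.0, 45.0)
print(hours)               # → 128000.0
print(hours / (24 * 365))  # about 14.6 years, i.e. roughly the "15 years" above
```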
Electronics/Diagnostic Equipment. =Diagnostic and Testing Equipment= There is a wide array of devices used to test and diagnose electronic equipment. This chapter will attempt to explain the differences between the types of equipment used by electronics technicians and engineers.

Ammeter. An ammeter measures current. Current in electronics is usually measured in milliamperes (mA), which are thousandths of an ampere. The ammeter's terminals must be "in series" with the current being measured. Ammeters have a small resistance (typically 50 ohms) so that they have only a small effect on the current. Basically, an ammeter consists of a coil that can rotate inside a magnet, with a spring trying to push the coil back to zero. The larger the current that flows through the coil, the larger the angle of rotation, the torque (a rotary force) created by the current being counteracted by the return torque of the spring. Usually ammeters are connected in parallel with various switched resistors that can extend the range of currents that can be measured. Assume, for example, that the basic ammeter is "1000 ohms per volt", which means that to get the full-scale deflection of the pointer a current of 1 mA is needed (1 volt divided by 1000 ohms is 1 mA - see "Ohm's Law"). To use that ammeter to read 10 mA full-scale, it is "shunted" with another resistance, so that when 10 mA flows, 9 mA will flow through the shunt and only 1 mA through the meter. Similarly, to extend the range of the ammeter to 100 mA, the shunt will carry 99 mA, and the meter only 1 mA.

Ohmmeter. An ohmmeter measures resistance. The two terminals of the ohmmeter are each placed on a terminal of the resistance being measured. This resistance should be isolated from other effects (it should be taken out of a circuit, if it is in one). Ohmmeters are basically ammeters that are "connected to an internal battery, with a suitable resistance in series".
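The shunt and series-loop arithmetic described in this section can be sketched as follows. The 1000-ohm, 1 mA movement matches the "1000 ohms per volt" example used in the text; everything else is illustrative:

```python
# Meter arithmetic for the examples in this chapter (1 mA full-scale movement).

def shunt_resistance(r_meter: float, i_full_scale: float, i_range: float) -> float:
    """Parallel shunt so the movement carries i_full_scale when i_range
    flows in total; both branches see the same voltage drop."""
    return r_meter * i_full_scale / (i_range - i_full_scale)

def ohmmeter_reading(r_internal: float, i_full: float, i_meas: float) -> float:
    """External resistance inferred from the series battery loop:
    with V = i_full * r_internal, R_ext = r_internal * (i_full / i_meas - 1)."""
    return r_internal * (i_full / i_meas - 1.0)

# Extend a 1 mA, 1000-ohm movement to read 10 mA: the shunt carries 9 mA.
print(round(shunt_resistance(1000.0, 0.001, 0.010), 1))  # → 111.1 (ohms)
# Half-scale deflection on the ohmmeter means R_ext equals R_int.
print(ohmmeter_reading(1000.0, 0.001, 0.0005))           # → 1000.0
```

Note how `ohmmeter_reading` is non-linear in the measured current, which is exactly why the ohms scale on an analog meter is crowded at one end.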
Assume that the basic ammeter is "1000 ohms per volt", meaning that 1 mA is needed for full-scale deflection. When the external resistance connected to its terminals is zero (the leads are connected together at first, for calibration), the internal variable resistor in series with the ammeter is adjusted so that 1 mA will flow; that depends on the voltage of the battery, and as the battery runs down that setting will change. The full-scale point is marked as zero resistance. If an external resistance is then connected to the terminals that causes only half of the current to flow (0.5 mA in this example), then the external resistance equals the internal resistance, and the scale is marked accordingly. When no current flows, the scale reads infinite resistance. The scale of an ohmmeter is NOT linear. Ohmmeters are useful for checking for short circuits and open circuits on boards.

Voltmeter. A voltmeter measures voltage. The voltmeter's terminals must be "in parallel" with the voltage being measured. Voltmeters have a large resistance (typically 1 megohm), so that they have only a small effect on the voltage.

Multimeter. A multimeter is a combination device, (usually) capable of measuring current, resistance, or voltage. Most modern models measure all three, and include other features such as a diode tester and a continuity tester, which emits a loud 'beep' if there is a short between the probes.

Oscilloscope. An oscilloscope, commonly called a 'scope' by technicians, is used to display a voltage waveform on a screen, usually graphing voltage as a function of time.

Spectrum Analyzer. A spectrum analyzer shows voltage (or power) densities as a function of frequency over a radio-frequency spectrum. A spectrum analyzer can use the analog frequency-scanning principle (like a radio receiver continuously changing frequency and measuring the received amplitude) or digital sampling and the FFT (Fast Fourier Transform).

Logic analyzer.
A logic analyzer is, in effect, a specialised oscilloscope. The key difference between an analyzer and an oscilloscope is that the analyzer can only display a digital (on/off) waveform, whereas an oscilloscope can display any voltage (depending on the type of probe connected). The other difference is that logic analyzers tend to have many more signal inputs than oscilloscopes - usually 32 or 64, versus the two channels most oscilloscopes provide. Logic analyzers can be very useful for debugging complex logic circuits, where one signal's state may be affected by many other signals.

Frequency counter. A frequency counter is a relatively simple instrument used to measure the frequency of a signal in hertz (cycles per second). Most counters work by counting the number of signal cycles that occur in a given time period (usually one second). This count is the frequency of the signal in hertz, which is shown on the counter's display.

Electrometer. An electrometer is a voltmeter with extremely high input resistance, capable of measuring electrical charge with minimal influence on that charge. Ubiquitous in nucleonics, physics and bio-medical disciplines, it enables the direct verification of charge measured in coulombs according to Q=CV. Additionally, electrometers can generally measure current flows in the femtoampere range, i.e. 0.000000000000001 ampere.
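The Q = CV relation mentioned above is simple enough to check numerically; the capacitance and voltage in this sketch are arbitrary illustrative values:

```python
# Charge stored on a capacitance C held at voltage V: Q = C * V.

def charge_coulombs(capacitance_f: float, voltage_v: float) -> float:
    return capacitance_f * voltage_v

# A hypothetical 100 pF electrode charged to 5 V stores 500 pC, a charge
# small enough that only a very high input resistance leaves it undisturbed.
print(charge_coulombs(100e-12, 5.0))  # → 5e-10
```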
Robotics/Components/Actuation Devices/Air muscle. The concept of a fully autonomous, mission-capable, legged robot has for years been a Holy Grail of roboticists. Development of such machines has been hampered by actuator and power technologies, and by control schemes, that cannot hope to compete with even some of the “simplest” systems found in the natural world.

Biomimetics. Faced with such a daunting task, it is not surprising that more and more researchers are beginning to look toward biological mechanisms for inspiration. Biology provides a wealth of inspiration for robot design: there are millions of species of animals that have evolved efficient solutions to locomotion and locomotion control.

Legs, wheels, treads. Insects in particular are well known not only for their speed and agility but also for their ability to traverse some of the most difficult terrains imaginable; insects can be found navigating rocky ground, walking upside down, climbing vertical surfaces, or even walking on water. Furthermore, insects almost instantly respond to injury or removal of legs by altering stance and stepping pattern to maintain efficient locomotion with a reduced number of legs [1]. Given the ultimate goal of autonomy, this ability to reconfigure locomotion strategies will be crucial to the robustness of autonomous robots [2]. There are of course other mechanisms capable of producing locomotion, most notably wheels and caterpillar treads. While these devices are admittedly much easier to design and implement, they carry with them a set of disadvantages that inhibits their use in military or exploratory applications. Primary among these limitations is the simple fact that wheels, and to a lesser extent treads, are not capable of traversing terrain nearly as complex as that which a legged vehicle can maneuver over [2].
Even wheeled and tracked vehicles designed specifically for harsh terrain cannot maneuver over an obstacle significantly shorter than the vehicle itself; a legged vehicle, on the other hand, could be expected to climb an obstacle up to twice its own height, much as a cockroach can. This limitation on mobility alone means that in any environment without fairly flat, continuous terrain, a walking vehicle is far preferable to a wheeled or tracked one. Legged vehicles are also inherently more robust than those dependent on wheels or tracks. The loss of a single leg on a hexapod will result in only minimal loss of maneuverability; on a wheeled vehicle a damaged wheel could spell the end of mobility, and a damaged caterpillar tread almost always results in catastrophic failure. Finally, legged vehicles are far more capable of navigating an intermittent substrate—such as a slatted surface—than wheeled vehicles [3]. Given the preceding argument for the use of legged locomotion in certain environments, one is left with the daunting task of actually designing an efficient legged robot. While such a task is difficult to say the least, nature has provided us—literally—with a world full of templates. Animals can be found that are capable of navigating almost any surface, and it is from these natural solutions to locomotion problems that engineers are more and more often seeking inspiration.

Actuators. 2. Actuator Selection. The selection of actuators plays a pivotal role in any mobile robot design, as the shape, size, weight and strength of an actuator must all be taken into account, and the power source of the actuators often provides the greatest constraint on a robot’s potential abilities. Actuators act as the muscles and joints of a robot to bring about motion.

Muscle. Biological organisms have a great advantage over mechanical systems in that muscle, nature’s actuator of choice, has a favorable force-to-weight ratio and requires low levels of activation energy.
Muscle’s tunable passive stiffness is also well suited to energy-efficient legged locomotion. The most frequently used actuators, electric motors and pneumatic/hydraulic cylinders, are far from equivalent to their biological counterparts.

Electric motor. Electric motors are probably the most commonly used actuation and control devices in modern-day robotics, and with good reason. Motors in a wide range of sizes are readily available, and they are very easy to control. These devices are also fairly easy to implement, normally requiring just a few electrical connections. However, electric motors have several disadvantages. Most importantly, their force-to-weight ratio is far lower than that of pneumatic and hydraulic devices, and in a field such as legged robotics, where weight is of the utmost importance, this makes them unsuitable for many applications. Typically, electric systems have a power-to-weight ratio of 50-100 W/kg (including only motor and gear reducer, operating at rated power), whereas fluid systems produce 100-200 W/kg (including actuator and valve weights) [4], and biological muscle, which varies widely in properties, produces anywhere from 40-250 W/kg [5]. In addition, when trying to take advantage of an animal’s efficient biomechanical design, the drastic difference between the rotary motion of most electric motors and the linear motion of muscle can cause complications.

Pneumatic and hydraulic cylinder. Pneumatic and hydraulic cylinder systems eliminate some of the problems associated with electric motors [6]. As a general rule, they provide a significantly higher force-to-weight ratio than motors; an advantage that in itself often leads to their use, even given the increased complexity and weight of the control valves and pressurized fluid lines required for operation. These actuators also produce linear motion, which makes them better suited to serving a role equivalent to muscle.
Unfortunately, air cylinders are better suited to “bang-bang” operation; that is, motion from one extreme to another with mechanical stops to halt motion. Smooth walking motion requires a much larger range of states, and the friction present in most pressure cylinders makes even coarse position control difficult. Fluid pressure devices are still quite massive; for example, almost seventy-five percent of CWRU’s Robot III’s weight is composed of its actuators and valves [7].

Braided pneumatic actuator. Braided pneumatic actuators (BPAs) provide a number of advantages over conventional actuation devices, and share some important characteristics with biological muscle. These devices consist of two major components: an inflatable bladder around which is wrapped an expandable fiber mesh (Figure 1). The resulting actuator is significantly lighter than a standard air cylinder; however, the braided pneumatic actuator is actually capable of producing greater forces (and thus possesses a much higher force-to-weight ratio) than its heavier counterpart. When the bladder is filled with pressurized air, its volume tends to increase. Because of the constant length of the mesh fibers, this can only be accomplished by the actuator expanding radially while simultaneously contracting along its axis. The result is a muscle-like contraction that produces a force-length curve akin to the rising phase of actual muscle [8].

Figure 1: A placeholder

An important property of BPAs to note is that at maximum contraction (L/Lo ≈ 0.69) the actuator is incapable of producing force; conversely, the maximum possible force is produced when the actuator is fully extended. Therefore, similar to muscle, the force output of these actuators is self-limited by nature.
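The self-limiting force-length behavior described above can be caricatured with a deliberately simple model. The sketch below is a hypothetical linear interpolation built only from the two endpoints stated in the text (zero force at L/Lo ≈ 0.69, maximum force at full extension); a real BPA follows a curve, not a straight line:

```python
# Hypothetical, linearized BPA force-length model. Only the endpoints are
# taken from the text: force is zero at the maximum contraction ratio
# (L/L0 ~ 0.69) and maximal at full extension (L/L0 = 1).

L_MIN_RATIO = 0.69  # contraction ratio at which force output reaches zero

def bpa_force(f_max: float, length_ratio: float) -> float:
    if length_ratio <= L_MIN_RATIO:
        return 0.0  # fully contracted: no force, the self-limiting property
    frac = (length_ratio - L_MIN_RATIO) / (1.0 - L_MIN_RATIO)
    return f_max * min(frac, 1.0)

print(bpa_force(100.0, 1.00))  # fully extended → 100.0
print(bpa_force(100.0, 0.69))  # fully contracted → 0.0
```

Even this crude model captures why an unstable controller cannot drive the actuator past its own geometry: force falls to zero as contraction approaches the limit.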
While an electric motor controller could conceivably become unstable and drive a system until failure of either the structure or the motor, a braided pneumatic actuator driven by an unstable controller is less likely to be driven to the point of damaging itself or the surrounding structure. Because of this property, braided pneumatic actuators are well suited to the implementation of positive load feedback, which is known to be used by animals including cockroaches, cats and humans [9]. BPAs are also known as McKibben artificial muscles [10], air muscles, and rubbertuators. They were patented in 1957 by Gaylord and used by McKibben in orthotic devices [11]. Like biological muscle, BPAs are pull-only devices. This means that they must be used in opposing pairs, or opposing some other antagonist. This property is of significant importance for useful application of these devices, for although it requires the use of two actuators or sets of actuators at each joint, it allows the muscle-like property of co-contraction, also known as stiffness control. If one considers a joint in the human body, such as the elbow or knee, it should be obvious that whatever position the joint is in, the muscles that control that joint can be activated (flexed) without changing the joint angle. From an engineering standpoint, this is accomplished by increasing the force produced by each muscle in such a way that the net moment produced at the joint is zero. As a result, the joint angle remains the same, but perturbations, such as the application of an outside force, result in less disturbance. From a practical standpoint, this means that the joint can be varied through a continuum of positions and compliances independently. The resulting joint can be stiff when needed, such as when bearing weight while walking, or compliant, as in cases of heel strike, where compensation for uneven terrain may be needed.
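The co-contraction argument above reduces to a simple moment balance. In the sketch below the two antagonist actuators are assumed to act on equal moment arms, an illustrative simplification (the arm length is arbitrary):

```python
# Net moment at a joint driven by an antagonistic actuator pair,
# assuming equal and opposite moment arms.

def net_moment(f_flexor_n: float, f_extensor_n: float, arm_m: float = 0.02) -> float:
    return (f_flexor_n - f_extensor_n) * arm_m

# Same joint angle (zero net moment) at two very different stiffness levels:
print(net_moment(10.0, 10.0))  # → 0.0  (compliant: low co-contraction)
print(net_moment(60.0, 60.0))  # → 0.0  (stiff: high co-contraction)
```

Raising both forces together leaves the angle unchanged but means an external perturbation meets stiffer opposition, which is exactly the position/compliance decoupling described in the text.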
The greatest impediment to widespread use of BPAs has been their relatively short fatigue life. Under the operating conditions we require, these devices as originally designed are capable of a service life on the order of 10,000 cycles. A significant improvement to these devices has been made by the Festo Corporation, which has recently introduced a new product called fluidic muscle. This operates on the same principles as a standard BPA, with the critical difference being that the fiber mesh is impregnated inside the expandable bladder. The resulting actuators have a demonstrated fatigue life on the order of 10,000,000 cycles at high pressure.

3. Previous Robots. Two previous robots developed at CWRU have provided significant insight and impetus for the design of Robot V. Both of these cockroach-based robots are non-autonomous, and rely on off-board controllers and power supplies for operation. Robot III was the first pneumatically powered robot built at CWRU, and relied on conventional pneumatic cylinders for actuation. This 15-kilogram robot was powerful, and was demonstrated to be capable of easily lifting payloads equivalent to its own weight. The fundamental failing of this robot was the difficulty inherent in the control of the pneumatic cylinders; although capable of maintaining stance robustly and cycling its legs in a cockroach-like manner, to date this robot has not demonstrated smooth locomotion [12]. Kinematically similar to its predecessor, Robot IV implemented braided pneumatic actuators in place of Robot III’s pneumatic cylinders. This robot was underpowered; it was barely able to lift itself, and the valves were moved off-board for walking experiments. However, this robot was significantly easier to control, in large part because the valves allowed air to be trapped inside the actuators, so that joint stiffness could be varied as well as joint position. Using an open-loop controller, this robot was able to locomote [13].

4. Overview of Robot V Design. Case Western Reserve University’s most recent robot, Robot V (Ajax), like its predecessors Robot IV and Robot III, is based on the death's head cockroach "Blaberus discoidalis". Although it is not feasible to capture the full range of motion exhibited by the insect—up to seven degrees of freedom per leg—analysis of leg motion during locomotion suggests that this is not necessary. This is because in many cases joints demonstrate only a small range of motion, while the majority of a leg’s movement is produced by a few joints. We have determined that three joints in the rear legs, four in the middle legs, and five in the front legs are sufficient to produce reasonable and robust walking [7] [14]. The different number of DOF in each set of limbs reflects the task-oriented nature of each pair of legs. On the insect, the front legs are relatively small and weak, but highly dexterous (Figure 2), and are thus able to effectively manipulate objects or navigate difficult terrain. This dexterity is attained in the robot through three joints between the body and coxa. These joints are referred to (from most proximal to most distal) as γ, with an axis parallel to the intersection of the median and coronal planes (in the z direction); β, with an axis parallel to the median and transverse planes (in the y direction); and α, with an axis parallel to the coronal and transverse planes (in the x direction).

Figure 2: Schematic of front leg with axes of joint rotation

The two remaining joints are between the coxa and femur and the femur and tibia. The middle legs on the insect play an important role in weight support, and are critical for turning and climbing (rearing) functions; however, they sacrifice some dexterity for power. On Robot V, the middle legs have only two degrees of freedom—α and β—between the body and coxa, and retain the single joint between the coxa and femur and the femur and tibia.
Finally, the cockroach uses its rear legs primarily for locomotion, and although these limbs are not as agile as the others, they are larger and much more powerful; likewise, the rear legs of the robot have only one joint between each of the segments, and the body-coxa articulation uses only the β joint. Although each leg has a unique design, one component they have in common is the tarsus, or foot, construction. This consists of a compliant member attached to the end of the tibia and a pair of claws. The compliant element is capable of bending to maintain contact with the ground, thus providing traction. The claws are angled differently on each leg to assist in its specific task; for example, the claws on the rear leg are angled backwards like spines, giving the foot additional traction when propelling the robot forward.

4.1 Valves. Each joint is driven by two opposing sets of actuators, allowing for controlled motion in both directions (previous robots have used a single actuator set paired with a spring) [15]. Each actuator set is driven by two two-way valves: one for air inlet and one for air exhaust. This scheme doubles the number, and thus the weight, of valves as compared to Robot III; however, it allows for the implementation of stiffness control, or co-contraction. Because the pressures in opposing actuators can be independently varied, the same joint angle can be achieved using different combinations of actuator pressures; all that is required is that the moments on a given joint sum to zero at the desired position. As a result, a joint can be made very stiff by pressurizing both sets of actuators, or very compliant, by pressurizing one actuator only enough to overcome the mass properties of the limb to reach a desired position.

4.2 Stance bias. The actuators onboard a legged robot can generally be subdivided into two classes: those used to move the limb through the swing phase and those required to maintain stance and generate locomotion.
One of the fundamental differences between these two types of actuators is the load that is required of them. The swing actuators need only provide the force necessary to overcome the weight and inertia of the limb, whereas the stance actuators must support not only a significant portion of the entire mass of the robot, but also provide the force necessary for locomotion. This disparity between operational demands could in principle be met with large, powerful stance actuators and small swing actuators (as can be seen in the human body, with the powerful quadriceps muscles that maintain stance and the comparatively weaker hamstring muscles used for swing); however, because of limited options for robot actuator sizes, it is more often the case that the swing actuators are overpowered, whereas the stance actuators are either underpowered or just capable of meeting the demands placed on them. On Robot V this problem was resolved through the placement of torsion springs at some critical load-bearing joints (specifically the coxa-femur and β joints) to provide a bias in the direction of stance. As a result, the forces required of the stance actuators are significantly reduced, while the swing actuators must produce greater forces, but still remain within their operational range.

5. Initial Trials. Robot V, like Robot IV, was designed as an exoskeleton, with the structural members placed outside and around the actuators. Not only did this allow a significant reduction in weight, but it also provided limited protection for the actuators, which are susceptible to puncture and abrasion (Figure 3). The vast majority of the structural elements were made of 6061-T6 aluminum, although axles and actuator mounting shafts were made of 1018 steel, and fasteners were made from stainless steel. All joint axles were mounted in nylon journal bearings.

Figure 3: Robot V (Ajax)

Whenever possible, actuators were directly mounted to both their insertion and their origin.
This precluded the need for tendons, allowing the maximum possible length of actuator to be used. This in turn maximized the force and stroke available for each individual joint. The notable exception to this strategy was the β actuators, which were attached to a tendon and mounted parallel to the body. This was done to reduce the overall height of the robot. The first legs to be built were the middle legs. These were chosen for initial tests because they must be both dexterous and forceful to maintain stance in a tripod gait. After completion of the first leg, the range of motion (ROM) of each of its joints was measured and compared to the design values. These data are summarized below:

Joint               ROM   Desired ROM
β                   20°   30°
α                   25°   40°
c-f (coxa-femur)    40°   50°
f-t (femur-tibia)   75°   75°

These tests were performed at both 5.5 and 6.25 bar, with no significant difference between the results, suggesting that at these pressures the actuators had reached their full contraction. Although the desired ROMs were not reached, the measured ROMs are in excess of those demonstrated by the animal, and were deemed sufficient for walking and climbing. A gantry was constructed to support the middle legs for preliminary stance and motion tests. With only horizontal support—to prevent tipping—the legs were able to maintain stance while supporting their own weight (three kilograms) plus the weight of the valves for the actuators (half a kilogram) and a gantry element (one kilogram) without any pressurized air in the actuators. This capability, a result of the aforementioned stance bias, clearly demonstrates the ability of these legs to support not only the weight of the robot, but a significant payload as well. An open-loop controller was then used to cycle the legs through “push-ups”, raising the body from a minimum to a maximum height. In this fashion, the legs were able to lift the body approximately 6 cm.
This process was repeated with additional payloads (beyond valve and gantry weight) of two and a half and five kilograms using 6 bar air. In both cases, the legs were able to attain the same height.

6. Ajax. Fully assembled—including valves—Robot V weighs 15 kilograms. Range of motion tests have been performed for all joints, and are summarized below. In many cases, specifically the femur-tibia joints of all legs, these ranges of motion are in excess of the desired ROM. In all cases, they are sufficient for walking and climbing.

Joint        ROM
Front Leg:   γ 35°, β 45°, α 25°, c-f 40°, f-t 75°
Middle Leg:  β 20°, α 25°, c-f 40°, f-t 75°
Rear Leg:    β 25°, c-f 50°, f-t 80°

Ajax demonstrates a propensity to stand due to the preloads placed on the torsion springs; even without pressure in the actuators, the middle and rear legs maintain a near-stance position.

Figure 4: Robot V without activated actuators (top) and standing (bottom). Note that even when the actuators are un-pressurized, they maintain a near-stance position, with only the feet contacting the surface.

Initial tests of the robot have shown that it is capable of supporting its weight in a standing position and of achieving stance both unloaded and with a five-kilogram payload (Figure 4). Further tests have shown that the robot is able to achieve a tripod stance and alternate between tripods, which is important for walking. These tasks were achieved using a simple open-loop controller. Furthermore, the passive properties of the BPAs are clearly highlighted in the robot's ability to return to its desired position after suffering perturbations, without the use of any form of active posture control. Using a feed-forward controller with absolutely no feedback, the robot can produce reasonable forward locomotion. Although this is by no means the robust, agile walking that is the ultimate goal of this project, it is a clear demonstration not only of the robot's capabilities, but also of the advantages offered by the BPAs.
The ability to move using only an open-loop controller is in large part a result of the passive properties of the actuators, which provide compensation for any instabilities in the controller itself and immediate response to perturbations without the need for controller intervention. This can be contrasted with Robot III, which, even with kinematic and force feedback, was not able to walk. This failure of Robot III is attributed to the inability of both the pneumatic cylinders and the posture controller to deal with the sudden changes in load associated with locomotion. In short, the BPAs act as filters, providing an immediate response to perturbation; a task the controller is incapable of. This same process occurs in biological muscle, which responds nearly instantaneously to perturbation, but only slowly to neurological input [16]. With the addition of a biologically inspired closed-loop controller in the future, Ajax is expected to display robust, insect-like locomotion.

7. Future Work. Although the mechanical aspects of this robot have been completed, the control system is still in its infancy. Because the mechanics of a system are inextricably linked to its control circuits, Ajax's controller is expected to benefit from the close relationship between its design and that of the actual insect. This relationship is perhaps most prominent in the muscle-like nature of the braided pneumatic actuators. Sensors will be added to provide not only joint position feedback, but force feedback as well. Joint angle can easily be determined with a potentiometer, as has been done on our previous robots. Force feedback will be attained through pressure measurements from the actuators, which, given actuator length, can be used to determine actuator force.
Although strain gauges properly placed on the mounting elements of the actuators can produce sufficient force feedback, previous work has shown many desirable characteristics inherent in pressure transducers: they have much cleaner signals, do not require amplifiers, and do not exhibit cross-talk; all of which are disadvantages of strain gauges. In addition, strain gauges must be mounted directly adjacent to the actuator they are recording from; this requires more weight at distal points of the limb (thus increasing the limb's moment of inertia) and generally reduces the usable stroke of the actuator. We have demonstrated that a pressure transducer located down-line from an actuator produces a sufficient signal to determine actuator force. An insect-inspired controller was developed for Robot III, and this will be modified for use on Robot V. It is a distributed hierarchical control system. The local-to-central progression includes circuits that control joint position and stiffness, inter-leg coordination and reflexes, intra-leg gait coordination, and body motion. The inter-leg coordination circuit solves the inverse kinematics problem for the legs, and the centralized posture control system solves the force distribution problem.
Geometry/Circles/Radii, Chords and Diameters. A circle is the set of all points in a plane that are equidistant from a single point; that point is called the centre of the circle, and the distance between any point on the circle and the centre is called the radius of the circle. Chord: a chord is a segment of a circle that has both of its endpoints on the circumference of the circle. Secant: a secant of a circle is any line that intersects the circle in two places. Tangent: a tangent to a circle is a line that intersects the circle in exactly one point, called the point of tangency.
Personal Finance/Assets. Personal assets definition. Personal assets, in simple terms, are things (sometimes intangible things, such as web addresses or copyrights) that possess value. Additionally, an important feature of many assets is that they can generate personal income. Types of assets. Personal assets can be broken down into different classes:
Guitar/Stringing the Guitar. Aside from the physical shape of the guitar body, strings are the most important thing for determining the sound of a guitar. New strings sound bright and full, while old strings tend to sound dull and dead. Many guitarists believe that strings should be changed regularly, not just when they break. This is because sweat and dirt corrode the strings, and over time this degrades their sound quality. Other guitarists believe that new strings sound much worse than old ones, feeling that a string's tonal quality only improves over time. Individual string quality may vary drastically from string to string. When one breaks a string, all of the strings should be changed at once. This is especially true if the replacement string would be of a different brand or gauge. The string's manufacturing process, thickness and age all affect its tone, and one new string being played with a bunch of old strings can make your guitar sound strange. Players should be advised that guitars are usually set up for a particular gauge of string. The guitar will still function fine with a different gauge of strings; however, for optimal sound, the guitar may need to be adjusted. See the chapter on adjusting the guitar for more details. Because there are several different types of guitar, and each type is designed differently, each type has its own method of stringing. The type of strings you use mostly depends on what style of music you play and how long you've been playing. Thinner strings are generally preferred by beginners, and many experienced players also prefer the feel of thin strings over thicker ones. Please see the guitar accessories section for details on different types of strings. The first thing you always need to do when stringing a guitar is to take off the old strings. You should "never" just cut the strings of a tuned guitar in half, because the sudden release of tension on the neck can damage the guitar. 
Instead, always turn the tuning pegs to decrease tension, until the string is so loose that it doesn't produce a note when struck, then cut or unwind them. In most cases, the string is bent at the end where it was inserted, to ensure that it stays in place during tuning. Unbend the string, then pull it out of the peg hole. If the peg end of the string is too bent or curled from the winding, cut the string on a straight part of the string. This will make it easier to remove the string from the hole at the other end and reduce the risk of scratching the body or the bridge while trying to get it out. Slide the string out of the bridge at the bottom end of the guitar. Some people string one at a time to make sure the neck sustains tension, while others take all of the strings off at the same time. Stringing Acoustics. Standard guitars typically have a ball-head peg at the bridge section. This peg has a hollow shaft, with a groove that allows the string to come out from the peg. Typically, the process is as follows: Unwinding the string Attaching the string Twelve String Acoustic. A twelve-string is strung on the same principle as a six-string, but the strings are arranged in pairs, with each pair tuned to the same note, one octave apart. Classical Guitar. To unstring a classical guitar one method is: To string a classical guitar, one method is simply the reverse of the unstringing. Stringing Electrics. For the 6th string (the low E), take the string out of the package and insert the end through the bridge of the guitar. Pull it all the way through until the ball at the end of the string stops it from being pulled further. This is optional: Make a kink in the string to ensure that it will not slip away from the turning of the peg (usually about one or two inches from the peg). Wind the string around halfway and insert the end through the hole. Pull the string to add tension, so the string will stay around the peg during tuning. 
Turn the tuning peg to increase tension until the string is around the desired pitch, to make certain it will stay on properly. Check that the string is in the notch in the nut and the bridge; if it is not, decrease tension on the string until you can move it into the notch, then tune it back up. Do this for the rest of your strings and you are done! Another method: String the low E and other strings as mentioned. Align the tuning peg's hole with the direction of the string and slip it through the peg in the direction of the headstock. Facing the guitar with the headstock to your right, pull the string taut with your left hand. With your opposite thumb and forefinger, twist the string in an "s" at the twelfth fret so that it touches both sides of the twelfth fret. You will have to let some of the string out to do this. This method tells you the optimum length of the string to wind around the tuning peg. Hold the string with your right hand below the tuning peg so that the pointy end is sticking out the other side. Slowly tighten the peg so that the string is winding on the INSIDE of the headstock -- inside right for E A D, and inside left for G B E. Allow the string to wind once underneath itself, and then wrap it over top of itself the rest of the way. Make sure you hold tight as you go so that there is little slippage later. If possible, hold the string with your right thumb and middle finger while regulating the pressure on the string with your right index finger.
C Programming/String manipulation. A string in C is merely an array of characters. The length of a string is determined by a terminating null character: '\0'. So, a string with the contents, say, "abc" has four characters: 'a', 'b', 'c', and the terminating null ('\0') character. The terminating null character has the value zero. Syntax. In C, string constants (literals) are surrounded by double quotes ("), e.g. "Hello world!", and are compiled to an array of the specified char values with an additional null terminating (0-valued) character to mark the end of the string. The type of a string constant is char []. Backslash escapes. String literals may not directly contain embedded newlines or other control characters in the source code, nor certain other characters of special meaning in a string. To include such characters in a string, backslash escapes may be used, like this: "first line\nsecond line", where \n denotes a newline; other common escapes include \t (tab), \" (double quote), and \\ (backslash). Wide character strings. C supports wide character strings, defined as arrays of the type wchar_t, 16-bit (at least) values. They are written with an L before the string, like this: L"Hello". This feature allows strings where more than 256 different possible characters are needed (although variable-length char strings can also be used). They end with a zero-valued wchar_t. These strings are not supported by the <string.h> functions. Instead they have their own functions, declared in <wchar.h>. Character encodings. What character encoding char and wchar_t represent is not specified by the C standard, except that the values 0x00 and 0x0000 specify the end of the string and not a character. It is the input and output code which is directly affected by the character encoding; other code should not be much affected. The editor should also be able to handle the encoding if such strings are to be written in the source code. There are three major types of encodings: The <string.h> Standard Header. 
Because programmers find raw strings cumbersome to deal with, they wrote the code in the <string.h> library. It represents not a concerted design effort but rather the accretion of contributions made by various authors over a span of years. First, three types of functions exist in the string library: The more commonly-used string functions. The nine most commonly used functions in the string library are: Other functions, such as strlwr (convert to lower case), strrev (return the string reversed), and strupr (convert to upper case) may be popular; however, they are specified by neither the C Standard nor the Single Unix Standard. It is also unspecified whether these functions return copies of the original strings or convert the strings in place. The strcat function. char *strcat(char * restrict s1, const char * restrict s2); "Some people recommend using" strncat "or" strlcat "instead of strcat, in order to avoid buffer overflow." The strcat function shall append a copy of the string pointed to by s2 (including the terminating null byte) to the end of the string pointed to by s1. The initial byte of s2 overwrites the null byte at the end of s1. If copying takes place between objects that overlap, the behavior is undefined. The function returns s1. This function is used to attach one string to the end of another string. It is imperative that the first string (s1) have the space needed to store both strings. 
Example:
#include <stdio.h>
#include <string.h>

static const char *colors[] = {"Red","Orange","Yellow","Green","Blue","Purple"};
static const char *widths[] = {"Thin","Medium","Thick","Bold"};

char penText[20];

int penColor = 3, penThickness = 2;
strcpy(penText, colors[penColor]);
strcat(penText, widths[penThickness]);
printf("My pen is %s\n", penText);   /* prints 'My pen is GreenThick' */
Before calling strcat, the destination must currently contain a null-terminated string, or the first character must have been initialized with the null character (e.g. penText[0] = '\0';). The following is a public-domain implementation of strcat:
#include <string.h>
/* strcat */
char *(strcat)(char *restrict s1, const char *restrict s2)
{
    char *s = s1;
    /* Move s so that it points to the end of s1. */
    while (*s != '\0')
        s++;
    /* Copy the contents of s2 into the space at the end of s1. */
    strcpy(s, s2);
    return s1;
}
The strchr function. char *strchr(const char *s, int c); The strchr function shall locate the first occurrence of c (converted to a char) in the string pointed to by s. The terminating null byte is considered to be part of the string. The function returns the location of the found character, or a null pointer if the character was not found. This function is used to find certain characters in strings. At one point in history, this function was named index. The strchr name, however cryptic, fits the general pattern for naming. The following is a public-domain implementation of strchr:
#include <string.h>
/* strchr */
char *(strchr)(const char *s, int c)
{
    char ch = c;
    /* Scan s for the character.  When this loop is finished,
       s will either point to the end of the string or the
       character we were looking for. */
    while (*s != '\0' && *s != ch)
        s++;
    return (*s == ch) ? (char *) s : NULL;
}
The strcmp function. int strcmp(const char *s1, const char *s2); A rudimentary form of string comparison is done with the strcmp() function. 
It takes two strings as arguments and returns a value less than zero if the first is lexicographically less than the second, a value greater than zero if the first is lexicographically greater than the second, or zero if the two strings are equal. The comparison is done by comparing the coded (ASCII) values of the characters, character by character. This simple type of string comparison is nowadays generally considered unacceptable when sorting lists of strings. More advanced algorithms exist that are capable of producing lists in dictionary-sorted order. They can also fix problems such as strcmp() considering the string "Alpha2" greater than "Alpha12". ("Alpha2" compares greater than "Alpha12" because '2' comes after '1' in the character set.) In short, don't use strcmp alone for general string sorting in any commercial or professional code. The strcmp function shall compare the string pointed to by s1 to the string pointed to by s2. The sign of a non-zero return value shall be determined by the sign of the difference between the values of the first pair of bytes (both interpreted as type unsigned char) that differ in the strings being compared. Upon completion, strcmp shall return an integer greater than, equal to, or less than 0, according to whether the string pointed to by s1 is greater than, equal to, or less than the string pointed to by s2. Since comparing pointers by themselves is not practically useful unless one is comparing pointers within the same array, this function lexically compares the strings that two pointers point to. This function is useful in comparisons, e.g.
if (strcmp(s, "whatever") == 0) /* do something */
The collating sequence used by strcmp is equivalent to the machine's native character set. The only guarantee about the order is that the digits from '0' to '9' are in consecutive order. 
The following is a public-domain implementation of strcmp:
#include <string.h>
/* strcmp */
int (strcmp)(const char *s1, const char *s2)
{
    unsigned char uc1, uc2;
    /* Move s1 and s2 to the first differing characters
       in each string, or the ends of the strings if they
       are identical. */
    while (*s1 != '\0' && *s1 == *s2) {
        s1++;
        s2++;
    }
    /* Compare the characters as unsigned char and
       return the difference. */
    uc1 = (*(unsigned char *) s1);
    uc2 = (*(unsigned char *) s2);
    return ((uc1 < uc2) ? -1 : (uc1 > uc2));
}
The strcpy function. char *strcpy(char *restrict s1, const char *restrict s2); "Some people recommend always using" strncpy "instead of strcpy, to avoid buffer overflow." The strcpy function shall copy the C string pointed to by s2 (including the terminating null byte) into the array pointed to by s1. If copying takes place between objects that overlap, the behavior is undefined. The function returns s1. There is no value used to indicate an error: if the arguments to strcpy are correct, and the destination buffer is large enough, the function will never fail. Example:
#include <stdio.h>
#include <string.h>

static const char *penType = "round";

char penText[20];

strcpy(penText, penType);
Important: You must ensure that the destination buffer (s1) is able to contain all the characters in the source array, including the terminating null byte. Otherwise, strcpy will overwrite memory past the end of the buffer, causing a buffer overflow, which can cause the program to crash, or can be exploited by hackers to compromise the security of the computer. The following is a public-domain implementation of strcpy:
#include <string.h>
/* strcpy */
char *(strcpy)(char *restrict s1, const char *restrict s2)
{
    char *dst = s1;
    const char *src = s2;
    /* Do the copying in a loop. */
    while ((*dst++ = *src++) != '\0')
        ;    /* The body of this loop is left empty. */
    /* Return the destination string. */
    return s1;
}
The strlen function. 
size_t strlen(const char *s); The strlen function shall compute the number of bytes in the string to which s points, not including the terminating null byte. It returns the number of bytes in the string. No value is used to indicate an error. The following is a public-domain implementation of strlen:
#include <string.h>
/* strlen */
size_t (strlen)(const char *s)
{
    const char *p = s;    /* pointer to character constant */
    /* Loop over the data in s. */
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}
Note how the line const char *p = s; declares and initializes a pointer p to a character constant, i.e. p cannot change the value it points to. The strncat function. char *strncat(char *restrict s1, const char *restrict s2, size_t n); The strncat function shall append not more than n bytes (a null byte and bytes that follow it are not appended) from the array pointed to by s2 to the end of the string pointed to by s1. The initial byte of s2 overwrites the null byte at the end of s1. A terminating null byte is always appended to the result. If copying takes place between objects that overlap, the behavior is undefined. The function returns s1. The following is a public-domain implementation of strncat:
#include <string.h>
/* strncat */
char *(strncat)(char *restrict s1, const char *restrict s2, size_t n)
{
    char *s = s1;
    /* Loop over the data in s1. */
    while (*s != '\0')
        s++;
    /* s now points to s1's trailing null character; now copy up to
       n bytes from s2 into s, stopping if a null character is
       encountered in s2.  It is not safe to use strncpy here since
       it copies EXACTLY n characters, null-padding if necessary. */
    while (n != 0 && (*s = *s2++) != '\0') {
        n--;
        s++;
    }
    if (*s != '\0')
        *s = '\0';
    return s1;
}
The strncmp function. 
int strncmp(const char *s1, const char *s2, size_t n); The strncmp function shall compare not more than n bytes (bytes that follow a null byte are not compared) from the array pointed to by s1 to the array pointed to by s2. The sign of a non-zero return value is determined by the sign of the difference between the values of the first pair of bytes (both interpreted as type unsigned char) that differ in the strings being compared. See strcmp for an explanation of the return value. This function is useful in comparisons, as the strcmp function is. The following is a public-domain implementation of strncmp:
#include <string.h>
/* strncmp */
int (strncmp)(const char *s1, const char *s2, size_t n)
{
    unsigned char uc1, uc2;
    /* Nothing to compare?  Return zero. */
    if (n == 0)
        return 0;
    /* Loop, comparing bytes. */
    while (n-- > 0 && *s1 == *s2) {
        /* If we've run out of bytes or hit a null, return zero
           since we already know *s1 == *s2. */
        if (n == 0 || *s1 == '\0')
            return 0;
        s1++;
        s2++;
    }
    uc1 = (*(unsigned char *) s1);
    uc2 = (*(unsigned char *) s2);
    return ((uc1 < uc2) ? -1 : (uc1 > uc2));
}
The strncpy function. char *strncpy(char *restrict s1, const char *restrict s2, size_t n); The strncpy function shall copy not more than n bytes (bytes that follow a null byte are not copied) from the array pointed to by s2 to the array pointed to by s1. If copying takes place between objects that overlap, the behavior is undefined. If the array pointed to by s2 is a string that is shorter than n bytes, null bytes shall be appended to the copy in the array pointed to by s1, until n bytes in all are written. The function shall return s1; no return value is reserved to indicate an error. It is possible that the function will not return a null-terminated string, which happens if the s2 string is longer than n bytes. 
The following is a public-domain version of strncpy:
#include <string.h>
/* strncpy */
char *(strncpy)(char *restrict s1, const char *restrict s2, size_t n)
{
    char *dst = s1;
    const char *src = s2;
    /* Copy bytes, one at a time. */
    while (n > 0) {
        n--;
        if ((*dst++ = *src++) == '\0') {
            /* If we get here, we found a null character at the
               end of s2, so use memset to put null bytes at the
               end of s1. */
            memset(dst, '\0', n);
            break;
        }
    }
    return s1;
}
The strrchr function. char *strrchr(const char *s, int c); The strrchr function is similar to the strchr function, except that strrchr returns a pointer to the last occurrence of c within s instead of the first. The strrchr function shall locate the last occurrence of c (converted to a char) in the string pointed to by s. The terminating null byte is considered to be part of the string. Its return value is similar to strchr's return value. At one point in history, this function was named rindex. The strrchr name, however cryptic, fits the general pattern for naming. The following is a public-domain implementation of strrchr:
#include <string.h>
/* strrchr */
char *(strrchr)(const char *s, int c)
{
    const char *last = NULL;
    /* If the character we're looking for is the terminating null,
       we just need to look for that character as there's only one
       of them in the string. */
    if (c == '\0')
        return strchr(s, c);
    /* Loop through, finding the last match before hitting NULL. */
    while ((s = strchr(s, c)) != NULL) {
        last = s;
        s++;
    }
    return (char *) last;
}
The less commonly-used string functions. The less-used functions are: Copying functions. The memcpy function. void *memcpy(void * restrict s1, const void * restrict s2, size_t n); The memcpy function shall copy n bytes from the object pointed to by s2 into the object pointed to by s1. If copying takes place between objects that overlap, the behavior is undefined. The function returns s1. 
Because the function does not have to worry about overlap, it can do the simplest copy it can. The following is a public-domain implementation of memcpy:
#include <string.h>
/* memcpy */
void *(memcpy)(void * restrict s1, const void * restrict s2, size_t n)
{
    char *dst = s1;
    const char *src = s2;
    /* Loop and copy. */
    while (n-- != 0)
        *dst++ = *src++;
    return s1;
}
The memmove function. void *memmove(void *s1, const void *s2, size_t n); The memmove function shall copy n bytes from the object pointed to by s2 into the object pointed to by s1. Copying takes place as if the n bytes from the object pointed to by s2 are first copied into a temporary array of n bytes that does not overlap the objects pointed to by s1 and s2, and then the n bytes from the temporary array are copied into the object pointed to by s1. The function returns the value of s1. The easy way to implement this without using a temporary array is to check for a condition that would prevent an ascending copy, and if found, do a descending copy. The following is a public-domain, though not completely portable, implementation of memmove:
#include <string.h>
/* memmove */
void *(memmove)(void *s1, const void *s2, size_t n)
{
    /* note: these don't have to point to unsigned chars */
    char *p1 = s1;
    const char *p2 = s2;
    /* test for overlap that prevents an ascending copy */
    if (p2 < p1 && p1 < p2 + n) {
        /* do a descending copy */
        p2 += n;
        p1 += n;
        while (n-- != 0)
            *--p1 = *--p2;
    } else {
        while (n-- != 0)
            *p1++ = *p2++;
    }
    return s1;
}
Comparison functions. The memcmp function. int memcmp(const void *s1, const void *s2, size_t n); The memcmp function shall compare the first n bytes (each interpreted as unsigned char) of the object pointed to by s1 to the first n bytes of the object pointed to by s2. 
The sign of a non-zero return value shall be determined by the sign of the difference between the values of the first pair of bytes (both interpreted as type unsigned char) that differ in the objects being compared. The following is a public-domain implementation of memcmp:
#include <string.h>
/* memcmp */
int (memcmp)(const void *s1, const void *s2, size_t n)
{
    const unsigned char *us1 = (const unsigned char *) s1;
    const unsigned char *us2 = (const unsigned char *) s2;
    while (n-- != 0) {
        if (*us1 != *us2)
            return (*us1 < *us2) ? -1 : +1;
        us1++;
        us2++;
    }
    return 0;
}
The strcoll and strxfrm functions. int strcoll(const char *s1, const char *s2); size_t strxfrm(char *restrict s1, const char *restrict s2, size_t n); The ANSI C Standard specifies two locale-specific comparison functions. The strcoll function compares the string pointed to by s1 to the string pointed to by s2, both interpreted as appropriate to the LC_COLLATE category of the current locale. The return value is similar to strcmp. The strxfrm function transforms the string pointed to by s2 and places the resulting string into the array pointed to by s1. The transformation is such that if the strcmp function is applied to the two transformed strings, it returns a value greater than, equal to, or less than zero, corresponding to the result of the strcoll function applied to the same two original strings. No more than n characters are placed into the resulting array pointed to by s1, including the terminating null character. If n is zero, s1 is permitted to be a null pointer. If copying takes place between objects that overlap, the behavior is undefined. The function returns the length of the transformed string. These functions are rarely used and nontrivial to code, so there is no code for this section. Search functions. The memchr function. 
void *memchr(const void *s, int c, size_t n); The memchr function shall locate the first occurrence of c (converted to an unsigned char) in the initial n bytes (each interpreted as unsigned char) of the object pointed to by s. If c is not found, memchr returns a null pointer. The following is a public-domain implementation of memchr:
#include <string.h>
/* memchr */
void *(memchr)(const void *s, int c, size_t n)
{
    const unsigned char *src = s;
    unsigned char uc = c;
    while (n-- != 0) {
        if (*src == uc)
            return (void *) src;
        src++;
    }
    return NULL;
}
The strcspn, strpbrk, and strspn functions. size_t strcspn(const char *s1, const char *s2); char *strpbrk(const char *s1, const char *s2); size_t strspn(const char *s1, const char *s2); The strcspn function computes the length of the maximum initial segment of the string pointed to by s1 which consists entirely of characters not from the string pointed to by s2. The strpbrk function locates the first occurrence in the string pointed to by s1 of any character from the string pointed to by s2, returning a pointer to that character or a null pointer if not found. The strspn function computes the length of the maximum initial segment of the string pointed to by s1 which consists entirely of characters from the string pointed to by s2. All of these functions are similar except in the test and the return value. 
The following are public-domain implementations of strcspn, strpbrk, and strspn:
#include <string.h>
/* strcspn */
size_t (strcspn)(const char *s1, const char *s2)
{
    const char *sc1;
    for (sc1 = s1; *sc1 != '\0'; sc1++)
        if (strchr(s2, *sc1) != NULL)
            return (sc1 - s1);
    return sc1 - s1;            /* terminating nulls match */
}

#include <string.h>
/* strpbrk */
char *(strpbrk)(const char *s1, const char *s2)
{
    const char *sc1;
    for (sc1 = s1; *sc1 != '\0'; sc1++)
        if (strchr(s2, *sc1) != NULL)
            return (char *) sc1;
    return NULL;                /* terminating nulls match */
}

#include <string.h>
/* strspn */
size_t (strspn)(const char *s1, const char *s2)
{
    const char *sc1;
    for (sc1 = s1; *sc1 != '\0'; sc1++)
        if (strchr(s2, *sc1) == NULL)
            return (sc1 - s1);
    return sc1 - s1;            /* terminating nulls don't match */
}
The strstr function. char *strstr(const char *haystack, const char *needle); The strstr function shall locate the first occurrence in the string pointed to by haystack of the sequence of bytes (excluding the terminating null byte) in the string pointed to by needle. The function returns a pointer to the matching string in haystack, or a null pointer if a match is not found. If needle is an empty string, the function returns haystack. The following is a public-domain implementation of strstr:
#include <string.h>
/* strstr */
char *(strstr)(const char *haystack, const char *needle)
{
    size_t needlelen;
    /* Check for the null needle case. */
    if (*needle == '\0')
        return (char *) haystack;
    needlelen = strlen(needle);
    for (; (haystack = strchr(haystack, *needle)) != NULL; haystack++)
        if (memcmp(haystack, needle, needlelen) == 0)
            return (char *) haystack;
    return NULL;
}
The strtok function. char *strtok(char *restrict s1, const char *restrict delimiters); A sequence of calls to strtok breaks the string pointed to by s1 into a sequence of tokens, each of which is delimited by a byte from the string pointed to by delimiters. 
The first call in the sequence has s1 as its first argument, and is followed by calls with a null pointer as their first argument. The separator string pointed to by delimiters may be different from call to call. The first call in the sequence searches the string pointed to by s1 for the first byte that is not contained in the current separator string pointed to by delimiters. If no such byte is found, then there are no tokens in the string pointed to by s1 and strtok shall return a null pointer. If such a byte is found, it is the start of the first token. The strtok function then searches from there for a byte (or multiple, consecutive bytes) that is contained in the current separator string. If no such byte is found, the current token extends to the end of the string pointed to by s1, and subsequent searches for a token shall return a null pointer. If such a byte is found, it is overwritten by a null byte, which terminates the current token. The strtok function saves a pointer to the following byte, from which the next search for a token shall start. Each subsequent call, with a null pointer as the value of the first argument, starts searching from the saved pointer and behaves as described above. The strtok function need not be reentrant; a function that is not required to be reentrant is not required to be thread-safe. Because the strtok function must save state between calls, and you could not have two tokenizers going at the same time, the Single Unix Standard defined a similar function, strtok_r, that does not need to save state. Its prototype is this: char *strtok_r(char *s, const char *delimiters, char **lasts); The strtok_r function considers the null-terminated string s as a sequence of zero or more text tokens separated by spans of one or more characters from the separator string delimiters. The argument lasts points to a user-provided pointer which points to stored information necessary for strtok_r to continue scanning the same string. 
In the first call to strtok_r, s points to a null-terminated string, delimiters to a null-terminated string of separator characters, and the value pointed to by lasts is ignored. The strtok_r function shall return a pointer to the first character of the first token, write a null character into s immediately following the returned token, and update the pointer to which lasts points. In subsequent calls, s is a null pointer and lasts shall be unchanged from the previous call so that subsequent calls shall move through the string s, returning successive tokens until no tokens remain. The separator string delimiters may be different from call to call. When no token remains in s, a NULL pointer shall be returned. The following public-domain code for strtok and strtok_r codes the former as a special case of the latter:
#include <string.h>
/* strtok_r */
char *(strtok_r)(char *s, const char *delimiters, char **lasts)
{
    char *sbegin, *send;
    sbegin = s ? s : *lasts;
    sbegin += strspn(sbegin, delimiters);
    if (*sbegin == '\0') {
        *lasts = "";
        return NULL;
    }
    send = sbegin + strcspn(sbegin, delimiters);
    if (*send != '\0')
        *send++ = '\0';
    *lasts = send;
    return sbegin;
}

/* strtok */
char *(strtok)(char *restrict s1, const char *restrict delimiters)
{
    static char *ssave = "";
    return strtok_r(s1, delimiters, &ssave);
}
Miscellaneous functions. These functions do not fit into one of the above categories. The memset function. void *memset(void *s, int c, size_t n); The memset function converts c into unsigned char, then stores the character into the first n bytes of memory pointed to by s. The following is a public-domain implementation of memset:
#include <string.h>
/* memset */
void *(memset)(void *s, int c, size_t n)
{
    unsigned char *us = s;
    unsigned char uc = c;
    while (n-- != 0)
        *us++ = uc;
    return s;
}
The strerror function. 
char *strerror(int errorcode);

This function returns a locale-specific error message corresponding to the parameter. Depending on the circumstances, this function could be trivial to implement, but this author will not do that, as the message text varies between implementations. The Single Unix Specification Version 3 has a variant, strerror_r, with this prototype:

int strerror_r(int errnum, char *strerrbuf, size_t buflen);

This function stores the message in strerrbuf, which has a length of size buflen.

Examples. To determine the number of characters in a string, the strlen function is used:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int length, length2;
char *turkey;
static char *flower = "begonia";
static char *gemstone = "ruby ";

int main(void)
{
    length = strlen(flower);
    printf("Length = %d\n", length);  // prints 'Length = 7'
    length2 = strlen(gemstone);
    turkey = malloc(length + length2 + 1);
    if (turkey) {
        strcpy(turkey, gemstone);
        strcat(turkey, flower);
        printf("%s\n", turkey);       // prints 'ruby begonia'
        free(turkey);
    }
    return 0;
}

Note that the amount of memory allocated for 'turkey' is one plus the sum of the lengths of the strings to be concatenated. This is for the terminating null character, which is not counted in the lengths of the strings.
Python Programming/Overview. Python is a high-level, structured, open-source programming language that can be used for a wide variety of programming tasks. Python was created by Guido van Rossum in the early 1990s; its following has grown steadily, and interest in it has increased markedly in the last few years. It is named after Monty Python's Flying Circus comedy program. Python is used extensively for system administration (many vital components of Linux distributions are written in it); it is also a great language for teaching programming to novices. NASA has used Python for its software systems and has adopted it as the standard scripting language for its Integrated Planning System. Python is also used extensively by Google to implement many components of its web crawler and search engine, and by Yahoo! for managing its discussion groups. Python is an interpreted programming language that is automatically compiled into bytecode before execution (the bytecode is then normally saved to disk, just as automatically, so that compilation need not happen again until and unless the source gets changed). It is also a dynamically typed language that includes (but does not require one to use) object-oriented features and constructs. The most unusual aspect of Python is that whitespace is significant; instead of block delimiters (braces → "{}" in the C family of languages), indentation is used to indicate where blocks begin and end. For example, the following Python code can be interactively typed at an interpreter prompt to display the famous "Hello World!" on the user's screen:

>>> print("Hello World!")
Hello World!

Another great feature of Python is its availability for all platforms. Python can run on Microsoft Windows, Macintosh and all Linux distributions with ease. This makes programs very portable, as any program written for one platform can easily be used on another.
Python provides a powerful assortment of built-in types (e.g., lists, dictionaries and strings), a number of built-in functions, and a few constructs, mostly statements: for example, loop constructs that can iterate over items in a collection instead of being limited to a simple range of integer values. Python also comes with a powerful standard library, which includes hundreds of modules providing routines for a wide variety of services, including regular expressions and TCP/IP sessions. Python is used and supported by a large community that exists on the Internet. Mailing lists and newsgroups like the tutor list actively support and help new Python programmers. While they discourage doing homework for you, they are quite helpful and are populated by the authors of many of the Python textbooks currently available on the market. "Python 2 vs. Python 3:" Years ago, the Python developers made the decision to come up with a major new version of Python, which became the 3."x" series of versions. The 3."x" versions are "backward-incompatible" with Python 2."x": certain old features (like the handling of Unicode strings) were deemed to be too unwieldy or broken to be worth carrying forward. Instead, new, cleaner ways of achieving the same results were added. See also the ../Python 2 vs. Python 3/ chapter.
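As a minimal, made-up illustration of the built-in types and collection-based loops mentioned above (the variable names here are invented for the example):

```python
inventory = {"apples": 3, "pears": 5}   # a dictionary, one of Python's built-in types

total = 0
for fruit in inventory:                 # the for loop iterates over the keys directly
    total += inventory[fruit]

print(total)  # 8
```

No index variable or explicit range of integers is needed; the loop walks the collection itself.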
Python Programming/Modules. Modules are a way to structure a program and create reusable libraries. A module is usually stored in, and corresponds to, a separate .py file. Many modules are available from the standard library, and you can create your own modules. Python searches for modules in the current directory and other locations; the list of module search locations can be extended via the PYTHONPATH environment variable and by other means. Importing a Module. To use the functions and classes offered by a module, you have to import the module:

import math
print(math.sqrt(10))

The above imports the math standard module, making all of the functions in that module namespaced by the module name. It imports all functions and all classes, if any. You can import the module under a different name:

import math as Mathematics
print(Mathematics.sqrt(10))

You can import a single function, making it available without the module name namespace:

from math import sqrt
print(sqrt(10))

You can import a single function and make it available under a different name:

from math import cos as cosine
print(cosine(10))

You can import multiple modules in a row:

import os, sys, re

You can make an import as late as in a function definition:

def sqrtTen():
    import math
    print(math.sqrt(10))

Such an import only takes place when the function is called. You can import all functions from the module without the module namespace, using an asterisk notation:

from math import *
print(sqrt(10))

However, if you do this inside a function, you get a warning in Python 2 and an error in Python 3:

def sqrtTen():
    from math import *
    print(sqrt(10))

You can guard against a module not being found:

try:
    import custommodule
except ImportError:
    pass

Modules can be different kinds of things: Modules are loaded in the order they're found, which is controlled by sys.path. The current directory is always on the path.
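The import forms above can be combined into one small runnable sketch (the alias names here are arbitrary):

```python
import math                     # plain import: names are reached through math.*
import math as mathematics      # the same module under a different name
from math import sqrt           # one name, no module prefix needed
from math import cos as cosine  # one name, renamed locally

# All four forms refer to the same underlying module objects
print(math.sqrt(16))            # 4.0
print(mathematics.sqrt(16))     # 4.0
print(sqrt(16))                 # 4.0
print(cosine(0))                # 1.0
```

Whichever form is used, the module is only loaded once; later imports reuse the already-loaded module.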
Directories should include a file in them called __init__.py, which should probably include the other files in the directory. Creating a DLL that interfaces with Python is covered in another section. Imported Check. You can check whether a module has been imported as follows:

if "re" in sys.modules:
    print("Regular expression module is ready for use.")

Links: Creating a Module. From a File. The easiest way to create a module is by having a file called mymod.py either in a directory recognized by the PYTHONPATH variable or (even easier) in the same directory where you are working. If you have the following file mymod.py

class Object1:
    def __init__(self):
        self.name = 'object 1'

you can already import this "module" and create instances of the object Object1.

import mymod
myobject = mymod.Object1()

from mymod import *
myobject = Object1()

From a Directory. It is not feasible for larger projects to keep all classes in a single file. It is often easier to store all files in directories and load all files with one command. Each directory needs to have an __init__.py file which contains Python commands that are executed upon loading the directory. Suppose we have two more objects called Object2 and Object3 and we want to load all three objects with one command. We then create a directory called mymod and we store three files called Object1.py, Object2.py and Object3.py in it. These files would then contain one object per file, but this is not required (although it adds clarity). We would then write the following __init__.py file:

from Object1 import *
from Object2 import *
from Object3 import *
__all__ = ["Object1", "Object2", "Object3"]

The first three commands tell Python what to do when somebody loads the module. The last statement, defining __all__, tells Python what to do when somebody executes "from mymod import *". Usually we want to use parts of a module in other parts of a module; e.g., we want to use Object1 in Object2. We can do this easily with a "from . import *" command, as the following file Object2.py shows:

from . import *

class Object2:
    def __init__(self):
        self.name = 'object 2'
        self.otherObject = Object1()

We can now start Python and import mymod as we have in the previous section. Making a program usable as a module. In order to make a program usable both as a standalone program to be called from a command line and as a module, it is advisable that you place all code in functions and methods, designate one function as the main one, and call the main function when the __name__ built-in equals '__main__'. The purpose of doing so is to make sure that the code you have placed in the main function is not called when your program is imported as a module; the code would be called upon import if it were placed outside of functions and methods. Your program, stored in mymodule.py, can look as follows:

def reusable_function(x, y):
    return x + y

def main():
    pass  # Any code you like

if __name__ == '__main__':
    main()

The uses of the above program can look as follows:

from mymodule import reusable_function
my_result = reusable_function(4, 5)

Links: Extending Module Path. When import is requested, modules are searched in the directories (and zip files?) in the module path, accessible via sys.path, a Python list. The module path can be extended as follows:

import sys
sys.path.append("/My/Path/To/Module/Directory")
from ModuleFileName import my_function

Above, if ModuleFileName.py is located at /My/Path/To/Module/Directory and contains a definition of my_function, the 2nd line ensures the 3rd line actually works. Links: Module Names. Module names seem to be limited to alphanumeric characters and underscore; a dash cannot be used. While my-module.py can be created and run, importing my-module fails. The name of a module is the name of the module file minus the .py suffix. Module names are case sensitive. If the module file is called MyModule.py, doing "import mymodule" fails while "import MyModule" is fine.
PEP 0008 recommends module names to be in all lowercase, with possible use of underscores. Examples of module names from the standard library include math, sys, io, re, urllib, difflib, and unicodedata. Links: Built-in Modules. For a module to be built-in is not the same as to be part of the standard library. For instance, re is not a built-in module but rather a module written in Python. By contrast, _sre is a built-in module. Obtaining a list of built-in module names:

import sys
print(sys.builtin_module_names)
print("_sre" in sys.builtin_module_names)  # True
print("math" in sys.builtin_module_names)  # True on some builds (e.g. Windows); math may be a separate extension module elsewhere

Links:
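A runnable sketch of the membership check above that avoids build-to-build variation; sys itself is always compiled into the interpreter, so this particular test is reliable everywhere (the helper name is invented here):

```python
import sys

# sys is a built-in module on every CPython build
print("sys" in sys.builtin_module_names)  # True

def is_builtin(name):
    """Hypothetical helper: built-in is stricter than merely importable."""
    return name in sys.builtin_module_names

print(is_builtin("sys"))  # True
```

Being absent from sys.builtin_module_names does not mean a module is unavailable; it may simply be loaded from a .py or extension file instead.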
Plato. This is an introduction to the works of Plato. Plato is regarded by many to be one of the West's greatest ancient philosophers. The student of Socrates and teacher of Aristotle, he wrote many books in his lifetime, and here you will find a brief summary of his works. To find the actual books themselves, look at our sister project Wikisource. Plato was born into an Athenian aristocratic family around 427/428 BC. His father Ariston was said to be a descendant of the last king of Athens, Codrus, and his mother Perictione was a relation of the Greek politician Solon. There is not much external information about Plato's early life, and most of what we know has come from his own writings. His father died when Plato was young, and his mother remarried her uncle Pyrilampes. It is very likely that Plato knew Socrates from early childhood. Perictione's cousin Critias and her brother Charmides are known to have been friends with Socrates, and they themselves were part of the oligarchic leadership of 404 BC. These connections should have led to a political career for Plato, but at some stage he made a decision not to enter political life. The oligarchic leadership collapsed and democracy was restored, and the fact that Plato's family members had been part of the oligarchic terror must have meant that his position in Athenian society was under scrutiny. The condemning to death of Socrates by the democracy seems to have been the final act of the state that forced Plato into exile at Megara. Plato is known to have taken refuge with Eucleides, founder of the Megarian school of philosophy, and it is stated by later historians that during this period of his life he travelled extensively through Greece, Italy and Egypt. Whether these journeys took place is disputed, but it is known that Plato did travel to Sicily, where he met Dion, brother-in-law of the ruler of Syracuse, Dionysius I. Socratic dialogue.
The Socratic dialogue (Greek Σωκρατικὸς λόγος or Σωκρατικὸς διάλογος) is a literary prose genre, developed in Greece around 400 BC. The best-known examples are the dialogues of Plato and the Socratic works of Xenophon. Typical of the genre are the dialogue form and the moral and philosophical issues that the characters discuss. The protagonist of each dialogue, both in Plato's and in Xenophon's work, usually is Socrates, who by means of a kind of interrogation tries to find out more about the other person's understanding of the moral issues. In the dialogues Plato presents Socrates as a simple man who confesses that he has little knowledge. Plato uses the character of Socrates to state the aims of the inquiry at the outset of the dialogue. The outcome of the dialogue is that Socrates demonstrates that the other person's views are inconsistent. In this way Plato uses Socrates to show his own view of the way to real wisdom. One of Socrates' most famous statements in that regard is "The unexamined life is not worth living." This philosophical questioning is known as the Socratic method. In some dialogues Plato's main character is not Socrates but someone from outside of Athens. In Xenophon's 'Hiero' a certain Simonides plays this role when Socrates is not the protagonist. The ordering of the dialogues is based roughly on the standard division into tetralogies. Authorship in many cases is uncertain, as we only have Plato's works as handed down through many generations of translations, forgeries, etc. Please consult the following legend. Sources. All of the texts of Plato's Dialogues are available at the MIT Internet Classics Archive.
Guitar/Tapping. Fretboard Tapping. Tapping is the short name of "fretboard tapping" or "finger tapping", a technique where the fingers hammer down (tap) against the strings in order to produce sounds, rather than striking or plucking the strings. If both the left and right hands are used, it is called two-handed tapping. It is not clear who first developed tapping, but it was certainly popularized by Eddie Van Halen. Van Halen was listening to "Heartbreaker" by Led Zeppelin and was quite inspired by the solo, which contained a variation of tapping; this is arguably the song that pushed Van Halen to use tapping frequently. A rather different kind of independent two-handed tapping was developed by Harry DeArmond and named "The Touch System" by his student Jimmie Webster. The Touch System is a complete playing method rather than a technique. Another method of independent tapping was discovered by Emmett Chapman, where the right hand comes over the fretboard and lines up with the frets like the left. The three kinds of tapping techniques are: Interdependent tapping. Interdependent tapping is by far the most common type of tapping. It is generally used as a lead guitar technique, most commonly during solos; however, a small number of songs are entirely tapped. The player's picking hand leaps out to the fretboard and begins to tap the strings with the fingers. However, one must get the pick out of the way in order to tap. Some players do this by sticking the pick between their fingers; others simply use the middle finger to tap. The Van Halen technique of getting rid of the pick is to move it into the space between the first and second joints of the middle finger. "Eruption" by Van Halen is a good example of this technique. The Touch System. As mentioned before, this is a whole playing style, and a whole book could be written about it.
The first musician to play this way was pickup designer Harry DeArmond in the 1940s, who used tapping as a way to demonstrate the sensitivity of his pickups. While each hand could play its own part, DeArmond held his right hand in the same orientation as in conventional guitar technique. This meant the ability of that hand to tap scale-based melody lines was limited. He taught his approach to Gretsch Guitars employee Jimmie Webster, who wrote an instruction book called "The Touch System for Amplified Spanish Guitar". Webster made a record and travelled around demonstrating the method. Even though it inspired a few builders (Dave Bunker, for example), the Touch System was limited by the lack of equal movement for the right hand and never caught on. The Free Hands Method. In 1969 Emmett Chapman, who had no previous knowledge of DeArmond, Webster or any other tapping guitarists, discovered that he could tap on the strings with both hands, and that by raising the neck up he could align the right hand's fingers with the frets as on the left, but from above the fretboard. This made scale-based melody lines just as easy to tap with the right hand as with the left, and a new way of playing a stringed instrument was born. Chapman redesigned his home-made 9-string guitar to support his new playing method, and began selling his new instrument (the Chapman Stick) to others in 1974. In 1976 Chapman published the volume of collected lessons he used for teaching guitarists and Stick players as "Free Hands: A New Discipline of Fingers on Strings". It has been popularised by players such as Tony Levin, Nick Beggs, John Myung, Bob Culbertson, and Greg Howard, and is currently experiencing a surge in popularity due to the internet. Stanley Jordan became famous in the 1980s for using the same method on the guitar.
Jordan discovered the method independently after Chapman did, was signed to Blue Note Records, and released several successful albums. The method that Chapman invented and Jordan also used allows complete self-accompaniment and counterpoint, as on piano.
Python Programming/Control Flow. As with most imperative languages, there are three main categories of program control flow: Function calls are covered in the next section. Generators and list comprehensions are advanced forms of program control flow, but they are not covered here. Overview. Control flow in Python at a glance:

x = -6

# Branching
if x > 0:     # If
    print("Positive")
elif x == 0:  # Else if AKA elseif
    print("Zero")
else:         # Else
    print("Negative")

list1 = [100, 200, 300]
for i in list1: print(i)              # A for loop

for i in range(0, 5): print(i)        # A for loop from 0 to 4
for i in range(5, 0, -1): print(i)    # A for loop from 5 to 1
for i in range(0, 5, 2): print(i)     # A for loop from 0 to 4, step 2

list2 = [(1, 1), (2, 4), (3, 9)]
for x, xsq in list2: print(x, xsq)    # A for loop with a two-tuple as its iterator

l1 = [1, 2]; l2 = ['a', 'b']
for i1, i2 in zip(l1, l2): print(i1, i2)  # A for loop iterating two lists at once.

i = 5
while i > 0:  # A while loop
    i -= 1

list1 = ["cat", "dog", "mouse"]
i = -1  # -1 if not found
for item in list1:
    i += 1
    if item == "dog":
        break  # Break; also usable with while loop
print("Index of dog:", i)

for i in range(1, 6):
    if i <= 4:
        continue  # Continue; also usable with while loop
    print("Greater than 4:", i)

Loops. In Python, there are two kinds of loops, 'for' loops and 'while' loops. For loops. A for loop iterates over elements of a sequence (tuple or list). A variable is created to represent the object in the sequence. For example,

x = [100, 200, 300]
for i in x:
    print(i)

This will output

100
200
300

The for loop loops over each of the elements of a list or iterator, assigning the current element to the variable name given. In the example above, each of the elements in x is assigned to i. A built-in function called range exists to make creating sequential lists such as the one above easier.
The loop above is equivalent to:

l = range(100, 301, 100)
for i in l:
    print(i)

Similar to the slicing operation, in the range function the first argument is the starting integer (we can just pass one argument to range, which will be interpreted as the "second" argument, and then the default value 0 is used for the first argument), and the second argument is the ending integer, "but excluded" from the list.

>>> range(5)
range(0, 5)
>>> list(range(5))  # need to use list() to really print the list out
[0, 1, 2, 3, 4]
>>> set(range(5))   # we can also print a set out
{0, 1, 2, 3, 4}
>>> list(range(1, 5))
[1, 2, 3, 4]
>>> list(range(1, 1))  # starting from 1, but 1 itself is excluded from the list
[]

The next example uses a negative "step" (the third argument for the built-in range function, which is similar to the slicing operation):

for i in range(5, 0, -1):
    print(i)

This will output

5
4
3
2
1

The negative step can be -2:

for i in range(10, 0, -2):
    print(i)

This will output

10
8
6
4
2

For loops can have names for each element of a tuple, if it loops over a sequence of tuples:

l = [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
for x, xsquared in l:
    print(x, ':', xsquared)

This will output

1 : 1
2 : 4
3 : 9
4 : 16
5 : 25

Links: While loops. A while loop repeats a sequence of statements until the condition becomes false. For example:

x = 5
while x > 0:
    print(x)
    x = x - 1

Will output:

5
4
3
2
1

Python's while loops can also have an 'else' clause, which is a block of statements that is executed (once) when the while condition evaluates to false. The break statement (see the next section) inside the while loop will not direct the program flow to the else clause. For example:

x = 5
y = x
while y > 0:
    print(y)
    y = y - 1
else:
    print(x)

This will output:

5
4
3
2
1
5

Unlike some languages, there is no post-condition loop. When the while condition never evaluates to false, i.e., is always true, then we have an "infinite loop".
For example,

x = 1
while x > 0:
    print(x)
    x += 1

This results in an infinite loop, which prints 1, 2, 3, 4, ... . To stop an infinite loop, we need to use the break statement. Links: Breaking and continuing. Python includes statements to exit a loop (either a for loop or a while loop) prematurely. To exit a loop, use the break statement:

x = 5
while x > 0:
    print(x)
    break
    x -= 1
    print(x)

This will output

5

The statement to begin the next iteration of the loop without waiting for the end of the current loop is 'continue'.

l = [5, 6, 7]
for x in l:
    continue
    print(x)

This will not produce any output. Else clause of loops. The else clause of a loop will be executed if no break statement is met in the loop.

l = range(1, 100)
for x in l:
    if x == 100:
        print(x)
        break
    else:
        print(x, " is not 100")
else:
    print("100 not found in range")

Another example of a while loop using the break statement and the else statement:

expected_str = "melon"
received_str = "apple"
basket = ["banana", "grapes", "strawberry", "melon", "orange"]
x = 0
step = int(input("Input iteration step: "))
while received_str != expected_str:
    if x >= len(basket):
        print("No more fruits left on the basket.")
        break
    received_str = basket[x]
    x += step
    # Change this to 3 to make the while statement
    # evaluate to false, avoiding the break statement, using the else clause.
    if received_str == basket[2]:
        print("I hate", basket[2], "!")
        break
    if received_str != expected_str:
        print("I am waiting for my ", expected_str, ".")
else:
    print("Finally got what I wanted! my precious ", expected_str, "!")
print("Going back home now !")

The output depends on the iteration step entered. White Space. Python determines where a loop repeats itself by the indentation in the whitespace. Everything that is indented is part of the loop; the next entry that is not indented is not.
For example, the code below prints "1 1 2 1 1 2"

for i in [0, 1]:
    for j in ["a", "b"]:
        print("1")
    print("2")

On the other hand, the code below prints "1 2 1 2 1 2 1 2"

for i in [0, 1]:
    for j in ["a", "b"]:
        print("1")
        print("2")

Branches. There is basically only one kind of branch in Python, the 'if' statement. The simplest form of the if statement simply executes a block of code only if a given predicate is true, and skips over it if the predicate is false. For instance,

>>> x = 10
>>> if x > 0:
...     print("Positive")
...
Positive
>>> if x < 0:
...     print("Negative")
...

You can also add "elif" (short for "else if") branches onto the if statement. If the predicate on the first "if" is false, it will test the predicate on the first elif, and run that branch if it's true. If the first elif is false, it tries the second one, and so on. Note, however, that it will stop checking branches as soon as it finds a true predicate, and skip the rest of the if statement. You can also end your if statements with an "else" branch. If none of the other branches are executed, then Python will run this branch.

>>> x = -6
>>> if x > 0:
...     print("Positive")
... elif x == 0:
...     print("Zero")
... else:
...     print("Negative")
...
Negative

Links: Conclusion. Any of these loops, branches, and function calls can be nested in any way desired. A loop can loop over a loop, a branch can branch again, and a function can call other functions, or even call itself. Exercises. Write programs that print the following multiplication tables:

1 2 3
2 4 6
3 6 9

  1 2 3
1|1 2 3
2|2 4 6
3|3 6 9
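A small runnable sketch (the names are invented here) tying together the for loop, break, and the else clause described above:

```python
basket = ["banana", "grapes", "strawberry", "melon", "orange"]
wanted = "melon"

result = None
for fruit in basket:
    if fruit == wanted:
        result = fruit
        break              # leaving the loop this way skips the else clause
else:
    result = "not found"   # runs only if the loop finished without break

print(result)  # melon
```

Change wanted to something not in the basket and the loop runs to completion, so the else clause fires and result becomes "not found".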
Guitar/Tremolo Picking. What is Tremolo Picking? Tremolo means a modulation in volume; in the context of stringed instruments, the term usually refers to repeatedly striking or bowing a single string in a steady rhythm, especially the fastest rhythm the player can maintain. (This technique is particularly common on the acoustic mandolin.) In guitar literature, this is called "tremolo picking", and it is one of the few places the term "tremolo" is consistently used "correctly" in guitar literature (whose convention usually reverses tremolo and vibrato). This technique has nothing to do with a "tremolo bar" (really a vibrato bar) or a "tremolo" effects box. How to Hold the Pick. Tremolo picking, though appearing hard at first, is actually quite easy: it is merely alternate picking at a faster speed. To start off, a pick makes tremolo picking much easier and is highly recommended when attempting it; most people find tremolo picking much easier with a pick, though it is possible without one. The best way to hold your pick is between your thumb and the side of the first knuckle of your index finger, but if you feel more comfortable holding it another way, such as with your thumb and middle finger, then go ahead. How to Pick. The movement should come mostly from the wrist. A little bit of arm movement is okay, but shouldn't be done intentionally. It is possible to tremolo with the elbow, but the wrist is actually easier and faster for most people with practice. The motion done with the wrist should be like drawing quick zigzags, or Vs. Picking should feel just like writing. Imagine drawing as many connected Vs as possible. Do not play with your hand parallel to the strings. Pick like you write, with your wrist at an angle. Grip.
An important aspect of tremolo picking that many beginners fail to realise is that you must have a relaxed grip on the pick. When you try to pick while holding the pick tensely, you will find that the pick hits the string harder, making it harder to pass through the string and causing it to sound sloppy. Maintaining a relaxed grip becomes harder when playing faster, but you will get used to it. Things to Remember. When tremolo picking, make sure you use just your wrist, as this will make it much easier to pass through the string. Also, when you pick the string, make sure your hand doesn't go too far away from it, as this will slow you down. The impact from hitting the string usually forces your hand to leave the string, but with practice, avoiding this will become easier. It is also important to remember that many beginners start to use the thin edge of the pick to tremolo, since the thin edge has a smaller surface area and passes through the strings easily. This is incorrect, as the pick not only starts to damage the strings, but also strains the wrist, and may further cause other strings to ring. Use the flat side of the pick to tremolo, not the thin edge.
Guitar/Adjusting the Guitar. Many beginning or even intermediate guitarists are unaware that their guitar should be "set up". The adjustments described in the subsections below (along with restringing and tuning) are called a "set up". What difference does a set up make? When a guitar is set up properly: If a guitar plays easily and sounds its best then it's easy for the player to feel successful. When a guitar is not set up properly: When to Set Up? When a guitar is brand new and fresh from the factory it may or may not have had these adjustments done. As a rule, a guitar should be set up when first purchased (used or new) and again when switching string gauges. Consider getting a set up any time the guitar sounds or feels different than it used to, perhaps after a guitar travels (altitude changes, pressure changes, and humidity can affect the wood in the guitar); and, just like changing the oil in a car, it is a good idea to get a set up every now and then for maintenance purposes (perhaps twice a year). Poor set up may be obvious to a player or it might not. In some cases the guitar may be unplayable because it hasn't been set up. A maladjusted guitar can cause strange quirks, for instance fret ends near the bottom of the neck being too sharp; it can even cause damage (e.g., using .012 gauge strings on a nut designed for .009 strings, where the extra tension can damage the nut); and it can easily frustrate the player when their playing is perfectly correct yet things still don't sound right. In particular, if your guitar ever becomes difficult for you to play, a set up will probably help. It is not absolutely "required" to set up a guitar, but it is nonetheless a good idea, "especially" if the guitar is to be taken to the stage. Some people never get their guitar set up. Some get their guitar set up even when nothing previously seemed wrong with it, then find such a dramatic change in the guitar's playability and sound that they wish they had set it up sooner.
How to get a Set Up. These adjustments should generally be done by a professional, qualified repair person. They require precision instruments, some hard to find tools, a steady hand, quite a bit of time and know-how. Virtually all musical instrument stores will be able to perform a professional set up. Some will do the job better than others. Call a local music store and ask them "Do you do set ups for electric (or acoustic) guitars and how much would you charge?". Getting a set up will probably cost from $30 to $75 USD. =Adjustments= Adjusting action at the bridge. This is a simple adjustment that can usually be performed without professional assistance. The bridge saddles should be lowered if the string action is too high, that is, the strings are too far up off the fretboard. In some cases it may be desirable to raise the saddles for a higher string action. Most electric guitars have two small screws on the saddle which can be used to raise or lower the saddle. Some saddles have screws that can be rotated using the fingers; others require an allen key. Lower the saddles too much and the strings might rattle against certain frets (this may or may not be inconsequential on an electric guitar; listen through an amplifier). In more extreme cases, pressing a string against one fret might actually fret the string against a different fret, usually the one under the intended one. In both cases, filing the frets might alleviate the problem if the saddle really should be that low. Otherwise, simply raising the saddle a small amount on the side with the problem should be fine. Filing frets. The frets go with the shape (or contour) of the neck radius (9, 12", 16", etc.) Frets can determine how the notes on your guitar sound (i.e. 
intonation). Over time and use, the frets of a fretted instrument begin to wear: the crowns (the tops of the frets) flatten out or become misshapen, or the frets begin to lift out of their slots. Fret work should be done by someone with experience doing this kind of job, because a botched job can lead to worse problems on your guitar: the intonation may suffer, the action may end up higher, strings may buzz out, and multiple frets may need to be replaced or further repaired. Frets also come in a variety of sizes, each handled differently, and while special tools are available for this line of work, many are expensive and easily misused without proper training. Filing the nut. Filing the nut should only be done by a qualified repair person; it is used to relieve binding at the nut so that a heavier gauge of strings can be used. It may not be necessary if the new strings are detuned lower (e.g., when switching from .009's to .010's, the nut will need no adjustment if the guitar is tuned to Eb-Ab-Db-Gb-Bb-Eb instead of E-A-D-G-B-E). Neck/truss rod adjustment. This particular adjustment has been known to ruin guitars when performed incorrectly, so here referral to a professional repair person is highly recommended. A guitar will need a truss rod adjustment if the neck is not straight. One way to check the straightness of the neck is to play the 12th and 19th fret harmonics on the low and high strings. After sounding each harmonic, fret the note at the same fret and play it again: it should be exactly the same pitch. If it is not, the neck "may" be in need of adjustment. However, this may be indicative of an intonation problem as well, which "can" be fixed without the aid of a repair person; see below. If adjusting the intonation does nothing for you, give the guitar to a repair person. Adjusting intonation. You may notice each string on the bridge sits in a "saddle". 
Depending on your setup, you might notice the saddles are in different positions: some pushed forward and others pushed back, sometimes only slightly. The position of the saddle effectively changes the length of the vibrating string. Tune the guitar to concert pitch with the aid of an electronic tuner, making sure the open strings are "perfectly" in tune. Play the 12th and 19th fret harmonics, then play the fretted notes at the same frets. If the fretted notes are sharp, the string is too short and the saddle needs to be pushed back, toward the base of the bridge. If the notes are flat, the string is too long and the saddle needs to be pushed up, toward the nut. Repeat this procedure for each string. Adjusting the intonation should be done every few months, or at least twice a year.
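The claim that saddle position effectively changes the vibrating string length can be made concrete with the equal-temperament fret formula. This is a standard result, not taken from this book; the function name and the 25.5" scale length are illustrative:

```python
def fret_position(scale_length, n):
    """Distance from the nut to fret n on a string of the given scale length.

    Equal temperament: each fret shortens the vibrating length by a
    factor of 2**(1/12), so fret n sits at L - L / 2**(n / 12).
    """
    return scale_length - scale_length / 2 ** (n / 12)

# On a 25.5 inch scale, the 12th fret falls at exactly half the string
# length, which is why the 12th fret harmonic (sounded at the string's
# midpoint) and the fretted 12th-fret note should agree when the
# intonation is set correctly.
```

Moving the saddle changes the effective scale length, shifting every fretted pitch slightly while leaving the open string and its harmonics untouched; that mismatch is what the intonation check above detects.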
Intelligence Intensification/Visualization. What is visualization? Put simply it's the act of holding an image or several images in your mind with clarity. That's all it is. Doing it is just as simple. Try this. Take a common object - pencil, pen, etc. - something that is interesting to your eyes and you feel comfortable with. Hold it in your hands. Focus on it and nothing else. Rotate it slowly so you can see all sides of it. Now close your eyes. Try to picture yourself rotating the object in your mind. Do your best to recall as many details as possible. Open your eyes if you're unsure about something in particular. Try to build the object in front of your eyes while they're closed. It'll take time but not as long as you think. The important part is to do this every day in some small way. Like a muscle this ability will grow stronger the more you use it. Once you've done one object - try for a second one. Two at once, then three, then four, etc. Try re-building the room you're in in your mind. Re-organize the furniture in your mind so the room has all the same parts but is completely different. Don't be troubled if you can't visualize "exactly" right down to the scratches on the wooden coffee table. That will come with time and practice. Just keep working at it. After you're comfortable with visualizing objects, rooms, whatnot, then you can move on to visualizing goals for yourself. Start with a little bit of motion in your mental scenes. Put yourself into those scenes. Again, little by little change the scene you're visualizing to what you want. Visualize yourself learning something you always wanted to. Or getting a new job that pays more or has more perks. Visualize yourself on vacation - a good anti-stress method by the way. Focusing on these goals using visualization will make them far more real to you and much easier to work towards. Another exercise for increasing your visualization skills is through art. 
Drawing, painting, sculpting and other forms of art are all ways of creating links between your initial perceptions of things, and how well you can recreate those perceptions. For example: Try to sketch an object without looking at it. When you have gotten as far as you can, try to reconcile the drawing with the original image. How much of what you drew was merely what you expected to see instead of what you actually saw? A good drawing exercise for training your perceptions is to try to copy another drawing upside-down. Take a dollar bill and turn it upside down. Then try to copy it (not tracing, just copying it line for line). This exercise helps you to become an accurate scribe of what you see, instead of merely what you think you see. The goal is to develop your ability to record specific visual information. In many cases it is better to amalgamate information instead of remembering every detail. These exercises will help you develop the ability to choose how your brain will store information.
History of Islamic Civilization. "This wikibook concerns the political, economic, scientific, and cultural developments of the Islamic world throughout the centuries. For the history of Islamic thought, see the History of Islam." The Islamic civilization is traced back to the Prophet Muhammad in the 7th century.
Go. Go is thought to be the world's most ancient board game, with deceptively simple rules that lead to deep strategy. After centuries of play, new ideas about the game are still being developed on a regular basis. The game is believed to have originated in China, and is still most popular in East Asia, particularly Korea, Japan, and China. In Korea it is called 바둑 (baduk), pronounced /pa.tukʰ/; in Japan it is sometimes known as 囲碁 (i-go), pronounced /i.ɡo/; and in China, the game's original home, it is named 圍棋 (trad.) / 围棋 (simp.), wéiqí in Pinyin, pronounced /ueɪ2.tɕʰi2/. Each of these three countries has a professional association that allows individuals to hold the status of a professional Go player. The game enjoys a small but rising popularity in other parts of the world. For consistency, the rest of the book will use the name Go, and use Japanese terms. Korean and Chinese terms will still be listed. Beyond the Guide. __NOEDITSECTION__
Cryptography/Playfair cipher. The Playfair Cipher is one of several methods used to foil a simple frequency analysis. Instead of every letter having a substitute, every digraph has a substitute. This tends to level the frequency distribution somewhat. The classic Playfair tableau consists of four alphabets, usually in a square arrangement: two plaintext and two ciphertext. In this example, keywords have been used to disorder the ciphertext alphabets. In use, two letters of the plaintext are located in the plaintext alphabets. Then, reading across from the first letter to the column of the second letter, the first ciphertext character is found. Next, reading down from the first letter to the row of the second letter, the second ciphertext letter is found. As an example, using the tableau above, the digraph "TE" is enciphered as "uw", whereas the digraph "LE" is enciphered as "mk". This makes a frequency analysis difficult. A second version of the Playfair cipher uses a single alphabet, written into a 5x5 square:
SECRT - your secret keyword, shared between you and your receiver
KYWDP
LAFIZ
BXNQG
HUMOV
If the letters of a digraph lie at the corners of a rectangle, then they are rotated clockwise round the rectangle: SW to CK, AT to EZ. If they lie in the same column or row, they are moved one down or across: EA to YX, RS to TE. The square is treated as though it wraps round in both directions: ST to ES, DO to IR. Both versions of the Playfair cipher are of comparable strength.
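The single-alphabet rules above can be sketched in code. This is a minimal sketch of one digraph at a time, not a full cipher: padding, digraph splitting, and the usual I/J merge are omitted. Note an assumption about the square: as printed it repeats C and K, so we substitute the two missing letters N and V in those cells to make all 25 entries distinct, which leaves every worked example in the text unchanged.

```python
# Single-square Playfair rules as described in the text.
# Square assumed: duplicated letters in the printed square replaced by N and V.
SQUARE = ["SECRT", "KYWDP", "LAFIZ", "BXNQG", "HUMOV"]

def find(ch):
    """Return the (row, col) of a letter in the square."""
    for r, row in enumerate(SQUARE):
        c = row.find(ch)
        if c != -1:
            return r, c
    raise ValueError(f"{ch!r} not in square")

def clockwise(ra, ca, rb, cb):
    """Move (ra, ca) one corner clockwise round the rectangle it forms with (rb, cb)."""
    if (ra < rb) == (ca < cb):   # top-left or bottom-right corner:
        return ra, cb            # slide along its row
    return rb, ca                # top-right or bottom-left: slide along its column

def encipher_digraph(a, b):
    ra, ca = find(a)
    rb, cb = find(b)
    if ra == rb:                 # same row: each letter moves one to the right, wrapping
        return SQUARE[ra][(ca + 1) % 5] + SQUARE[rb][(cb + 1) % 5]
    if ca == cb:                 # same column: each letter moves one down, wrapping
        return SQUARE[(ra + 1) % 5][ca] + SQUARE[(rb + 1) % 5][cb]
    r1, c1 = clockwise(ra, ca, rb, cb)   # rectangle: rotate both letters clockwise
    r2, c2 = clockwise(rb, cb, ra, ca)
    return SQUARE[r1][c1] + SQUARE[r2][c2]
```

This reproduces the worked examples from the text: SW enciphers to CK, AT to EZ, EA to YX, RS to TE, and the wrap-around cases ST to ES and DO to IR.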
Lucid Dreaming/Reality Checks/Mirrors. Presentation. With the mirrors reality check, you check to see if you look normal in a mirror. You could do this reality check every time you see a mirror.
Visual Language Interpreting. Visual language interpreting is the practice of deciphering communication in sign languages, which use gestures, body language, and facial expressions to convey meaning. This book is being communally written (at least that's the idea) to fulfill what is seen as a gap in the literature on Visual Language interpreting. There are many erudite works on the interpreting process, and still others for those who are current practitioners. However, the current introductory texts all suffer from one fault or another: they are inaccurate, obsolete, poorly written, or otherwise faulty. The solution proposed here is that material be written by practitioners, clients, and academics to produce a text that is both current (and designed to stay that way) and reflective of what is actually practiced by real working interpreters. In short, a text that is theoretically rigorous, unflinchingly realistic, and up to date. For this, we count on you, the reader, to help us build something which embodies our collective wisdom. Contents. After each link there is an image with a subjective indication of how complete that page is. A blank box indicates that the content has yet to be written. Authors (alphabetically). is an interpreter based in Portland, Oregon, USA. is pleased to be the second contributor, and a part of this work. suggested the idea of this book to the online interpreting community, and is a regular contributor. Roberto R. Santiago is an interpreter in Washington DC __NOEDITSECTION__
Geometry/Trapezoids. A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non-parallel sides. Some properties of trapezoids: The "area (A)" of a trapezoid is equal to the product of an altitude and the median: A = mh. Recall, though, that the median is half of the sum of the bases: m = (b1 + b2)/2. Substituting for m, we get: A = h(b1 + b2)/2.
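The area formula A = h(b1 + b2)/2 can be checked numerically with a short sketch (the function name is ours, not from the text):

```python
def trapezoid_area(b1, b2, h):
    """Area of a trapezoid with bases b1, b2 and altitude h."""
    m = (b1 + b2) / 2      # the median is half the sum of the bases
    return m * h           # area = median x altitude
```

For example, with bases 3 and 5 and altitude 4, the median is 4 and the area is 16.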
XML - Managing Data Exchange/VoiceXML. VoiceXML examples. According to the W3C, "VoiceXML is designed for creating audio dialogs that feature synthesized speech, digitized audio, recognition of spoken and DTMF key input, recording of spoken input, telephony, and mixed initiative conversations. Its major goal is to bring the advantages of Web-based development and content delivery to interactive voice response applications." Here are two short examples of VoiceXML. The first is the always fun example, "Hello World": <?xml version="1.0" encoding="UTF-8"?> <vxml xmlns="http://www.w3.org/2001/vxml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/vxml http://www.w3.org/TR/voicexml20/vxml.xsd" version="2.0"> <form> <block>Hello World!</block> </form> </vxml> The top-level element is <vxml>, which is mainly a container for dialogs. The two main types of dialogs are forms and menus. Forms present information and gather input. Menus offer choices of what to do next. This example has a single form, which contains a block that synthesizes and presents "Hello World!" to the user. Since the form does not specify a dialog to follow "Hello World", the conversation ends. Our second example asks the user for a choice of drink and then submits it to a server script: Form example: <?xml version="1.0" encoding="UTF-8"?> <vxml xmlns="http://www.w3.org/2001/vxml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/vxml http://www.w3.org/TR/voicexml20/vxml.xsd" version="2.0"> <form> <field name="drink"> <prompt>Would you like coffee, tea, milk, or nothing?</prompt> <grammar type="application/x-gsl" mode="voice"> <![CDATA[ ]]> </grammar> </field> <block> <submit next="http://www.drink.example.com/drink2.asp"/> </block> </form> </vxml> A "field" is an input field. The user must provide a value for the field before the next element in the form is referenced or executed. 
Here is an example of a simple interaction: Menu example: <?xml version="1.0" encoding="UTF-8"?> <vxml xmlns="http://www.w3.org/2001/vxml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/vxml http://www.w3.org/TR/voicexml20/vxml.xsd" version="2.0"> <menu> <property name="inputmodes" value="dtmf"/> <prompt> For sports press 1, For weather press 2, For Stargazer astrophysics press 3. </prompt> <choice dtmf="1" next="http://www.sports.example.com/vxml/start.vxml"/> <choice dtmf="2" next="http://www.weather.example.com/intro.vxml"/> <choice dtmf="3" next="http://www.stargazer.example.com/astronews.vxml"/> </menu> </vxml> The computer, or receiver, recognizes the number and triggers the next dialog according to which number was chosen. Here is what a typical conversation would look like: The beginning of VoiceXML. VoiceXML began in 1995 as an XML-based dialog design language, used mainly to simplify the speech recognition applications in an AT&T project called Phone Markup Language (PML). Other companies later worked on their own PML-like languages: Lucent, Motorola (VoxML), IBM (SpeechML), HP (TalkML), and PipeBeach (VoiceHTML). In 1998, the VoiceXML Forum was formed by AT&T, IBM, Lucent, and Motorola to define a standard dialog design language that developers could use to build conversational applications. They chose XML as the basis for this effort because it was clear to them that this was the direction technology was going. In 2000, the VoiceXML Forum released VoiceXML 1.0 to the public and submitted it to the W3C to establish the language as an international standard. This work led to the release of VoiceXML 2.0, based on input from W3C member companies, W3C working groups, and developers. Introduction. 
VoiceXML is designed to generate audio dialogs that allow the use of synthesized speech, digitized audio, recognition of spoken and DTMF key input, recording of spoken input, telephony, and mixed initiative conversations. (DTMF stands for Dual Tone Multi-Frequency: touch-tone or push-button dialing. Pushing a button on a telephone keypad generates a sound that is a combination of two tones, one high frequency and the other low frequency.) In layman's terms, VoiceXML allows the use of computer speech, recorded audio, human speech, and telephones as input and output devices. VoiceXML architectural model. The architectural model assumed by this document has the following components: A document server (e.g. a Web server) processes requests from a client application, the VoiceXML Interpreter, through the VoiceXML interpreter context. The server produces VoiceXML documents in reply, which are processed by the VoiceXML interpreter. The VoiceXML interpreter context may monitor user inputs in parallel with the VoiceXML interpreter. For example, one VoiceXML interpreter context may always listen for a special escape phrase that takes the user to a high-level personal assistant, and another may listen for escape phrases that alter user preferences like volume or text-to-speech characteristics. The implementation platform is controlled by the VoiceXML interpreter context and by the VoiceXML interpreter. For instance, in an interactive voice response application, the VoiceXML interpreter context may be responsible for detecting an incoming call, acquiring the initial VoiceXML document, and answering the call, while the VoiceXML interpreter conducts the dialog after the answer. The implementation platform generates events in response to user actions (e.g. spoken or character input received, disconnect) and system events (e.g. timer expiration). 
Some of these events are acted upon by the VoiceXML interpreter itself, as specified by the VoiceXML document, while others are acted upon by the VoiceXML interpreter context. The Goals of VoiceXML. VoiceXML's main goal is to bring the full power of Web development and content delivery to voice response applications, and to free the authors of such applications from low-level programming and resource management. VoiceXML establishes an integration environment between voice services and data services, taking advantage of the client-server paradigm. A voice service can be defined as a sequence of interactive dialogs between a user and an implementation platform. The dialogs are stored in document servers, keeping their structure independent from the implementation platform. These servers maintain overall service logic, perform database and legacy system operations, and produce dialogs. A VoiceXML document interacts with the dialogs from the server using a VoiceXML interpreter. Inputs from the user generate requests to the document server, and the document server replies with another VoiceXML document to continue the user's session with further dialogs. VoiceXML is a markup language that: While VoiceXML strives to accommodate the requirements of a majority of voice response services, services with stringent requirements may best be served by dedicated applications that employ a finer level of control. Principles of Design. VoiceXML is an XML application [XML]. These are some of the capabilities VoiceXML provides: Implementation Platform Requirements. This section outlines the hardware/software requirements to support a VoiceXML interpreter: Document acquisition: The interpreter context is expected to acquire documents for the VoiceXML interpreter, which requires support of the "http" URI protocol. 
There will be some cases in which the document request is generated by the interpretation of a VoiceXML document, but it can also be generated in response to events outside the scope of the language, like an incoming phone call. When issuing document requests via http, the interpreter context identifies itself using the "User-Agent" header variable with the value "<name>/<version>", for example, "acme-browser/1.2". Audio output: An implementation platform must support audio output using audio files and text-to-speech (TTS). The platform must be able to freely sequence TTS and audio output. If an audio output resource is not available, an error.noresource event must be thrown. Audio files are referenced by URI. Audio input: An implementation platform must be able to detect and report character and/or spoken input simultaneously. It also needs to control input detection interval duration with a timer whose length is specified by a VoiceXML document. Transfer: The platform should be able to support making a third-party connection through a communications network, such as the telephone network. Concepts. A VoiceXML document is a conversational finite state machine, in which the user is always in one conversational state, or dialog, at a time. Each dialog determines the next dialog to transition to. Transitions are defined using URIs, which identify the next document and dialog to use. When there are no more dialogs, or an element explicitly exits the conversation, execution is terminated. A VoiceXML document is primarily composed of top-level elements called dialogs. There are two types of dialogs: forms and menus. A document may also have: Forms define an interaction that collects values from a set of field item variables. Each field may specify a grammar that defines the allowable inputs for that field. 
Menus present the user with a choice of options and then transition to another dialog based on the selected choice. Each dialog has one or more speech and/or DTMF grammars associated with it, which are active only while the user is in that dialog. A subdialog is like a function call: it provides a way of creating and invoking a new interaction and then returning to the original dialog. Variable instances, grammars, and state information are saved and are available upon returning to the calling document. Subdialogs can be used to create a confirmation sequence that may require a database query, to create a set of components shared among documents in a single application, or to create a reusable library of dialogs shared among many applications. A session begins when the user starts to interact with a VoiceXML interpreter context, continues as documents are loaded and processed, and ends when requested by the user, a document, or the interpreter context. An application is a set of documents sharing the same application root document. Whenever the user interacts with a document in an application, its application root document is also loaded. The application root document remains loaded while the user is transitioning between other documents in the same application, and it is unloaded when the user transitions to a document that is not in the application. Grammars: Each dialog has one or more speech and/or DTMF grammars associated with it. In machine-directed applications, each dialog's grammars are active only when the user is in that dialog. In mixed initiative applications, where the user and the machine alternate in determining what to do next, some of the dialogs are flagged to make their grammars active (i.e., listened for) even when the user is in another dialog in the same document, or on another loaded document in the same application. 
In this situation, if the user says something matching another dialog's active grammars, execution transitions to that other dialog, with the user's utterance treated as if it were said in that dialog. Mixed initiative adds flexibility and power to voice applications. Events: VoiceXML allows the user to fill forms in the traditional way of user input, and defines mechanisms for handling events not covered by the form mechanism. Events can be thrown when the user does not respond, does not respond correctly, or requests assistance. Similarly, the VoiceXML interpreter can throw events when it finds a semantic error in a VoiceXML document; catch elements allow the document to handle such events. A link specifies a grammar that is active whenever the user is in the scope of the link. If user input matches the link's grammar, control transfers to the link's destination URI. A link can be used to throw an event or go to a destination URI. VoiceXML elements. For more information about the elements, see the W3C page: http://www.w3.org/TR/2004/REC-voicexml20-20040316/ One Document Execution. Document execution starts with the first dialog by default. As each dialog executes, it determines the next dialog. When a dialog doesn't reference another dialog, document execution stops. Here is the "Hello World!" example expanded to illustrate VoiceXML execution. It now has a document-level variable called "hi" which holds the greeting. Its value is used as the prompt in the first form. Once the first form plays the greeting, it goes to the form named "say_goodbye", which prompts the user with "Goodbye!" Because the second form does not have a transition to another dialog, document execution ceases. 
<?xml version="1.0" encoding="UTF-8"?> <vxml xmlns="http://www.w3.org/2001/vxml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/vxml http://www.w3.org/TR/voicexml20/vxml.xsd" version="2.0"> <meta name="author" content="John Doe"/> <meta name="maintainer" content="[email protected]"/> <var name="hi" expr="'Hello World!'"/> <form> <block> <value expr="hi"/> <goto next="#say_goodbye"/> </block> </form> <form id="say_goodbye"> <block> Goodbye! </block> </form> </vxml> Variables and Expressions. VoiceXML variables are in all respects equivalent to ECMAScript variables: they are part of the same variable space. VoiceXML variables can be used in a <script> just as variables defined in a <script> can be used in VoiceXML. Declaring a variable using var is equivalent to using a var statement in a <script> element. <script> can also appear everywhere that var can appear. VoiceXML variables are also declared by form items. The variable naming convention is as in ECMAScript, but names beginning with the underscore character ("_") and names ending with a dollar sign ("$") are reserved for internal use. VoiceXML variables, including form item variables, must not contain ECMAScript reserved words. They must also follow ECMAScript rules for referential correctness. For example, variable names must be unique and their declaration must not include a dot - "var x.y" is an illegal declaration in ECMAScript. Variable names which violate naming conventions or ECMAScript rules cause an 'error.semantic' event to be thrown. Variables are expressed using the var element: <var name="room_number"/> <var name="avg_mult" expr="2.2"/> <var name="state" expr="'Georgia'"/> <vxml> Element. Attributes of <vxml> include: <field> Element. A field specifies an input item to be gathered from the user. Some attributes of this element are: <grammar> Element. 
The <grammar> element is used to provide a speech grammar. Some attributes of the <grammar> element are: <block> Element. This element is a form item. It contains executable content that is executed if the block's form item variable is undefined and the block's cond attribute, if any, evaluates to true. <block> Welcome to Flamingo, your source for lawn ornaments. </block> The form item variable is automatically set to true just before the block is entered. Therefore, blocks are typically executed when the form is called. Sometimes you may need more control over blocks. To do this, you can name the form item variable, and set or clear it to control execution of the <block>. This variable is declared in the dialog scope of the form. Attributes of <block> include: <prompt> Element. This element controls the output of synthesized speech and prerecorded audio. Prompts are queued for play, and interpretation starts when the user provides an input. Here is an example of a prompt: <prompt>Please say your name.</prompt> You can leave out the <prompt> ... </prompt> tags if: For instance, these are also prompts: But sometimes you have to use the <prompt> tags when adding embedded speech markups, such as: <prompt>Please <emphasis>say</emphasis> your city.</prompt> The <prompt> element has the following attributes: Exercises. 1. Create a VoiceXML document in which you give the user three different options to choose from the keyboard. The user must choose one option among hotels, museums, and restaurants. Use forms for this exercise. Hint: this exercise needs to use the option element tag. Example: 2. Create a VoiceXML document in which you give the user three different options to choose from the keyboard. The user must choose one option among hotels, museums, and restaurants. Use menu dialogs for this exercise.
Hindi/Basic Hindi. Please ensure that you are reading the Indic fonts correctly. For details in setting up your computer, please see w:Wikipedia:Enabling complex text support for Indic scripts
XML - Managing Data Exchange/DocBook. Learning objectives. Upon completion of this chapter, you will be able to Introduction. DocBook is a general-purpose XML and SGML vocabulary particularly well suited to books, articles, and papers. It has a large, powerful, and easy-to-understand Document Type Definition (DTD), and its main structures correspond to the general concept of what constitutes a book. DocBook is a substantial subject that we cannot cover fully in a few pages. Thus, for the purposes of this chapter, we will walk through creating a simple DocBook document with the major elements of the DocBook DTD, along with the details of publishing the document, to give you a feel for DocBook. If you would like to study the subject further, we suggest you have a look at the references provided at the end of the chapter. What is DocBook? DTD vs. Schema. A DTD (Document Type Definition) contains or points to markup declarations that provide a grammar for a class of documents. A schema is a set of shared vocabularies that allow machines to carry out rules made by people; it provides a means for defining the structure, content, and semantics of XML documents. In summary, schemas are a richer and more powerful means of describing information than DTDs. <author> <firstname>Rusen</firstname> <lastname>Gul</lastname> </author> <!ELEMENT author (firstname, lastname)> <!ELEMENT firstname (#PCDATA)> <!ELEMENT lastname (#PCDATA)> <xs:element name="author"> <xs:complexType> <xs:sequence> <xs:element name="firstname" type="xs:string"/> <xs:element name="lastname" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> Output formats for DocBook. XSL (Extensible Stylesheet Language) stylesheets can transform DocBook XML into the following formats: DSSSL (Document Style Semantics and Specification Language) stylesheets can transform DocBook SGML into the following formats: A Brief History. DocBook was created around 1991 by HaL Computer Systems and O'Reilly & Associates. 
It was developed primarily for the purpose of holding the results of troff conversion of UNIX documentation, so that the files could be interchanged. Now it is maintained by OASIS. The official web site for DocBook is http://www.oasis-open.org/docbook/ DocBook Tools. DocBook is officially available as a DTD for both XML and SGML. You can download both the latest DocBook XML DTD and DocBook SGML DTD from the official DocBook site at OASIS. The examples provided in this chapter will use DocBook XML DTD. Some experimental DocBook schemas are available at sourceforge.net. DocBook is supported by a number of commercial and open source tools. Easily customizable and extensible "standard" DocBookStylesheets are available from the DocBookOpenRepository along with the other free open source tools. See DocBookTools on the DocBook Wiki for a more complete list of commercial and open source tools. SGML vs XML. The syntax of SGML and XML DTD is very similar but not identical. The biggest difference between the DocBook DTD for SGML and the one for XML is that the SGML DTD contains SGML exclusions in some content models. Example: SGML DTD excludes <footnote> as a descendent of <footnote>, because it doesn't make much practical sense to have footnotes within footnotes. XML DTDs can't contain exclusions, so if you're authoring using the DocBook XML DTD, it's possible to produce documents containing some valid-but-not-logical markup like footnotes within footnotes. Creating a DocBook Document. 
In order to get started, you will need: <?xml version="1.0"?> <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> <book> <bookinfo> <title>XML – Managing Data Exchange</title> <author> <firstname>Rusen</firstname> <surname>Gul</surname> </author> </bookinfo> <chapter> <title>Introduction</title> <sect1> <title>First Section</title> <para>This is a paragraph.</para> </sect1> <sect1>...</sect1> </chapter> <chapter>...</chapter> <chapter>...</chapter> <chapter>...</chapter> <appendix>...</appendix> <appendix>...</appendix> </book> <?xml version="1.0"?> <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> <article> <articleinfo> <title>A Simple Approach To DocBook</title> <author> <firstname>Rusen</firstname> <surname>Gul</surname> </author> </articleinfo> <para>This is the introductory paragraph of my article.</para> <sect1> <title>First Section</title> <para>This is a paragraph in the first section.</para> <sect2> <title>This is the title for section2.</title> <para>This is a paragraph in section2.</para> </sect2> <sect2>...</sect2> <sect2>...</sect2> </sect1> <sect1>This is a high level section</sect1> <sect1>...</sect1> <sect1>...</sect1> </article> Let’s examine the details of a typical DocBook document. The standard header of a DocBook XML file is a DOCTYPE declaration: <!DOCTYPE name FORMALID "Owner//Keyword Description//Language"> This tells XML manipulation tools which DTD is in use. Name is the name of the root element in the document. FORMALID is replaced with a PUBLIC identifier, a SYSTEM identifier, or both. PUBLIC identifies the DTD to which the document conforms. SYSTEM explicitly states the location of the DTD used in the document by means of a URI (Uniform Resource Identifier). PUBLIC identifiers are optional in XML documents, although SYSTEM identifiers are mandatory in the DOCTYPE declaration. 
<?xml version="1.0"?> <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> Owner: OASIS<br> Keyword Description: DTD DocBook XML V4.2<br> Language: EN - English <?xml version="1.0"?> <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "/usr/share/sgml/docbook/xml-dtd-4.2/docbookx.dtd"> Breaking a Document into Physical Portions. Before getting started, here is a useful tip! For the purposes of convenience and performance, you might consider breaking a document into physical chunks and working on each chunk separately. If you have a book that consists of three chapters and two appendixes, you might create a file called "book.xml", which looks like this: <?xml version="1.0"?> <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [<!ENTITY chap1 SYSTEM "chap1.xml"> <!ENTITY chap2 SYSTEM "chap2.xml"> <!ENTITY chap3 SYSTEM "chap3.xml"> <!ENTITY appa SYSTEM "appa.xml"> <!ENTITY appb SYSTEM "appb.xml">]> <book> <title>A Physically Divided Book</title> &chap1; &chap2; &chap3; &appa; &appb; </book> You can then write the chapters and appendixes conveniently in separate files. This is why DocBook is well suited to large documents. Note that these separate files do not and must not have document type declarations. For example, Chapter 1 might begin like this: <chapter id="ch1"> <title>My First Chapter</title> <para>My first paragraph.</para>… Breaking a Document into Logical Portions. Here is a quick reference guide for DocBook Elements: http://www.docbook.org/tdg/en/html/ref-elements.html There are, literally, hundreds of DocBook elements. This is what makes DocBook very powerful. We will try to cover the major ones here and let you review the rest on your own. Firstly, a classification; DocBook elements can be divided broadly into these categories: Major DocBook Elements. Set: A collection of books. "Set" is the very top of the DocBook structural hierarchy. There's nothing that contains a "Set". 
Some children elements: Book, SetIndex, SetInfo, Subtitle, Title, TitleAbbrev, ToC (table of contents). Reference page: http://www.oreilly.com/catalog/docbook/chapter/book/set.html <!DOCTYPE set PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <set> <title>Lord of the Rings</title> <setinfo> <author>J.R.R. Tolkien</author> </setinfo> <book><title>The Fellowship of the Ring</title> ... </book> <book><title>The Two Towers</title> ... </book> <book><title>Return of the King</title> ... </book> </set> Book: A book. A "Book" is probably the most common top-level element in a document. The DocBook definition of a book is very loose and general. It gives you free rein by not imposing a strict ordering of elements. Some children elements: Appendix, Article, Bibliography, BookInfo, Chapter, Colophon, Dedication, Glossary, Index, LoT, Part, Preface, Reference, SetIndex, Subtitle, Title, TitleAbbrev, ToC. Reference page: http://www.oreilly.com/catalog/docbook/chapter/book/book.html <small>Table 8: <book> element, "xmlbook.xml"</small> <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <book> <title>XML – Managing Data Exchange</title> <titleabbrev>XML</titleabbrev> <bookinfo> <legalnotice><para>No notice is required.</para></legalnotice> <author><firstname>Rusen</firstname><surname>Gul</surname></author> </bookinfo> <dedication> <para>This book is dedicated to MIST 7700 class of 2004 at UGA.</para> </dedication> <preface> <title>Foreword</title> <para>The book aims to fulfill the need for an introductory XML textbook. It contains the basics of XML as well as several tools using XML.</para> </preface> <chapter> <title>Introduction</title> <para>At least one chapter, reference, part, or article is required.</para> </chapter> <appendix> <title>Optional Appendix</title> <para>Appendixes are optional but handy.</para> </appendix> </book> Division: A collection of parts and references (optional). "Division"s are the first hierarchical level below Book. 
Children elements: Part (contain components), Reference (contain RefEntrys) Components: Chapter-like elements of a Book or Part. These are Preface, Chapter, Appendix, Glossary, Bibliography, and Article. Components generally contain block elements or sections, and some can contain navigational components and RefEntrys. <!DOCTYPE bibliography PUBLIC "-//OASIS//DTD DocBook 4.2//EN"> <bibliography> <title>References</title> <bibliomixed> <bibliomset relation="article"> <surname>Watson</surname> <firstname>Richard</firstname>. <title role="article">Managing Global Communities </title> </bibliomset> <bibliomset relation="journal"> <title>The World Wide Web Journal</title> <volumenum>2</volumenum> <issuenum>1</issuenum>. <publishername>O'Reilly &amp; Associates, Inc.</publishername> and <corpname>The World Wide Web Consortium</corpname>. <pubdate>Winter, 1996</pubdate> </bibliomset>. </bibliomixed> </bibliography> Sections: Several sectioning elements. a. Sect1…Sect5 elements - the most common sectioning elements, which can occur in most component-level elements. These numbered section elements must be properly nested (Sect2s can only occur inside Sect1s, Sect3s can only occur inside Sect2s, and so on). b. Section element - an alternative to numbered sections. Sections are recursive, meaning that you can nest them to any depth desired. c. SimpleSect element - a terminal section that can occur at any level. SimpleSect cannot have any other sectioning element nested within it. d. BridgeHead element - a section title without any containing section e. RefSect1…RefSect3 elements - numbered section elements in RefEntrys f. GlossDiv, BiblioDiv, and IndexDiv elements - do not nest. Please see Table 4 and Table 5 for examples. Reference page: http://www.oreilly.com/catalog/docbook/chapter/book/section.html Meta-Information Elements – contain bibliographic information. All of the elements at the section level and above include a wrapper for meta-information about the content. 
Examples of meta-wrappers: BookInfo, ArticleInfo, ChapterInfo, PrefaceInfo, SetInfo, GlossaryInfo. <!DOCTYPE bookinfo PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <bookinfo> <title>XML – Managing Data Exchange</title> <authorgroup> <author> <firstname>Richard</firstname> <surname>Watson</surname> </author> <author> <firstname>Hendrik</firstname> <surname>Fischer</surname> </author> <author> <firstname>Rusen</firstname> <surname>Gul</surname> <affiliation> <orgname>University of Georgia</orgname> </affiliation> </author> </authorgroup> <edition>Introduction to XML - Version 1.0 </edition> <pubdate>1997</pubdate> <copyright> <year>1999</year> <year>2000</year> <year>2001</year> <year>2002</year> <year>2003</year> <holder> O'Reilly &amp; Associates, Inc. </holder> </copyright> <legalnotice> <para>Permission to use, copy, modify and distribute the DocBook DTD and its accompanying documentation for any purpose and without fee is hereby granted in perpetuity, provided that the above copyright notice and this paragraph appear in all copies. </para> </legalnotice> </bookinfo> Block vs. Inline Elements. There are two classes of paragraph-level elements: "block" and "inline". Block elements are usually presented with a paragraph break before and after them. Most can contain other block elements, and many can contain character data and inline elements. Examples of block elements are: paragraphs, lists, sidebars, tables, and block quotations. Inline elements are generally represented without any obvious breaks. The most common distinguishing mark of inline elements is a font change, but inline elements may present no visual distinction at all. Inline elements contain character data and possibly other inline elements, but they never contain block elements. They are used to mark up data. Some examples are: cross references, filenames, commands, options, subscripts and superscripts, and glossary terms. Block Elements - paragraph-level elements. 
The block elements occur immediately below the component and sectioning elements. Lists. <!DOCTYPE para PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <para>The capitals of the states of the United States of America are: <segmentedlist> <title>State Capitals</title> <segtitle>State</segtitle> <segtitle>Capital</segtitle> <seglistitem> <seg>Georgia</seg> <seg>Atlanta</seg> </seglistitem> <seglistitem> <seg>Alaska</seg> <seg>Juneau</seg> </seglistitem> <seglistitem> <seg>Arkansas</seg> <seg>Little Rock</seg> </seglistitem> </segmentedlist> </para> The capitals of the states of the United States of America are: State Capitals State: Georgia Capital: Atlanta State: Alaska Capital: Juneau State: Arkansas Capital: Little Rock <small>Table 13: <orderedlist> element, "mashpotatoe.xml"</small> <!DOCTYPE para PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <para> <orderedlist numeration="upperroman"> <listitem> <para>Preparation</para> <orderedlist numeration="upperalpha"> <listitem><para>Chop tomatoes</para> </listitem> <listitem><para>Peel onions</para> </listitem> <listitem><para>Mash potatoes</para> </listitem> </orderedlist> </listitem> <listitem> <para>Cooking</para> <orderedlist numeration="upperalpha"> <listitem><para>Boil water</para> </listitem> <listitem><para>Put tomatoes and onions in </para></listitem> <listitem><para>Blanch for 5 minutes</para> </listitem> </orderedlist> </listitem> </orderedlist> </para> I.Preparation<br>     A.Chop tomatoes<br>     B.Peel onions<br>     C.Mash potatoes<br> II.Cooking<br>     A.Boil water<br>     B.Put tomatoes and onions in<br>     C.Blanch for 5 minutes<br> Admonitions. There are five types of "admonition"s: Caution, Important, Note, Tip, and Warning. <small>Table 15: <caution> element, "caution.xml"</small> <!DOCTYPE caution PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <caution> <title>This is a caution</title> <para>Be careful while opening the box!</para> </caution> Line-specific environments. 
"Line-specific environments" preserve whitespace and line breaks. <!DOCTYPE blockquote PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <blockquote> <attribution>Rudyard Kipling, <citetitle>If</citetitle> </attribution> <literallayout> If you can force your heart and nerve and sinew To serve your turn long after they are gone, And so hold on when there is nothing in you Except the Will which says to them: Hold on! </literallayout> </blockquote> Common block-level elements. "Common block-level elements" include examples, figures, and tables. The distinction between formal and informal elements is that formal elements have titles while informal ones do not. Example, InformalExample <!DOCTYPE example PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <example> <title>Sample code</title> <programlisting>print "Hello, world!"</programlisting> </example> Figure, InformalFigure <!DOCTYPE figure PUBLIC "-//OASIS//DTD DocBook V4.2//EN"> <figure> <title>Revenues for Q1</title> <mediaobject> <imageobject> <imagedata fileref="q1revenue.jpg" format="JPG"/> </imageobject> </mediaobject> </figure> Table, InformalTable <!DOCTYPE webpage SYSTEM "../website.dtd" [ <!NOTATION XML SYSTEM "xml"> <!ENTITY test1a SYSTEM "test1a.xml" NDATA XML> <!ENTITY test3 SYSTEM "test3.xml" NDATA XML> <!ENTITY about.xml SYSTEM "about.xml" NDATA XML>]> <webpage id="home"> <config param="desc" value="The Test Home Page"/> <config param="rcsdate" value="$Date: 2001/11/08 20:44:20 $"/> <config param="footer" value="about.html" altval="About..."/> <head> <title>Welcome to Website</title> <summary>Introduction</summary> <keywords>Rusen Gul, XSL, XML, DocBook, Website</keywords> </head> <para> This website demonstrates the DocBook.</para> <webtoc/> <section> <title>What is a Website?</title> <para>A website is a collection of pages organized, for the purposes of navigation, into one or more hierarchies. 
In Website, each page is a separate XML document authored according to the Website DTD, a customization of <ulink url="http://www.oasis-open.org/docbook/">DocBook</ulink>.</para> </section> </webpage> Why use DocBook? This certainly looks like too much work, doesn’t it? You’re not wrong. Why do we bother to use DocBook then? DocBook is well suited to any collection of technical documentation that is regularly maintained and published. Multiple authors can contribute to a single document, and their content can easily be merged because all the authors are using a highly structured, standard markup language. Just one little point to keep in mind: because the formatting for DocBook documents is strictly accomplished by stylesheets, DocBook is not well matched to highly designed layout-driven content like magazines. Setting up a DocBook system will certainly take some time and effort. The payoff will be an efficient, flexible, and inexpensive publishing system that is iterative and that can grow with your needs. Therefore, it is worth the effort! DocBook Filters - Reading and Writing DocBook XML Using OpenOffice.org. The goal of the project is to use OpenOffice.org as a WYSIWYG editor of XML content to edit structured documents using styles. When exported, these styles are then transformed to XML tags. This section shows you how to enable and use DocBook filters. Below are some links to stylesheets that can be downloaded to use the latest transformations. Enabling the DocBook XSLTs in OpenOffice.org 1.1 Beta 2/RC. There are three different ways to enable the DocBook filters. The most recent stylesheets support the import and export of DocBook documents with article or chapter as the top-level tag. 
The different stylesheets required for each of these operations are listed below: OpenOffice.org Template required for Article and Chapter documents: Creating a new DocBook filter. To create a DocBook Article filter, the above steps can be repeated with article replacing chapter. This method is more convenient; however, there is no guarantee that the most recent stylesheets and OpenOffice.org template will be used. The DocBook UNO component adds filter support for the retention of unresolved XML entities. How to Import a DocBook document. A DocBook article or chapter document can now be opened using the File -> Open dialog. The DocBook XSLT filter should automatically determine the root element of the document and import it with the matching XSLT filter. Alternatively, it is possible to browse manually to the desired DocBook filter in the File Type combo-box in the File -> Open dialog. How to Export a DocBook document. The DocBook document can also be exported using the File -> Save As dialog. Again, the DocBook XSLT filter should automatically determine the file type and export with the matching XSLT filter. Alternatively, it is possible to browse manually to the desired DocBook filter in the File Type combo-box in the File -> Save As dialog. Using OpenOffice.org Headings and Styles for different DocBook tags. Using OpenOffice.org styles to represent DocBook tags The style template supplies all of the custom styles that are currently supported. Once a DocBook document has been imported to OpenOffice.org, the available DocBook specific styles can be viewed using the Stylist. On import, each of the supported DocBook tags will be mapped to formatted OpenOffice.org content. Similarly, to modify the imported DocBook document, OpenOffice.org text styles can be used to represent the DocBook tags marking up the text. NOTE: A new DocBook document can be created in OpenOffice.org by opening the DocBookTemplate.stw. 
The document can then be saved as a DocBook document, and the new content will be represented as DocBook mark-up. How to create new DocBook content: How to create DocBook sections: Initially the DocBook project used OpenOffice.org sections to enforce the nesting of DocBook sections. Feedback has shown that authors wish to use the common word processing styles such as Heading1, Heading2, etc. The following instructions describe how to create a <sect1> that contains a <sect2>. Navigating through the document: If you wish to see how DocBook sections are nested as OpenOffice.org headings, use the F5 key to display the Navigator window. Expand the headings tag to display the layout of the headings within the document. You can skip to the start of a given DocBook section/OpenOffice.org heading by double-clicking on it.
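As a sketch of the nesting this describes (the titles and paragraph text here are hypothetical, chosen only for illustration), a <sect1> containing a <sect2> looks like this in DocBook markup:

```xml
<sect1>
  <title>Top-level Section</title>
  <para>Introductory text for the first-level section.</para>
  <sect2>
    <title>Nested Section</title>
    <para>A second-level section may appear only inside a first-level one.</para>
  </sect2>
</sect1>
```

In OpenOffice.org, applying Heading1 and Heading2 styles to the two titles produces exactly this nesting on export.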
Basic Electrical Generation and Distribution. This document is about the everyday use of electricity in a household. Many circuits are a mixture of electrical, mechanical, and electronic components, which interact in different ways to produce strange and useful effects. Topics include commercially generated AC as well as AC generated from inverters for alternative power use (such as off-the-grid homes, cabins or recreational vehicles). Electricity has become an integral part of life, and it is difficult to imagine being without it. Distribution and Domestic Power Supply. Alternating current is used for electric power distribution because it can easily be transformed to a higher or lower voltage. Electrical energy losses are dependent on current flow. By using transformers, the voltage can be stepped up so that the same amount of power may be distributed over long distances at lower currents, and hence lower losses due to the resistance of the conductors. The voltage can also be stepped down again so it is safe for domestic supply. Three-phase electrical generation and transmission is common and is an efficient use of conductors, as the current-rating of each conductor can be fully utilized in transporting power from generation through transmission and distribution to final use. Three-phase electricity is supplied only to industrial premises, and many industrial electric motors are designed for it. Three voltage waveforms are generated that are 120 degrees out of phase with each other. At the load end of the circuit the return legs of the three phase circuits can be coupled together at a 'neutral point', where the three currents sum to zero if supplied to a balanced load. This means that all the current can be carried using only three cables, rather than the six that would otherwise be needed. Three phase power is a type of "polyphase" system. In most situations only a single phase is needed to supply street lights or residential consumers. 
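The balanced-load claim above (three currents 120 degrees apart summing to zero at the neutral point) can be checked numerically. This is a minimal sketch with an arbitrary 10 A amplitude and 50 Hz frequency, not a model of any real network:

```python
import math

def phase_current(t, amplitude=10.0, freq=50.0, phase_deg=0.0):
    """Instantaneous current of one phase, in amperes, at time t seconds."""
    return amplitude * math.sin(2 * math.pi * freq * t + math.radians(phase_deg))

# Sample a few instants in the cycle and sum the three phases at each one.
for t in (0.0, 0.003, 0.0123):
    total = sum(phase_current(t, phase_deg=p) for p in (0, 120, 240))
    assert abs(total) < 1e-9  # balanced load: the three currents cancel
```

An unbalanced load (different amplitudes per phase) would leave a nonzero residual, which is exactly the current the neutral cable has to carry.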
When distributing three-phase electric power, a fourth or neutral cable is run in the street distribution to provide one complete circuit to each house. Different houses in the street are placed on different phases of the supply so that the "load is balanced", or spread evenly, across the three phases when consumers are connected. Thus the supply cable to each house can consist of a live and neutral conductor, with possibly an earthed armoured sheath. In North America, the most common technique is to use a transformer to convert one distribution phase to a center-tapped 'split-phase' 240 V winding; the connection to the consumer is typically two 120-volt power lines out of phase with each other, and a grounded 'neutral' wire, which also acts as the physical support wire. In India there is a recent trend of providing a high-voltage line up to the residence and then stepping it down to domestic power on the premises, to avoid pilferage of energy. Although this method has certain advantages, there are obvious potential dangers associated with it. The use of "split phase" power, two 120-volt power lines out of phase with each other, as described above, allows high-powered appliances to be run on 240 V, thus decreasing the amount of current required per phase, while allowing the rest of the residence to be wired for the safer 120 V. For example, a clothes dryer may need 3600 W of power, which translates to a circuit rating of 30 A at 120 V. If the dryer can instead be run on 240 V, the service required is only 15 A. Granted, you would then need two 15 A circuit breakers, one for each side of the circuit, and you would need to provide two 'hot' lines, one neutral, and a ground in the distribution wiring, but that is offset by the lower cost of the wires for the lower current. 
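The dryer arithmetic above follows directly from I = P / V. A minimal check (the 3600 W figure comes from the example in the text):

```python
def circuit_current(power_watts, voltage):
    """Current drawn by a resistive load: I = P / V (amperes)."""
    return power_watts / voltage

dryer_power = 3600  # watts, as in the example above
assert circuit_current(dryer_power, 120) == 30.0  # amperes on a 120 V circuit
assert circuit_current(dryer_power, 240) == 15.0  # amperes on a 240 V circuit
```

Halving the current also quarters the resistive loss in the wiring (loss scales as I squared times R), which is the same reason transmission lines use high voltage.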
Houses are generally wired so that the two phases are loaded about equally; connecting the high-power appliances such as clothes dryers, kitchen ranges, and built-in space heaters across both phases helps to ensure that the loads will remain balanced across the two phases. For safety, a third wire is often connected between the individual electrical appliances in the house and the main electric switchboard or fusebox. The third wire is known in Britain and most other English-speaking countries as the "earth wire", whereas in North America it is the "ground wire". At the main switchboard the earth wire is connected to the neutral wire and also connected to an earth stake or other convenient earthing point (to Americans, the "grounding point") such as a water pipe. In the event of a fault, the earth wire can carry enough current to blow a fuse and isolate the faulty circuit. The earth connection also means that the surrounding building is at the same voltage as the neutral point. The most common form of electrical shock occurs when a person accidentally forms a circuit between a live conductor and ground. A "residual-current circuit breaker" (also called a Ground Fault Interrupter, GFI, or Ground Fault Circuit Interrupter, GFCI) is designed to detect such a problem and break the circuit before electric shock causes death. As many parts of the neutral system are connected to the earth, balancing currents, known as "earth currents", may flow between the distribution transformer, the consumer, and other parts of the system that are also earthed; this acts to keep the neutral voltage at a safe level. This system of earthing the neutral points to balance the current flows for safety reasons is known as a "multiple earth neutral system". Overcurrent protection. In households, circuit breakers or fuses are used to switch off the supply of electricity quickly if the current is too large; for example, there is a limit of 15 amperes in a normal 115/120 volt circuit. 
Unfortunately in some cases this 'protection' can have a cascading effect, because the switching-off of one circuit can lead to an overload of adjacent circuits that may switch off later. "Blackouts" can be the result if further failures occur. The amount of time taken to restore generation and reestablish that balance depends on the type of generation (thermal, hydroelectric, nuclear, or other) available; after a "blackout" it can take many hours to restore the system. Single phase electric power. The generation of AC electric power is commonly "three phase", in which the waveforms of three supply conductors are offset from one another by 120°. The design of the power generators has three sets of coils placed 120 degrees apart rotating in a magnetic field. This creates three separate sine waves of electricity that are displaced from each other in time by 120 degrees of rotation (1/3 of a circle). Standard frequencies are either 50 Hertz (cycles per second) in Europe or 60 Hertz in North America. The voltage across any pair of these three conductors, or between a single conductor and ground (in a grounded system), is what is known as "single phase" electric power. "Single phase" power is what is commonly available to residential and light-commercial consumers in most distribution power grids. In North America, the single phase that is supplied is developed across a transformer coil at the utility pole (for aerial drop) or transformer pad (for underground) distribution. This single coil is center tapped and the tap is grounded. This creates a 120/240 volt system that is delivered to the customer. The voltage from either side of the coil to the center tap (ground) is 120 volts, whereas the voltage between the two conductors on either end of the coil develops the full voltage of 240 volts. Inverters and Battery Based AC. An "inverter" is a circuit for converting direct current to alternating current. 
An inverter can have one or two switched-mode power supplies (SMPS). Early inverters consisted of an oscillator driving a transistor as an on/off switch, which interrupts the incoming direct current to create a square wave. This is then fed through a transformer to smooth the square wave into a sine wave and to produce the required output voltage. More efficient inverters use various methods to produce an approximate sine wave at the transformer input rather than relying on the transformer to smooth it. Capacitors can be used to smooth the flow of current into and out of the transformer. It is also possible to produce a more sinusoidal wave by having split-rail direct current inputs at two voltages (positive and negative inputs with a central ground). By connecting the transformer input terminals in a timed sequence between the positive rail and ground, the positive rail and the negative rail, the ground rail and the negative rail, then both to the ground rail, a 'stepped sinusoid' is generated at the transformer input and the current drain on the direct current supply is less variable. "Modified Sine Wave inverters" convert the (usually 12 V DC) battery voltage to high frequency (20 kHz) AC, so that a smaller transformer can be used for stepping up to a higher voltage (say 160 V) AC. This output is converted to DC at the same voltage, and then inverted again to a quasi-sine-wave output (about 120 V RMS). A disadvantage of the modified sine wave inverters is that the output voltage depends on the battery voltage. It is quite difficult to obtain a good sine wave from an inverter. The quoted harmonic distortion for most is less than 60%, and it will have an effect on the appliances connected to the output of the inverter. This "might" mean noisy operation in some appliances and/or damaged electric motors, because they will run less efficiently and could overheat. 
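To see why a crude square-wave output distorts so badly, the total harmonic distortion (THD) of an ideal square wave can be computed from its Fourier series, whose odd harmonics have relative amplitude 1/n. This is a sketch of the textbook formula, not a measurement of any particular inverter:

```python
import math

def square_wave_thd(max_harmonic=10001):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.

    An ideal square wave contains only odd harmonics, with amplitude 1/n
    relative to the fundamental, so the fundamental amplitude here is 1.
    """
    distortion_power = sum((1.0 / n) ** 2 for n in range(3, max_harmonic + 1, 2))
    return math.sqrt(distortion_power)

print(f"ideal square wave THD ≈ {square_wave_thd():.1%}")  # roughly 48%
```

A stepped or modified sine wave lands somewhere between this figure and the near-zero THD of a utility sine wave, which is consistent with the large distortion figures quoted above for inexpensive inverters.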
High end inverters (> $2,000) produce waveforms which are closer to the sine wave produced by a utility. Batteries. Most home systems use conventional lead-acid batteries for storage. They are cheap, and they are deep cycle batteries, "i.e.", they can be discharged completely and charged again many times. You cannot use automobile batteries in inverters, as they are designed only to provide a large starting current and are not meant to be discharged completely. Lead-acid batteries have the disadvantage that they have to be replenished with distilled water every few months, and if a battery dries out, it cannot be repaired. However, they can provide the large surge currents which are required by many loads (such as induction motors) which may be connected to the system. Switched Mode Power Supply. A "switched-mode power supply", or SMPS or switching regulator, is an electronic power supply circuit that attempts to produce a smoothed, constant-voltage output from a varying input voltage. Switched-mode power supplies may be designed to convert from alternating current or direct current, or both. They generally output direct current, although an inverter is technically a switched-mode power supply. Switched-mode power supplies operate by using an inverter to convert the input direct current supply to alternating current, usually at around 20 kHz. If the input is alternating current but at a lower frequency (such as 50 Hz or 60 Hz line power) then an inverter is still used to bump the frequency up. This high frequency means that the output transformer of the inverter will operate more efficiently than if it were run at 50 Hz or 60 Hz, due to hysteresis in the transformer core, and the transformer will not need to be as large or heavy. This high-frequency output is then fed through a rectifier to produce the output direct current. Regulation is achieved through feedback. 
The output voltage is compared to a reference voltage, and the result is used to alter the switching frequency or duty cycle of the inverter oscillator, which affects its output voltage. Switched-mode PSUs in domestic products such as personal computers often have universal inputs, meaning that they can accept power from most mains supplies throughout the world, with frequencies from 50 Hz to 60 Hz and voltages from 100 V to 240 V. Unlike most other appliances, switched mode power supplies tend to be constant power devices, drawing more current as the line voltage decreases. This may cause stability problems in some situations such as emergency generator systems. Also, maximum current draw occurs at the peaks of the waveform cycle. This means that basic switched mode power supplies tend to produce more harmonics and have a worse power factor than other types of appliances. However, higher-quality switched-mode power supplies with power-factor correction (PFC) are available, which are designed to present close to a resistive load to the mains. The term power factor with respect to switched-mode supplies is misleading, as it doesn't have much to do with leading or lagging voltage, but rather with the way in which the supply loads the circuit ("i.e." only at certain points in the cycle). There are several types of switched-mode power supplies, classified according to the circuit topology. Major Classes of Appliances. Single-Phase AC motors. The most common single-phase motor is the shaded-pole synchronous motor, which is typically used in devices requiring lower torque, such as electric fans, microwave ovens and other small household appliances. Another common single-phase AC motor is the induction motor, commonly used in major appliances such as washing machines and clothes dryers. These motors can generally provide greater starting torque by using a special startup winding in conjunction with a starting capacitor and a centrifugal switch. 
When starting, the capacitor and special winding are temporarily connected to the power source and provide starting torque. Once the motor reaches speed, the centrifugal switch disconnects the capacitor and startup winding. Shaded-pole synchronous motor. Shaded-pole synchronous motors are a class of AC motor that uses single phase electric power to convert electric power to mechanical energy. They work by using a squirrel-cage rotor and a split stator that has copper shorting rings placed on it so as to shade a portion of the stator's magnetic field enough to provide starting torque. The number of poles in an induction motor is an important factor in its interaction with non-sine-wave input. As a rule of thumb, motors with a larger number of poles are more sensitive to harmonic distortion. Incandescent Lamps. Early lighting applications used lamps with a heated filament to provide light. The filament was made of tungsten and was placed inside a near-vacuum glass enclosure. While such a lamp was cheap, it produced a lot of heat, which also made it inefficient. Note that the incandescent bulb is a purely resistive load (power factor 1). Inrush Current. The incandescent bulb is designed to operate at high temperatures. At normal operating temperatures, a tungsten filament has a resistance nearly 20 times its room-temperature resistance. So when a bulb is turned on, it draws a current nearly 20 times the normal current until it warms up. This current surge is called the "inrush current", which lasts for 30-100 milliseconds. Again, something different from the "dumb load" point of view. Thus, five 100 W bulbs in parallel, which would consume just 500 W in normal circumstances, will have an inrush load of more than 10000 W. More importantly, a huge current flows, and it is important that all components on the line can carry the current. For larger lamps, a small current flows to keep the lamp at a reasonable temperature, called the "keep alive". Evaporation. 
Another factor often overlooked in lamps is how the resistance changes over time. For an incandescent lamp, the power is proportional to the area. The tungsten slowly evaporates as the bulb ages, so that the power (and hence the light) produced by the lamp drops. Further, the light drops at about 5 times the rate of the power drop, so that the lamp becomes very inefficient with age. After running for 75% of its rated life, an incandescent lamp must produce more than 93% of its initial light output in order to pass the standard test described in IEC Publication 60064. Voltage and Efficiency. The efficiency of an incandescent lamp is measured in terms of the amount of light produced per watt of power consumed. As the temperature of the lamp decreases, the light output per watt decreases. Thus, at a lower voltage (brownout), the efficiency of the lamp is very low. The tungsten filament's normal operating temperature is selected to minimize the net cost of running lighting fixtures, balancing efficiency and lifetime. Hotter filament temperatures cost more because they wear out the filament faster and require more frequent replacements. Colder filament temperatures cost more because they require more electrical power for a given amount of visible light. The luminous efficiency of any black-body radiator increases with temperature up to 6300 °C (6600 K or 11,500 °F). Tungsten melts at 3695 K (6192 °F), where it, like any black-body radiator, would theoretically have a luminous efficiency of 52 lumens per watt. A 50-hour-life projection bulb is designed to operate at 50 °C (90 °F) below that melting point, where it may achieve up to 22 lumens/watt. A 1000-hour-lifespan general service bulb typically operates at 2000 K to 3300 K (about 3100-5400 °F), achieving 10 to 17 lumens/watt. 
As you increase the voltage V applied to an incandescent bulb, it puts out more light -- roughly proportional to the fourth power of V -- but the life of the bulb then decreases roughly as the eighth power of V. Fluorescent Lamps. The tungsten lamp has been replaced in most applications by fluorescent lamps. Fluorescent lamps have a power factor close to 0.25. Fluorescent lamps are typically rated at about 40 W, and they provide much more light (about 5 times as much) than an incandescent lamp of the same wattage. They also give out less heat. Passive Control. Early fluorescent lamps used a ballast (also called a choke coil), essentially an inductor, to control the current in the lamp. The lamp was started using a starter, essentially a small neon glow lamp with a heat-operated contact that closes the circuit when it warms up. With the choke coil in series, the running lamp has a relatively small voltage drop across it, so the starter does not close again. As the starter is in parallel with the lamp, the same starter can be used to start several lamps. One particularly annoying aspect of the electromagnetic ballast is the flicker it produces. While it does not bother most people, some find it extremely irritating. Also, the electromagnetic ballast increases power consumption by about 25% when on utility power. Active (Electronic) Control. Modern lamps use electronic circuits to control the current, so that both the starter and the choke coil become redundant, and they behave much better on both inverter-based and utility power. Many electronic ballasts boost the frequency to something in the range of 20 kHz, so that there is no flicker problem. CRT Based Appliances. The other major source of power consumption is CRTs (Cathode Ray Tubes), such as computer monitors and televisions. Computer Towers. The tower of a modern computer draws its power from an SMPS, which is detailed below. 
The most popular computers today (running P4s and 3D cards) consume several hundred watts of power. Other Electronic Loads. Other electronic items in a household draw their power from the mains using a wall wart. The steady-state power consumed by each is quite low, and in many cases (printers, scanners, etc.) they do not run continuously. Control Elements. Control elements are the switches, dimmers, and regulators connected to the circuit. They are, by their very nature, non-linear elements; their behaviour is quite complicated and not well represented by their simple schematic symbols. Light Dimmers. Light dimmers work by cutting off parts of the input sine wave. While this works for resistive loads, even there it has side effects. Energy Meters. Most households are on the grid, "i.e." their electricity comes from a utility, which installs an energy meter on the premises. The meter is then read either manually or over a phone-line connection to the utility offices. The utility wants your power factor to be as close to 1 as possible, and businesses are penalized if they cannot achieve a target set by the utility, as the transmission losses are nearly the same for both active and reactive power. For home users no such rule exists, and it is interesting to watch the changes in consumption patterns now that most home electricity use is not lighting, and even the lighting is by fluorescent lamps, which are not resistive in nature. The utility charges home users only for active power, so a low power factor is not an economic issue for them, and transmission losses within the household are negligible. Mechanical Energy Meters. Mechanical energy meters are discussed in high-school physics books as applications of "Lenz's law", "viz." the generation of eddy currents which oppose the change that caused them. 
The number of revolutions of a metal disc between the poles of an electromagnet represents the amount of energy consumed. These are more accurately described as electro-mechanical meters, as they use mechanical components such as a spinning disc to measure the energy consumed. Electronic Energy Meters. Electronic meters work by sampling the voltage and current (the latter often via a shunt resistor) and accumulating their product over time. The unit of measurement in the meter is the pulse, the smallest quantum of energy the meter registers. The pulses are calibrated in terms of kilowatt-hours of electricity, typically 3200 pulses per unit. Apart from the numbered-wheel display found in mechanical meters, the energy consumed is also recorded in chips inside the meter, so tampering can be detected. Lightning. Lightning is a major cause for concern for a home user. Lightning is effectively an immense current source which discharges itself through anything it can find. Proper lightning control and defense is very tricky, and improper methods can increase the risk to people and equipment. A simple lightning arrestor consists of a choke in series with the loads, with a grounded spark gap in parallel. When lightning strikes, the pulse is almost a square wave, and the choke acts as a large resistance. At the same time, the large voltage generated causes the air across the spark gap to break down, and it acts as a short,
Guitar/Lead Guitar and Rhythm Guitar. The terms lead guitar and rhythm guitar are mildly confusing, especially to the beginner. Of course, a guitar should almost always follow some sort of rhythm, whether loose or tight. And many times a guitar is very prominent in a song, driving the music, without quite being the lead. Sometimes the lead guitarist doesn't even play a lead part, and that happens all the time! How can we untangle this mess? The distinction is somewhat arbitrary. Many bands in contemporary music have two guitarists, where usually one specializes in "lead" and the other in "rhythm". The Beatles, Dethklok, and Metallica are examples of bands who use this combination. Lead guitar means melody guitar: the lead guitarist specializes in playing the melody of the song. A lead guitarist may get to solo, but someone cannot be called a lead guitarist simply because he or she plays a solo in a song. A lead part contributes entirely to the melody (as lead guitar means melody guitar), rather than to the foundation, which is carried by the rhythm guitar. This makes the rhythm guitarist the driving source. Lead guitar uses few or no chords, although it sometimes follows a chord structure, while rhythm guitar uses chords to drive the music. It is important to realize that lead guitar and rhythm guitar fit into two different parts of a band; it just happens that they are played on the same instrument. Lead guitar provides a solo voice, and is grouped with the lead vocals, lead piano, etc. Rhythm guitar is part of the underlying rhythm section, along with instruments like bass, drums, sometimes piano, and background vocals. Generally speaking, the rhythm provides the groove of the song, while the lead provides the melody. However, these distinctions get fuzzy, especially when so-called lead guitarists play chords and double-stops in their riffs. 
In some cases, a single guitar part provides both the melody and the accompaniment (especially power-chord riffs, commonly found in rock and metal, and finger picking, found in folk guitar). Some bands (often three-piece bands) feature a single guitarist who can act as either, by assuming one role at a time or, in a recording studio, by recording a lead track over their own rhythm track. For example, the band Dire Straits has been in both situations: in the early days, David Knopfler played rhythm while Mark Knopfler played lead. When David left, Mark usually played both parts on studio albums, and hired another guitarist to play rhythm for live shows. Some guitarists have reached such technical proficiency that they are able to play both parts "simultaneously". A famous example of this technique is Dimebag Darrell, particularly on songs such as Walk or Breathing New Life (using a harmonizing effect pedal). The bass line often plays a large part in setting the speed and tone of a song, as in Seven Nation Army by The White Stripes (even though there is no actual bass guitar on the track; the part is played on a pitched-down electric guitar); in the chorus, the lead (or melody) guitarist follows the bass and drums, not the other way around. Playing Lead Guitar. Very often, a lead guitar part is played on an electric guitar, using moderate to heavy distortion (also known as drive or gain). For this reason, many amplifier manufacturers refer to their distortion channel as a lead channel. Distortion provides more powerful sustain than a clean channel, and this is often best shown in extreme techniques like shredding and tapping, which some guitarists feel can only properly be done with distortion. Of course, lead guitar can be played on an acoustic guitar, but some techniques may not be as pronounced as on an electric. The most common techniques for creating lead parts are bending, vibrato and slides. 
These provide the basic means of emphasizing notes, and allow for greater expression in the melody. Often the lead guitarist may employ arpeggios or sweep picking to add depth, and the progression of the solo often mirrors the underlying rhythm guitar part. Playing Rhythm Guitar. Rhythm guitar is characterized mostly by playing chords in patterns. Some players criticize rhythm guitar as sounding "chordy", or not being as interesting as the lead part. Although rhythm guitar does not "express" as much as the lead guitar, there is a great deal to be learned about chords, chord progressions and rhythm patterns, and a player is limited only by their imagination. Rhythm guitar is just as easily played on electric or acoustic, clean or distorted. The technique is less about expressing individual notes, and more about choosing chords or chord voicings that enrich the overall sound, which can add its own expressive tone to the music.
Visual Language Interpreting/Introduction. A few assumptions. Visual language interpreters may be found in nearly every part of public life, so it is expected that those who approach this text will rarely do so without some opinion and background knowledge. Opinions and backgrounds vary widely, however, so it makes sense to outline the assumptions made by the authors of this book with respect to its readership. What this book is for. This book is written for those with a strong interest in interpreting where at least one of the languages is visual (more on that below). Many readers will probably be on the cusp of, or in the process of, formal education to become an interpreter. Others may be consumers of interpreting services who wish to have a greater appreciation and understanding of the interpreter's task. The expectation is that most readers will know at least one visual language in addition to a spoken language. For those wishing to become interpreters, it is expected that command of the spoken and visual languages will approach that of an educated native speaker; if there are any deficiencies in either language, it is vital for potential interpreters to make the necessary effort to remedy them. What this book is "not" for. There was a time in the not-too-distant past when there were no courses on interpreting, no textbooks, no linguistic research. But there were still interpreters. So it is possible to be an interpreter, and a skilled one, without any hint of formal training. But what is possible is not always what is best, and in this case it is possible to be an interpreter without training in the way that it is possible to row a small boat across an ocean. The benefits of formal training outweigh any cost, and the costs of avoiding formal training are not offset by any benefit. 
Neither this nor any other book can substitute for a serious multi-year course of instruction by experts skilled both in the art of interpretation and in the art of teaching it. Interpreting is not a "teach-it-yourself" occupation; the more experienced eyes are on you, and the more feedback and instruction you receive, the greater the chance that you will excel. This is also not a textbook on a visual language. Although many examples of proper translations of sentences into visual languages are given, they are given in the context of the interpreting process. Natural conversation in a language is a qualitatively different cognitive task, and the two should not be confused. There are settings in which communication occurs in a visual medium, such as underwater among SCUBA divers or in a military or police operation. This is not a book about interpreting in those situations. Why "Visual Language?". Given the fact that interpreters typically work between a spoken and a signed language, the term "visual language" may seem unnecessarily convoluted. Why not "sign language interpreting" or "interpreting for the deaf"? The short answer (which will be elaborated on in the following chapters) is that such interpreting is not simply for those who are deaf. ("Note:" following common convention, "Deaf" is capitalized when referring to members of a linguistic and cultural group which uses a signed language as its first language.) Not only does the phrase "interpreting for the deaf" place the deaf in the position of being helpless, it ignores the fact that in situations where there are deaf and hearing consumers of interpreting services, "both" sides are in need of such services. In fact, there are situations in which both consumers are hearing, but one may have a disorder which necessitates the use of a visual language. Furthermore, not all visual languages are signed languages. 
There are systems which, although they do not qualify as signed languages, are used under the same circumstances as those encountered by sign language interpreters. For this reason, this book uses "visual language", a decision which mirrors that of bodies such as the Association of Visual Language Interpreters of Canada. "Field" or "Profession?". It is often said that visual language interpreting is a profession, and many interpreters, when discussing professional practices, compare interpreting to law, medicine, or another field of endeavor within the same societal rank. Others agree that while that is a worthy goal, a profession has a complicated body of knowledge which takes years to master. Interpreting certainly is a complex and demanding task which is acquired over years; the best interpreters are still learning. Still, there is no broad agreement on how that knowledge is to be effectively transmitted. Furthermore, the level of formal training required is still a matter hotly debated within the profession. The Registry of Interpreters for the Deaf (RID) has recently passed a motion phasing in a degree requirement over the coming decade: first an associate's degree, then a bachelor's degree, will be required. There are also indications of growing pains in other areas which, the argument goes, are signs that visual language interpreting is not yet a profession in the strict sense. Still others assert that aspiring to professional status is elitist, and that visual language interpreting has done fine without being in such highfalutin company as the learned professions. Furthermore, they would argue, formal education is no guarantee of quality, and to pretend otherwise ignores current facts on the ground. Many interpreters are "home-grown", and provide superior service. To them, interpreting is a trade, learned by long apprenticeship, but not necessarily in a college classroom. 
Each of these viewpoints (and others) has powerful arguments both for and against it, and many of the authors of this text hold to one or another of these views. In the interests of harmony, and while the debate is still far from concluded, this textbook will use the more neutral term "field", which may be applied equally to medicine and cosmetology. Conventions Used in this Book. Since sections of this book will be concerned with a particular visual language, namely American Sign Language (ASL), there are times when it will be necessary to represent utterances in that language. To that end, every effort will be made to include videos and illustrations as needed. However, sometimes the use of glosses will suffice, and the standard glossing conventions found in most ASL textbooks and research journals will be used. The glosses will be written in the format given below: _cond_______________________________________________ _[smiling]__ SUPPOSE IX.2 READ IX."glossing on screen" UNDERSTAND, NONE PROBLEM.
Introduction to Psychology/Child and Adolescent Psychology. Child psychology is a field of study in which researchers work to understand and describe changes that take place as children grow. Some sections within this wikibook are translations from a French wikibook.
Introduction to Psychology/Child and Adolescent Psychology/Introduction. Introduction. This work is addressed to all students who aim to practice in the fields - both vast and closed - of social psychology, psychological education, pedagogy, or other psychologies; that is to say, in any domain where the "rules of psychology" hold sway. Hence, this work will also interest any neophyte with an interest in psychology. Many occupations, especially recent ones, seek to distinguish themselves through the hermeticism of their linguistic code, and this is largely the case in the professional field described above. "Psych" language is the privilege of "initiated" professionals and sets them apart from the other participants in this domain, who are nevertheless more and more numerous. If we wish "the psychological fact" to be considered a tool, and the most effective tool at our disposal in our professional domain, we must strip away its hermetic character and explain clearly what we know with certainty, what we know by intuition, and what we must submit to deeper study before integrating this or that fact into the corpus of established knowledge. The goal of this work is twofold: on the one hand, to describe as clearly as possible the present state of our knowledge in the field of child and adolescent psychology, and on the other, to allow future teachers, educators of young children, medico-educational aides (aides médico-pédagogiques), etc., to take stock of their knowledge. 
This book comprises three parts: With this book, the reader can always find: If this book manages to accomplish these goals and, moreover, to transmit even a little of the pleasure of understanding, for example, what goes on when a child plays before our eyes or tries to grasp a kind of reasoning until then unknown; if the reader could, in addition to everything mentioned above, sense the complexity - beneath the apparent simplicity - of these "human adventures" - then the goal of this work will have been achieved.
Python Programming/Input and Output. Input. Python 3.x has one function for input from the user, input(). By contrast, legacy Python 2.x has two functions for input from the user: input() and raw_input(). There are also very simple ways of reading a file and, for stricter control over input, reading from stdin if necessary. input() in Python 3.x. In Python 3.x, input() asks the user for a string of data (ended with a newline), and simply returns the string. It can also take an argument, which is displayed as a prompt before the user enters the data. E.g. print(input('What is your name?')) displays the prompt and then prints the entered name back out. Example: to assign the user's name, i.e. string data, to a variable "x" you would type x = input('What is your name?') In legacy Python 2.x, the above applies to what was the raw_input() function, and there was also an input() function that behaved differently, automatically evaluating what the user entered; in Python 3, the same would be achieved via eval(input()). Links: input() in Python 2.x. In legacy Python 2.x, input() takes the input from the user as a string and evaluates it. Therefore, if a script says: x = input('What are the first 10 perfect squares? ') it is possible for a user to input: map(lambda x: x*x, range(10)) which yields the correct answer in list form. Note that no inputted statement can span more than one line. input() should not be used for anything but the most trivial program, for security reasons. Turning the strings returned from raw_input() into Python types using an idiom such as: x = None while not x: try: x = int(raw_input()) except ValueError: print('Invalid Number') is preferable, as input() uses eval() to turn a literal into a Python type, which allows a malicious person to trivially run arbitrary code from inside your program. Links: File Input. File Objects. To read from a file, you can iterate over the lines of the file using "open": f = open('test.txt', 'r') for line in f: print(line[0]) f.close() This will print the first character of each line. 
A newline is attached to the end of each line read this way. The second argument to "open" can be 'r', 'w', or 'a', among some others. The newer and better way to read from a file: with open("test.txt", "r") as txt: for line in txt: print(line) The advantage is that the opened file will close itself after finishing the part within the "with" statement, and will do so even if an exception is thrown. In CPython, files are also closed automatically when the file object is garbage-collected, so the loop in the previous code can also be written as: for line in open('test.txt', 'r'): print(line[0]) though relying on this behaviour is discouraged in portable code. You can read a specific number of characters at a time: c = f.read(1) while len(c) > 0: if len(c.strip()) > 0: print(c) c = f.read(1) This will read the characters from f one at a time, and then print them if they're not whitespace. A file object implicitly contains a marker to represent the current position. If the file marker should be moved back to the beginning, one can either close the file object and reopen it or just move the marker back to the beginning with: f.seek(0) Standard File Objects. There are built-in file objects representing standard input, output, and error. These are in the sys module and are called stdin, stdout, and stderr. There are also immutable copies of these in __stdin__, __stdout__, and __stderr__. This is for IDLE and other tools in which the standard files have been changed. You must import the sys module to use the special stdin, stdout, stderr I/O handles: import sys For finer control over input, use sys.stdin.read(). To implement the UNIX 'cat' program in Python, you could do something like this: import sys for line in sys.stdin: print(line, end="") Note that sys.stdin.read() will read from standard input till EOF (which is usually Ctrl+D). Parsing the command line. Command-line arguments passed to a Python program are stored in the sys.argv list. 
The first item in the list is the name of the Python program, which may or may not contain the full path depending on the manner of invocation. The sys.argv list is modifiable. Printing all passed arguments except for the program name itself: import sys for arg in sys.argv[1:]: print(arg) Parsing the passed arguments for minus options: import sys option_f = False option_p = False option_p_argument = "" i = 1 while i < len(sys.argv): if sys.argv[i] == "-f": option_f = True sys.argv.pop(i) elif sys.argv[i] == "-p": option_p = True sys.argv.pop(i) option_p_argument = sys.argv.pop(i) else: i += 1 Above, the arguments at which options are found are removed so that sys.argv can be looped for all remaining arguments. Parsing of command-line arguments is further supported by the library modules optparse (deprecated), argparse (since Python 2.7) and getopt (to make life easy for C programmers). A minimum parsing example for argparse: import argparse parser = argparse.ArgumentParser(description="Concatenates two strings") addarg = parser.add_argument addarg("s1", help="First string to concatenate") addarg("s2", help="Second string to concatenate") args = parser.parse_args() result = args.s1 + args.s2 print(result) Parse with argparse--specify the arg type as int: import argparse parser = argparse.ArgumentParser(description="Sum two ints") addarg = parser.add_argument addarg("i1", help="First int to add", type=int) addarg("i2", help="Second int to add", type=int) args = parser.parse_args() result = args.i1 + args.i2 print(result) Parse with argparse--add optional switch -m to yield multiplication instead of addition: import argparse parser = argparse.ArgumentParser(description="Sums or multiplies two ints.") addarg = parser.add_argument addarg("i1", help="First int", type=int) addarg("i2", help="Second int", type=int) addarg("-m", help="Multiplies rather than adds.", action="store_true") args = parser.parse_args() if args.m: result = args.i1 * args.i2 else: result = args.i1 + args.i2 
print(result) Parse with argparse--set an argument to consume one or more items: import argparse parser = argparse.ArgumentParser(description="Sums one or more ints.") addarg = parser.add_argument addarg("intlist", help="Ints", type=int, nargs="+") args = parser.parse_args() result = 0 for item in args.intlist: result += item print(result) Usage example: python ArgparseTest.py 1 3 5 Parse with argparse--as above but with a help epilog to be output after parameter descriptions upon -h: import argparse parser = argparse.ArgumentParser(description="Sums one or more ints.", epilog="Example: python ArgparseTest.py 1 3 5") addarg = parser.add_argument addarg("intlist", help="Ints", type=int, nargs="+") args = parser.parse_args() result = 0 for item in args.intlist: result += item print(result) Parse with argparse--make second integer argument optional via nargs: import argparse parser = argparse.ArgumentParser(description="Sums one or two integers.", epilog="Example: python ArgparseTest.py 3 4\n" "Example: python ArgparseTest.py 3") addarg = parser.add_argument addarg("i1", help="First int", type=int) addarg("i2", help="Second int, optional, defaulting to 1.", type=int, default=1, nargs="?") args = parser.parse_args() result = args.i1 + args.i2 print(result) Links: Output. The basic way to produce output is the print function: print('Hello, world') To print multiple things on the same line separated by spaces, use commas between them: print('Hello,', 'World') This will print out the following: Hello, World While neither string contained a space, a space was added by print because of the comma between the two arguments. 
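The space inserted between print's arguments can also be changed: in Python 3.x, print() accepts a sep keyword argument that replaces the default single-space separator. A short sketch:

```python
# Default: arguments are separated by a single space.
print('Hello,', 'World')          # Hello, World

# sep='' joins the arguments with nothing in between.
print('Hello,', 'World', sep='')  # Hello,World

# Any string may be used as the separator.
print(1, 2, 3, sep=' - ')         # 1 - 2 - 3
```

Like end, sep is Python-3-only; in Python 2.x the same effect requires string concatenation or formatting.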
Arbitrary data types can be printed: print(1, 2, 0xff, 0o777, 10+5j, -0.999, map, sys) This will output the following: 1 2 255 511 (10+5j) -0.999 <built-in function map> <module 'sys' (built-in)> Several print calls can write to the same output line: for i in range(10): print(i, end=" ") This will output the following: 0 1 2 3 4 5 6 7 8 9 To end the printed line with a newline, add a print call without any arguments. for i in range(10): print(i, end=" ") print() for i in range(10,20): print(i, end=" ") This will output the following: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 If the bare print call were not present, the above output would all appear on a single line: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 You can print to a file instead of to standard output: print('Hello, world', file=f) This will print to any object that implements write(), which includes file objects. Note on legacy Python 2: in Python 2, print is a statement rather than a function and there is no need to put brackets around its arguments. Instead of print(i, end=" "), one would write print i. Omitting newlines. In Python 3.x, you can output without a newline by passing end="" to the print function or by using the method write: import sys print("Hello", end="") sys.stdout.write("Hello") # Or sys.stderr to write to the standard error stream. In Python 2.x, to avoid adding spaces and newlines between objects' output with subsequent print statements, you can do one of the following: "Concatenation": Concatenate the string representations of each object, then print the whole thing at once. print(str(1)+str(2)+str(0xff)+str(0777)+str(10+5j)+str(-0.999)+str(map)+str(sys)) This will output the following: 12255511(10+5j)-0.999<built-in function map><module 'sys' (built-in)> "Write function": You can make a shorthand for "sys.stdout.write" and use that for output. 
import sys write = sys.stdout.write write('20') write('05\n') This will output the following: 2005 You may need sys.stdout.flush() to get that text on the screen quickly. Examples. Examples of output with "Python 3.x": Examples of output with "Python 2.x": File Output. Printing numbers from 1 to 10 to a file, one per line: file1 = open("TestFile.txt","w") for i in range(1,10+1): print(i, file=file1) file1.close() With "w", the file is opened for writing. With "file=file1", print sends its output to a file rather than standard output. Printing numbers from 1 to 10 to a file, separated with a dash: file1 = open("TestFile.txt", "w") for i in range(1, 10+1): if i > 1: file1.write("-") file1.write(str(i)) file1.close() Opening a file for appending rather than overwriting: file1 = open("TestFile.txt", "a") In Python 2.x, a redirect to a file is done like print >>file1, i. See also the ../Files/ chapter. Formatting. Formatting numbers and other values as strings using the string percent operator: v1 = "Int: %i" % 4 # 4 v2 = "Int zero padded: %03i" % 4 # 004 v3 = "Int space padded: %3i" % 4 # 4 v4 = "Hex: %x" % 31 # 1f v5 = "Hex 2: %X" % 31 # 1F - capitalized F v6 = "Oct: %o" % 8 # 10 v7 = "Float: %f" % 2.4 # 2.400000 v8 = "Float: %.2f" % 2.4 # 2.40 v9 = "Float in exp: %e" % 2.4 # 2.400000e+00 vA = "Float in exp: %E" % 2.4 # 2.400000E+00 vB = "List as string: %s" % [1, 2, 3] vC = "Left padded str: %10s" % "cat" vD = "Right padded str: %-10s" % "cat" vE = "Truncated str: %.2s" % "cat" vG = "Char: %c" % 65 # A vH = "Char: %c" % "A" # A Formatting numbers and other values as strings using the format() string method, since Python 2.6: v1 = "Arg 0: {0}".format(31) # 31 v2 = "Args 0 and 1: {0}, {1}".format(31, 65) v3 = "Args 0 and 1: {}, {}".format(31, 65) v4 = "Arg indexed: {0[0]}".format(["e1", "e2"]) v5 = "Arg named: {a}".format(a=31) v6 = "Hex: {0:x}".format(31) # 1f v7 = "Hex: {:x}".format(31) # 1f - arg 0 is implied v8 = "Char: {0:c}".format(65) # A v9 = "Hex: 
{:{h}}".format(31, h="x") # 1f - nested evaluation Formatting numbers and other values as strings using literal string interpolation, since Python 3.6: int1 = 31; int2 = 41; str1="aaa"; myhex = "x" v1 = f"Two ints: {int1} {int2}" v2 = f"Int plus 1: {int1+1}" # 32 - expression evaluation v3 = f"Str len: {len(str1)}" # 3 - expression evaluation v4 = f"Hex: {int1:x}" # 1f v5 = f"Hex: {int1:{myhex}}" # 1f - nested evaluation Links:
Python Programming/Strings. Overview. Strings in Python at a glance: str1 = "Hello" # A new string using double quotes str2 = 'Hello' # Single quotes do the same str3 = "Hello\tworld\n" # One with a tab and a newline str4 = str1 + " world" # Concatenation str5 = str1 + str(4) # Concatenation with a number str6 = str1[2] # 3rd character str6a = str1[-1] # Last character for char in str1: print(char) # For each character str7 = str1[1:] # Without the 1st character str8 = str1[:-1] # Without the last character str9 = str1[1:4] # Substring: 2nd to 4th character str10 = str1 * 3 # Repetition str11 = str1.lower() # Lowercase str12 = str1.upper() # Uppercase str13 = str1.rstrip() # Strip right (trailing) whitespace str14 = str1.replace('l','h') # Replacement list15 = str1.split('l') # Splitting if str1 == str2: print("Equ") # Equality test if "el" in str1: print("In") # Substring test length = len(str1) # Length pos1 = str1.find('llo') # Index of substring or -1 pos2 = str1.rfind('l') # Index of substring, from the right count = str1.count('l') # Number of occurrences of a substring print(str1, str2, str3, str4, str5, str6, str7, str8, str9, str10) print(str11, str12, str13, str14, list15) print(length, pos1, pos2, count) See also chapter ../Regular Expression/ for advanced pattern matching on strings in Python. String operations. Equality. Two strings are equal if they have "exactly" the same contents, meaning that they are both the same length and each character has a one-to-one positional correspondence. Many other languages compare strings by identity instead; that is, two strings are considered equal only if they occupy the same space in memory. Python uses the "is" operator to test the identity of strings and of any two objects in general. Examples: >>> a = 'hello'; b = 'hello' # Assign 'hello' to a and b. 
>>> a == b # check for equality
True
>>> a == 'hello'
True
>>> a == "hello" # (choice of delimiter is unimportant)
True
>>> a == 'hello ' # (extra space)
False
>>> a == 'Hello' # (wrong case)
False

Numerical. There are two quasi-numerical operations which can be done on strings -- addition and multiplication. String addition is just another name for concatenation, which is simply sticking the strings together. String multiplication is repetitive addition, or concatenation. So:

>>> c = 'a'
>>> c + 'b'
'ab'
>>> c * 5
'aaaaa'

Containment. There is a simple operator 'in' that returns True if the first operand is contained in the second. This also works on substrings:

>>> x = 'hello'
>>> y = 'ell'
>>> x in y
False
>>> y in x
True

Note that 'print(x in y)' would have also returned the same value.

Indexing and Slicing. Much like arrays in other languages, the individual characters in a string can be accessed by an integer representing its position in the string. The first character in string s would be s[0] and the nth character would be at s[n-1].

>>> s = "Xanadu"
>>> s[1]
'a'

Unlike arrays in other languages, Python also indexes the arrays backwards, using negative numbers. The last character has index -1, the second to last character has index -2, and so on.

>>> s[-4]
'n'

We can also use "slices" to access a substring of s. s[a:b] will give us a string starting with s[a] and ending with s[b-1].

>>> s[1:4]
'ana'

None of these are assignable.

>>> print(s)
>>> s[0] = 'J'
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object does not support item assignment
>>> s[1:3] = "up"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object does not support slice assignment
>>> print(s)

Outputs (assuming the errors were suppressed):

Xanadu
Xanadu

Another feature of slices is that if the beginning or end is left empty, it will default to the first or last index, depending on context:

>>> s[2:]
'nadu'
>>> s[:3]
'Xan'
>>> s[:]
'Xanadu'

You can also use negative numbers in slices:

>>> print(s[-2:])
'du'

To understand slices, it's easiest not to count the elements themselves. It is a bit like counting not on your fingers, but in the spaces between them. The list is indexed like this:

Element:      1   2   3   4
Index:      0   1   2   3   4
           -4  -3  -2  -1

So, when we ask for the [1:3] slice, that means we start at index 1, and end at index 2, and take everything in between them. If you are used to indexes in C or Java, this can be a bit disconcerting until you get used to it.

String constants. String constants can be found in the standard string module. An example is string.digits, which equals '0123456789'. Links:

String methods. There are a number of methods or built-in string functions: Only emphasized items will be covered.

is*. isalnum(), isalpha(), isdigit(), islower(), isupper(), isspace(), and istitle() fit into this category. The length of the string object being compared must be at least 1, or the is* methods will return False. In other words, a string object of len(string) == 0 is considered "empty", or False. Example:

>>> '2YK'.istitle()
False
>>> 'Y2K'.istitle()
True
>>> '2Y K'.istitle()
True

Title, Upper, Lower, Swapcase, Capitalize. Returns the string converted to title case, upper case, lower case, inverts case, or capitalizes, respectively. The title method capitalizes the first letter of each word in the string (and makes the rest lower case). Words are identified as substrings of alphabetic characters that are separated by non-alphabetic characters, such as digits, or whitespace. This can lead to some unexpected behavior. For example, the string "x1x" will be converted to "X1X" instead of "X1x".
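The "x1x" behavior just described is easy to verify interactively; a minimal sketch (the sample strings are arbitrary):

```python
# title() starts a new "word" at every non-alphabetic character,
# so digits and apostrophes restart capitalization.
print("hello world".title())  # Hello World
print("x1x".title())          # X1X, not X1x
print("they're".title())      # They'Re
```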
The swapcase method makes all uppercase letters lowercase and vice versa. The capitalize method is like title except that it considers the entire string to be a word. (i.e. it makes the first character upper case and the rest lower case) Example:

s = 'Hello, wOrLD'
print(s)              # 'Hello, wOrLD'
print(s.title())      # 'Hello, World'
print(s.swapcase())   # 'hELLO, WoRld'
print(s.upper())      # 'HELLO, WORLD'
print(s.lower())      # 'hello, world'
print(s.capitalize()) # 'Hello, world'

Keywords: to lower case, to upper case, lcase, ucase, downcase, upcase.

count. Returns the number of occurrences of the specified substring in the string. i.e.

>>> s = 'Hello, world'
>>> s.count('o') # print the number of 'o's in 'Hello, World' (2)
2

Hint: .count() is case-sensitive, so this example will only count the number of lowercase letter 'o's. For example, if you ran:

>>> s = 'HELLO, WORLD'
>>> s.count('o') # print the number of lowercase 'o's in 'HELLO, WORLD' (0)
0

strip, rstrip, lstrip. Returns a copy of the string with the leading (lstrip) and trailing (rstrip) whitespace removed. strip removes both.

>>> s = '\t Hello, world\n\t '
>>> print(s)
	 Hello, world

>>> print(s.strip())
Hello, world
>>> print(s.lstrip())
Hello, world
	  # ends here
>>> print(s.rstrip())
	 Hello, world

Note the leading and trailing tabs and newlines.

Strip methods can also be used to remove other types of characters.

import string
s = 'www.wikibooks.org'
print(s)
print(s.strip('w'))                    # Removes all w's from outside
print(s.strip(string.ascii_lowercase)) # Removes all lowercase letters from outside
print(s.strip(string.printable))       # Removes all printable characters

Outputs:
www.wikibooks.org
.wikibooks.org
.wikibooks.

(The last print produces an empty line, since every character of the string is printable.) Note that string.ascii_lowercase and string.printable require an import string statement; in Python 2, string.ascii_lowercase was named string.lowercase.

ljust, rjust, center. left, right or center justifies a string into a given field size (the rest is padded with spaces).

>>> s = 'foo'
>>> s
'foo'
>>> s.ljust(7)
'foo    '
>>> s.rjust(7)
'    foo'
>>> s.center(7)
'  foo  '

join.
Joins together the given sequence with the string as separator:

>>> seq = ['1', '2', '3', '4', '5']
>>> ' '.join(seq)
'1 2 3 4 5'
>>> '+'.join(seq)
'1+2+3+4+5'

map may be helpful here: (it converts numbers in seq into strings)

>>> seq = [1,2,3,4,5]
>>> ' '.join(map(str, seq))
'1 2 3 4 5'

now arbitrary objects may be in seq instead of just strings.

find, index, rfind, rindex. The find and index methods return the index of the first found occurrence of the given subsequence. If it is not found, find returns -1 but index raises a ValueError. rfind and rindex are the same as find and index except that they search through the string from right to left (i.e. they find the last occurrence).

>>> s = 'Hello, world'
>>> s.find('l')
2
>>> s[s.index('l'):]
'llo, world'
>>> s.rfind('l')
10
>>> s[:s.rindex('l')]
'Hello, wor'
>>> s[s.index('l'):s.rindex('l')]
'llo, wor'

Because Python strings accept negative subscripts, index is probably better used in situations like the one shown because using find instead would yield an unintended value.

replace. Replace works just like it sounds. It returns a copy of the string with all occurrences of the first parameter replaced with the second parameter.

>>> 'Hello, world'.replace('o', 'X')
'HellX, wXrld'

Or, using variable assignment:

string = 'Hello, world'
newString = string.replace('o', 'X')
print(string)
print(newString)

Outputs:
Hello, world
HellX, wXrld

Notice, the original variable (string) remains unchanged after the call to replace.

expandtabs. Replaces tabs with the appropriate number of spaces (default number of spaces per tab = 8; this can be changed by passing the tab size as an argument).

s = 'abcdefg\tabc\ta'
print(s)
print(len(s))
t = s.expandtabs()
print(t)
print(len(t))

Outputs:
abcdefg	abc	a
13
abcdefg abc     a
17

Notice how (although these both look the same) the second string (t) has a different length because each tab is represented by spaces not tab characters.
To use a tab size of 4 instead of 8:

v = s.expandtabs(4)
print(v)
print(len(v))

Outputs:
abcdefg abc a
13

Please note each tab is not always counted as eight spaces. Rather a tab "pushes" the count to the next multiple of eight. For example:

s = '\t\t'
print(s.expandtabs().replace(' ', '*'))
print(len(s.expandtabs()))

Output:
****************
16

s = 'abc\tabc\tabc'
print(s.expandtabs().replace(' ', '*'))
print(len(s.expandtabs()))

Outputs:
abc*****abc*****abc
19

split, splitlines. The split method returns a list of the words in the string. It can take a separator argument to use instead of whitespace.

>>> s = 'Hello, world'
>>> s.split()
['Hello,', 'world']
>>> s.split('l')
['He', '', 'o, wor', 'd']

Note that in neither case is the separator included in the split strings, but empty strings are allowed. The splitlines method breaks a multiline string into many single line strings. It is analogous to split('\n') (but accepts '\r' and '\r\n' as delimiters as well) except that if the string ends in a newline character, splitlines ignores that final character (see example).

>>> s = """
... One line
... Two lines
... Red lines
... Blue lines
... Green lines
... """
>>> s.split('\n')
['', 'One line', 'Two lines', 'Red lines', 'Blue lines', 'Green lines', '']
>>> s.splitlines()
['', 'One line', 'Two lines', 'Red lines', 'Blue lines', 'Green lines']

The method split also accepts multi-character string literals:

txt = 'May the force be with you'
spl = txt.split('the')
print(spl)

Unicode. In Python 3.x, all strings (the type str) contain Unicode per default. In Python 2.x, there is a dedicated unicode type in addition to the str type: u = u"Hello"; type(u) is unicode. The topic name in the internal help is UNICODE. Examples for Python 3.x: Examples for Python 2.x: Links:
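The Python 3.x examples referred to above are not shown in this text; as a sketch of what such an example typically demonstrates (the string "Grüße" is just a sample value):

```python
# In Python 3, str holds Unicode text natively.
s = "Grüße"                    # non-ASCII characters are fine in a str literal
print(len(s))                  # 5 -- length counts characters, not bytes
b = s.encode("utf-8")          # encoding to bytes is explicit
print(type(b))                 # <class 'bytes'>
print(b.decode("utf-8") == s)  # True -- round trip recovers the string
```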
Python Programming/Exceptions. Python 2 handles all errors with exceptions. An "exception" is a signal that an error or other unusual condition has occurred. There are a number of built-in exceptions, which indicate conditions like reading past the end of a file, or dividing by zero. You can also define your own exceptions.

Overview. Exceptions in Python at a glance:

import random
try:
    ri = random.randint(0, 2)
    if ri == 0:
        infinity = 1/0
    elif ri == 1:
        raise ValueError("Message")
        #raise ValueError, "Message" # Deprecated
    elif ri == 2:
        raise ValueError # Without message
except ZeroDivisionError:
    pass
except ValueError as valerr:
    print(valerr)
    raise # Raises the exception just caught
except: # Any other exception
    pass
finally: # Optional
    pass # Clean up

class CustomValueError(ValueError): pass # Custom exception
try:
    raise CustomValueError
    raise TypeError
except (ValueError, TypeError): # Value error catches custom, a derived class, as well
    pass # A tuple catches multiple exception classes

Raising exceptions. Whenever your program attempts to do something erroneous or meaningless, Python raises an exception to such conduct:

>>> 1 / 0
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: integer division or modulo by zero

This "traceback" indicates that the ZeroDivisionError exception is being raised. This is a built-in exception -- see below for a list of all the other ones.

Catching exceptions. In order to handle errors, you can set up "exception handling blocks" in your code. The keywords try and except are used to catch exceptions. When an error occurs within the try block, Python looks for a matching except block to handle it. If there is one, execution jumps there. If you execute this code:

try:
    print(1/0)
except ZeroDivisionError:
    print("You can't divide by zero!")

Then Python will print this:

You can't divide by zero!

If you don't specify an exception type on the except line, it will cheerfully catch all exceptions.
This is generally a bad idea in production code, since it means your program will blissfully ignore "unexpected" errors as well as ones which the except block is actually prepared to handle. Exceptions can propagate up the call stack:

def f(x):
    return g(x) + 1

def g(x):
    if x < 0:
        raise ValueError, "I can't cope with a negative number here."
    else:
        return 5

try:
    print(f(-6))
except ValueError:
    print("That value was invalid.")

In this code, the print statement calls the function f. That function calls the function g, which will raise an exception of type ValueError. Neither f nor g has a try/except block to handle ValueError. So the exception raised propagates out to the main code, where there "is" an exception-handling block waiting for it. This code prints:

That value was invalid.

Sometimes it is useful to find out exactly what went wrong, or to print the python error text yourself. For example:

try:
    the_file = open("the_parrot")
except IOError, (ErrorNumber, ErrorMessage):
    if ErrorNumber == 2: # file not found
        print("Sorry, 'the_parrot' has apparently joined the choir invisible.")
    else:
        print("Congratulations! you have managed to trip a #%d error" % ErrorNumber)
        print(ErrorMessage)

Which will print:

Sorry, 'the_parrot' has apparently joined the choir invisible.

Custom Exceptions. Code similar to that seen above can be used to create custom exceptions and pass information along with them. This can be very useful when trying to debug complicated projects. Here is how that code would look; first creating the custom exception class:

class CustomException(Exception):
    def __init__(self, value):
        self.parameter = value
    def __str__(self):
        return repr(self.parameter)

And then using that exception:

try:
    raise CustomException("My Useful Error Message")
except CustomException, (instance):
    print("Caught: " + instance.parameter)

Recovering and continuing with finally.
Exceptions could lead to a situation where, after raising an exception, the code block where the exception occurred might not be revisited. In some cases this might leave external resources used by the program in an unknown state. The finally clause allows programmers to close such resources in case of an exception. Between Python versions 2.4 and 2.5 the syntax of the finally clause changed: in 2.4 a try statement could have either except clauses or a finally clause but not both, so the two had to be nested, while 2.5 allows them to be combined in a single statement.

try:
    result = None
    try:
        result = x/y
    except ZeroDivisionError:
        print("division by zero!")
    print("result is ", result)
finally:
    print("executing finally clause")

try:
    result = x / y
except ZeroDivisionError:
    print("division by zero!")
else:
    print("result is", result)
finally:
    print("executing finally clause")

Built-in exception classes. All built-in Python exceptions

Exotic uses of exceptions. Exceptions are good for more than just error handling. If you have a complicated piece of code to choose which of several courses of action to take, it can be useful to use exceptions to jump out of the code as soon as the decision can be made. The Python-based mailing list software Mailman does this in deciding how a message should be handled. Using exceptions like this may seem like it's a sort of GOTO -- and indeed it is, but a limited one called an "escape continuation". Continuations are a powerful functional-programming tool and it can be useful to learn them. Just as a simple example of how exceptions make programming easier, say you want to add items to a list but you don't want to use "if" statements to initialize the list. We could replace this:

if hasattr(self, 'items'):
    self.items.extend(new_items)
else:
    self.items = list(new_items)

Using exceptions, we can emphasize the normal program flow—that usually we just extend the list—rather than emphasizing the unusual case:

try:
    self.items.extend(new_items)
except AttributeError:
    self.items = list(new_items)
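Much of this chapter uses Python 2 syntax (raise ValueError, "...", except CustomException, (instance)). As a sketch for comparison, the custom-exception example above rewritten in Python 3 syntax, which uses "as" instead of a comma:

```python
class CustomException(Exception):
    def __init__(self, value):
        self.parameter = value

    def __str__(self):
        return repr(self.parameter)

try:
    raise CustomException("My Useful Error Message")
except CustomException as instance:  # Python 3: 'as', not a comma
    print("Caught: " + instance.parameter)
```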
Waves/Geometrical optics. Introduction. As was shown previously, when a plane wave is impingent on an aperture which has dimensions much greater than the wavelength of the wave, diffraction effects are minimal and a segment of the plane wave passes through the aperture essentially unaltered. This plane wave segment can be thought of as a wave packet or ray consisting of a superposition of wave vectors very close in direction and magnitude to the central wave vector of the wave packet. In most cases the ray simply moves in the direction defined by the central wave vector, i.e., normal to the orientation of the wave fronts. However, this is not true when the medium through which the light propagates is optically anisotropic, i.e., light traveling in different directions moves at different phase speeds. An example of such a medium is a calcite crystal. In the anisotropic case, the orientation of the ray can be determined once the dispersion relation for the waves in question is known, by using the techniques developed in the previous section. If light moves through some apparatus in which all apertures are much greater in dimension than the wavelength of light, then we can use the above rule to follow rays of light through the apparatus. This is called the geometrical optics approximation. This approximation can be applied to any wave theory.
General Mechanics/Fundamental Principles of Dynamics. History of Dynamics. Aristotle. Aristotle expounded a view of dynamics which agrees closely with our everyday experience of the world. Objects only move when a force is exerted upon them. As soon as the force goes away, the object stops moving. The act of pushing a box across the floor illustrates this principle -- the box certainly doesn't move by itself! However, if we try using Aristotle's dynamics to predict motion we soon run into problems. It suggests that objects under a constant force move with a fixed velocity, but while gravity definitely feels like a constant force, it clearly doesn't make objects move with constant velocity. A thrown ball can even reverse direction, under the influence of gravity alone. Eventually, people started looking for a view of dynamics that actually worked. Newton found the answer, partially inspired by the heavens. Newton. In contrast to earthly behavior, the motions of celestial objects seem effortless. No obvious forces act to keep the planets in motion around the sun. In fact, it appears that celestial objects simply coast along at constant velocity unless something acts on them. This Newtonian view of dynamics — objects change their velocity rather than their position when a force is exerted on them — is expressed by Newton's second law: where formula_1 is the force exerted on a body, formula_2 is its mass, and formula_3 is its acceleration. Newton's first law, which states that an object remains at rest or in uniform motion unless a force acts on it, is actually a special case of Newton's second law which applies when formula_4. It is no wonder that the first successes of Newtonian mechanics were in the celestial realm, namely in the predictions of planetary orbits. It took Newton's genius to realize that the same principles which guided the planets also applied to the earthly realm as well.
In the Newtonian view, the tendency of objects to stop when we stop pushing on them is simply a consequence of frictional forces opposing the motion. Friction, which is so important on the earth, is negligible for planetary motions, which is why Newtonian dynamics is more obviously valid for celestial bodies. Note that the principle of relativity is closely related to Newtonian physics and is incompatible with pre-Newtonian views. After all, two reference frames moving relative to each other cannot be equivalent in the pre-Newtonian view, because objects with nothing pushing on them can only come to rest in one of the two reference frames! Einstein's relativity is often viewed as a repudiation of Newton, but this is far from the truth — Newtonian physics makes the theory of relativity possible through its invention of the principle of relativity. Compared with the differences between pre-Newtonian and Newtonian dynamics, the changes needed to go from Newtonian to Einsteinian physics constitute minor tinkering. Newton's first law:. "An object at rest tends to stay at rest and an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an external force". Newton stated this law assuming the body or system to be isolated. If we look at our daily life, this law seems to be contradicted: a bicycle, for example, slowly comes to a stop when we stop pedalling. This is because in our daily life two external forces oppose the motion, namely frictional force and air resistance (forces which are excluded in the case of an isolated system). If these two forces are absent, the law holds as stated; this can be observed in space. Newton's second law:. "The rate of change of linear momentum of an object is directly proportional to the external force on the object." The concept behind this is: "Forces arise due to interaction between bodies."
With the help of this law we can derive the formula formula_5. In this formula, formula_1 is the force exerted on the object, formula_2 is the mass of the object, and formula_3 is the acceleration of the object. The second law of motion is also the universal law, in the sense that the other two laws follow from it. Newton assumed the body to be isolated: according to the second law, if there is no interaction with bodies outside the system, then there is no force to change the object's state of rest or motion, which is the first law. If instead two bodies interact, then each exerts a force on the other (look from each side and apply the second law); that is the third law. Newton's third law:. "Every action has an equal and opposite reaction" This means if body formula_11 exerts a force on body formula_9, then body formula_9 exerts a force on body formula_11 that is equal in magnitude but opposite in direction to the force from formula_11 . formula_13
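The second and third laws can be illustrated numerically; a minimal sketch in Python (the mass and force values are arbitrary sample numbers):

```python
# Second law: F = m * a, so a = F / m.
mass = 2.0       # kg (sample value)
force = 10.0     # N  (sample value)
acceleration = force / mass
print(acceleration)  # 5.0 m/s^2

# Third law: the reaction is equal in magnitude and opposite in direction.
reaction = -force
print(reaction)      # -10.0 N
```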
General Mechanics/Work and Power. Work. When a force is exerted on an object, energy is transferred to the object. The amount of energy transferred is called the work done on the object. Mathematically, work done is defined as the dot (scalar) product of force and displacement; thus it is a scalar quantity. However, energy is only transferred if the object moves. Work can be thought of as the process of transforming energy from one form into another. The work W done is where the distance moved by the object is Δx and the force exerted on it is F. Notice that work can either be positive or negative. The work is positive if the object being acted upon moves in the same direction as the force, with negative work occurring if the object moves opposite to the force. This equation assumes that the force remains constant over the full displacement or distance (depending on the situation). If it is not, then it is necessary to break up the displacement into a number of smaller displacements, over each of which the force can be assumed to be constant. The total work is then the sum of the works associated with each small displacement. In the infinitesimal limit this becomes an integral. If more than one force acts on an object, the works due to the different forces each add or subtract energy, depending on whether they are positive or negative. The total work is the sum of these individual works. There are two special cases in which the work done on an object is related to other quantities. If F is the total force acting on the object, then by Newton's second law W = FΔx = mΔx·a. However, a = dv/dt where v is the velocity of the object, and Δx ≈ vΔt, where Δt is the time required by the object to move through distance Δx. The approximation becomes exact when Δx and Δt become very small. Putting all of this together results in We call the quantity mv²/2 the "kinetic energy", or K. It represents the amount of work stored as motion.
We can then say Thus, when F is the only force, the total work on the object equals the change in kinetic energy of the object. This transformation is known as the "Work-Energy theorem." The other special case occurs when the force depends only on position, but is not necessarily the total force acting on the object. In this case we can define a function and the work done by the force in moving from x1 to x2 is U(x1)-U(x2), no matter how quickly or slowly the object moved. If the force is like this it is called "conservative" and U is called the potential energy. Differentiating the definition gives The minus sign in these equations is purely conventional. If a force is conservative (that is, a force whose work does not depend on the path taken), we can write the work done by it as where is the change in the potential energy of the object associated with the force of interest. Energy. The sum of the "potential" (energy by virtue of position) and "kinetic" (energy by virtue of motion) energies is constant. We call this constant the total energy E: If all the forces involved are conservative we can equate this with the previous expression for work to get the following relationship between work, kinetic energy, and potential energy: Following this, we have a very important formula, called the "Conservation of Energy": This theorem states that the total amount of energy in a system is constant, and that energy can neither be created nor destroyed. Power. The power associated with a force is simply the amount of work done by the force divided by the time interval over which it is done. It is therefore the energy per unit time transferred to the object by the force of interest. From above we see that the power is where formula_12 is the velocity at which the object is moving. The total power is just the sum of the powers associated with each force. It equals the time rate of change of kinetic energy of the object:
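The Work-Energy theorem above can be checked numerically for a constant force; a minimal sketch (all numbers are arbitrary sample values):

```python
# For a constant force, W = F * dx should equal the change in
# kinetic energy, m*v**2/2 - m*v0**2/2.
m, F = 2.0, 6.0               # kg, N (sample values)
a = F / m                     # acceleration from Newton's second law
v0, t = 0.0, 4.0              # start from rest, accelerate for 4 s
v = v0 + a * t                # final velocity
dx = v0 * t + 0.5 * a * t**2  # distance covered
work = F * dx
delta_ke = 0.5 * m * v**2 - 0.5 * m * v0**2
print(work, delta_ke)         # 144.0 144.0 -- they agree
power = F * v                 # instantaneous power P = F*v at the end
print(power)                  # 72.0 W
```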
Guitar/Learning Songs. Now that you've got a few chords under your belt, you're ready to start learning some songs. Great! There are several ways to learn songs, and some are more accessible than others. General Tips. There are two basic forms that appear in thousands of songs. They are the twelve bar blues and the thirty-two bar ballad. Both forms are used extensively in all genres. The blues and rock 'n' roll genres both use the twelve-bar blues form and many songs by Chuck Berry, Eddie Cochran and Buddy Holly are twelve bar blues and therefore very easy to learn. If you are trying to learn a jazz standard then you will find that many of them are of the thirty-two bar form. Practicing and understanding these two basic forms is essential for the guitarist who wishes to learn songs. Practice the song slowly (especially if it's a fast song) until you can play it flawlessly. Then, when you are confident with the notes you are supposed to play, increase the speed until you can play along with the song. Using a drum machine or metronome when practicing is essential. An alternative method for improving timing is to play along with your favorite artists. Methods of Learning. Sheet Music. The best way is to find sheet music for the song you are trying to learn, like a tab book, available from any guitar shop. Tab books are good, because they are almost always accurate, and they not only show the notes you're supposed to play, but they give a good sense of how to play the notes. Generally they include both the rhythm and lead parts, even written on the same page if they are played at the same time. Tab books are expensive, though, and there's a learning curve associated with fluent tab reading, especially if you have no prior knowledge of music notation. Understanding music theory, even just enough to properly (and easily) read a tab book, is a challenge but not insurmountable. Being able to read music, whether it's tab or notation, will improve your playing. Online Tab.
A much cheaper and often quicker way to learn is to search for an online tab of the song you're looking for. Simply type "Artist Name Song Name tab" into your favorite search engine, and "voila!", you have dozens to choose from. The online tab community is thriving, and there are many popular sites where you can find tabs for most popular songs. Some sites even feature a MIDI of the song, to make learning even easier. There are several downsides to online tab, some of which are outlined in the Tablature section. The biggest problem is lack of accuracy. Always remember that online tabs are not made by professionals like tab books, and that somewhere down the line someone was sitting at home with a CD and figured the song out by trial and error. Thus, the more complicated the song, the less likely the tab you are reading is 100% accurate. But since most people don't play a song "exactly" as it sounds on the album (even the recording artists!), this isn't such a big deal. Another downside is that there is a huge amount of stealing in the community, and if you are looking for an obscure tab, you might only find one actual tab, with copies of it on every site you visit. Some sites allow for multiple versions, and some use voting or comments to give you a sense of how accurate the tab is. However, don't let voting alone determine which tab you read, because if the people who vote don't know how to play the song either, then they might vote a terrible tab really high. In general, you should read two or three tabs for a song, and then from that determine how you intend to play the song. Comments on a song can contain slight revisions or alternate fingerings for chords, so it is good to check those out. By Ear. Songs can also be learned "by ear", with no sheet music. Essentially you just listen to the song and try to figure it out, with nothing for reference. Knowledge of music theory is particularly helpful for this method.
It probably sounds a lot harder to learn this way than it is, but it is a really good way to practice whatever music knowledge you have. And it is especially rewarding being able to figure out a famous musician's piece and saying "I could have made that up!" First, you should always try to figure out the key (or scale) the song is in. Knowing the key essentially tells you two important things: what the root notes are of the chords they are playing, and the scale that is used for soloing. When you know the scale, you can also probably figure out which scale degree is supposed to be major or minor. To figure out the key, try playing random notes on the fretboard, and when one "works", play a major or minor pentatonic scale beginning with that note. Once you have figured out a few more notes, you will probably have a good idea of what scale is being used. If that doesn't work, try humming the chords being used, and then match those tones on the guitar. Be careful you don't accidentally start humming the lead vocals, because although that will help determine the key, the chords are likely different. Once you know what key the song is in, the rest generally follows pretty quickly. Some of the tricky bits can be one-note riffs, arpeggios, or specific voicings of the chords they are using. If you have no experience of keys and their relationship to writing songs, then figuring out songs by ear is more difficult. Essentially you need to just find the same notes or chords and write them down or remember them. Generally this involves a lot of trial and error, but working this way provides excellent ear training. Other Guitarists. This is perhaps the best way to learn. Playing with another guitarist gives you the opportunity to ask questions about chords and rhythms, and it gives you a chance to see and hear what the song is supposed to be like when it's performed live.
However, the downside is that often a guitarist learns to play a song "their way", and they don't care about how it's "really" supposed to be played. Thus, you might not be learning the song exactly, but rather a slightly different version. Concert Videos. Another place to learn is by watching concert videos, especially on DVDs where they allow you to pick camera angles. Often they will have a camera that never breaks away from the lead guitarist. By following along, you can learn exactly how a particular guitarist plays a particular song live. The downside of this is that not every artist (especially new ones) has a concert DVD. Also, the guitarist may be playing the song differently live than on the album, so depending on how accurate you intend to be with your learning and playing, watching a video may not be the best way. Chord Progressions. Songs are created using chords. Chords are derived from scales. The chords that are derived from one diatonic scale never change. If you learn the seven chords in the key of C major, then when you find a song in that key, you can quickly work out the chord progressions that make up the song. Chords in C major: C, Dm, Em, F, G, Am and Bdim. Note that the chords in the key of C major consist of 3 major chords, 3 minor chords and 1 diminished chord. This holds true for all major keys. Chord Theory. Songs in the key of C major will start with a C major chord and end with a C major chord. The tonic chord of C major is the chord that defines the key (the name tonic is derived from the word tonal). If you think of music as a journey then the tonic chord is the starting point and the return point. The notes in the scale of C major are named:

I is the Tonic
II is the Supertonic
III is the Mediant
IV is the Subdominant
V is the Dominant
VI is the Relative Minor
VII is the Leading Note
VIII is the Octave

Tonic - is the first note of the scale and it is this note that determines the tonality or key, hence the name Tonic.
Supertonic – the word "super" comes from the Latin verb "superare", which means "to be above". The second note of any scale is always above the tonic.
Mediant – the mediant refers to the fact that this note lies halfway between the tonic and the dominant.
Subdominant – the word "sub" means "to be below". This note is below the dominant.
Dominant – this note has this name because, together with the tonic, it sets the tonality or key. The tonic and dominant notes, more than any of the others, determine the tonality of a piece of music. The fifth note of the scale is therefore a dominant factor.
Relative Minor – so called because this is the tonic note of the corresponding natural minor scale. Every major scale has a corresponding natural minor scale that contains exactly the same notes. So the relative minor of the C major scale is A natural minor. It is also called the submediant because it lies three notes below the octave, just as the mediant lies three notes above the tonic.
Leading Note – whenever you play a scale and arrive at this note, you will find that it naturally wants to move up to the octave note. People have psychological expectations of music. The most important is for the "musical journey" to have a start and an end. If you were to play the C major scale and stop at the leading note, you would always have the sense that the scale is incomplete.
Octave – the same note as the tonic but an octave higher in sound, and the end of the musical journey that a scale takes us on.
All the chords in C major take the same names given to the degrees of the scale. You can refer to the dominant note or the dominant chord.

Common Progressions. The tonic, subdominant and dominant are called the tonal chords. The supertonic, mediant and relative minor are called the modal chords. The tonal chords define tonality (key) and the modal chords suggest modality.
If you play only the modal chords Am and Em from the key of C major, the listener will eventually interpret the music to be in the key of A minor (Aeolian mode). It must be noted that Am and Em have to be stated over a lengthy period of time. Analyzing chord progressions starts with the tonal chords:
Step One: Try the progression I-V (Tonic to Dominant)
Step Two: Try the progression I-IV (Tonic to Subdominant)
Step Three: Try the progression I-VI or I-III (Tonic to Relative Minor, or Tonic to Mediant)
Step Four: Try the progression I-II (Tonic to Supertonic)
If you know the song starts with a C major chord and none of the above works, then the song may contain chromatic chords. It is common practice to change the modal chords, which are minor, into their major counterparts. So D minor becomes D major and E minor becomes E major. The chromatic supertonic and the chromatic mediant are a common compositional device. Even though you have added chromatic chords, the listener will still interpret the key as C major. Try playing this progression: C - E major - Am - G. In the above chord progression you have played a chord that doesn't belong to the key of C major. The tonality of the piece is preserved by the following chords, which are diatonic to the key.

How To Continue Learning. A great way to continue learning, if you can already play, is to teach guitar to other people.
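The two relationships this chapter leans on — guessing the key from notes picked out by ear, and the fixed set of seven diatonic chords in any major key — can be sketched in a short program. This is only an illustrative sketch: the function names and the simple "count the matching notes" scoring heuristic are my own, not from the book.

```python
# Illustrative sketch: derive major scales, guess the key from notes
# found by ear, and build the seven diatonic chords of a major key.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def major_scale(root):
    i = NOTES.index(root)
    return [NOTES[(i + step) % 12] for step in MAJOR_STEPS]

def likely_keys(heard):
    """Rank major keys by how many heard notes fit their scale."""
    scores = {r: sum(n in major_scale(r) for n in heard) for r in NOTES}
    best = max(scores.values())
    return [r for r in NOTES if scores[r] == best]

def diatonic_chords(root):
    """Stack thirds on each scale degree and label the chord quality."""
    scale = [(NOTES.index(root) + s) % 12 for s in MAJOR_STEPS]
    chords = []
    for i in range(7):
        r, third, fifth = scale[i], scale[(i + 2) % 7], scale[(i + 4) % 7]
        if (fifth - r) % 12 != 7:        # no perfect fifth above the root
            quality = "diminished"
        elif (third - r) % 12 == 4:      # major third above the root
            quality = "major"
        else:
            quality = "minor"
        chords.append((NOTES[r], quality))
    return chords

print(likely_keys(["G", "B", "D", "E", "A"]))  # candidate keys: ['C', 'D', 'G']
print(diatonic_chords("C"))
# C major, D minor, E minor, F major, G major, A minor, B diminished
```

Note the output for C major: three major chords, three minor chords and one diminished chord, exactly the pattern stated above — and because the pattern is derived mechanically, it holds for every major key.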
Geometry/Triangle. A triangle is a type of polygon having three sides and three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points where the segments join can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. A triangle must have at least some area, so all three corner points of a triangle cannot lie on the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.

Certain types of triangles. Categorized by angle. The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90° in the triangle; then it is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.

Categorized by sides. If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle. If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°. If all three sides of a triangle are of equal length, then it is called an equilateral triangle, and all three of the interior angles must be 60°, making it equiangular.
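The two categorizations above can be combined into one small sketch that first enforces the triangle inequality and then classifies a triangle by its sides and by its largest angle (found via the law of cosines). The function name is illustrative, not standard:

```python
def classify_triangle(a, b, c):
    """Return (by-sides, by-angle) classification, or None if invalid."""
    a, b, c = sorted((a, b, c))
    if a <= 0 or a + b <= c:       # triangle inequality rules out degenerates
        return None
    if a == b == c:
        sides = "equilateral"
    elif a == b or b == c:
        sides = "isosceles"
    else:
        sides = "scalene"
    # The largest angle is opposite the longest side c (law of cosines):
    # cos(C) = (a^2 + b^2 - c^2) / (2ab); its sign decides the category.
    cos_c = (a * a + b * b - c * c) / (2 * a * b)
    if abs(cos_c) < 1e-9:
        angle = "right"
    elif cos_c > 0:
        angle = "acute"
    else:
        angle = "obtuse"
    return sides, angle

print(classify_triangle(3, 4, 5))  # ('scalene', 'right')
print(classify_triangle(2, 2, 3))  # ('isosceles', 'obtuse')
print(classify_triangle(1, 2, 3))  # None: fails the triangle inequality
```

The last call shows why the Triangle Inequality matters: side lengths 1, 2, 3 satisfy 1 + 2 = 3, so the three "corner" points would lie on one line and enclose no area.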
Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of regular polygon, and they are all similar, though not necessarily congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might still be similar or congruent. Further discussion of ../Congruent Triangles/ and ../Similar Triangles/ may be found in those corresponding sections.

Opposite corners and sides in triangles. If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side. Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner. The sides of a triangle, or their lengths, are typically labeled with lower-case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite the longest side, and vice versa. Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment is considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division both share the height as one of their sides.
The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see ../Right Triangles and Pythagorean Theorem/.

Area of Triangles. If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula:
Area = (base × height) / 2
Ways of calculating the area inside of a triangle are further discussed under ../Area/.

Centres. The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle. The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle. The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have their circumcentre inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle. The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle. Please note that the centres of an equilateral triangle all coincide at the same point.
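When the corners are given as coordinates, the area formula and the centroid described above have direct computational counterparts. The sketch below uses the shoelace formula for the area and the corner average for the centroid; the function names are illustrative:

```python
def area(p1, p2, p3):
    """Triangle area from corner coordinates (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

def centroid(p1, p2, p3):
    """The centroid is the average of the three corner points."""
    return (sum(p[0] for p in (p1, p2, p3)) / 3,
            sum(p[1] for p in (p1, p2, p3)) / 3)

# A right triangle with base 4 and height 3:
print(area((0, 0), (4, 0), (0, 3)))      # 6.0, matching (base × height) / 2
print(centroid((0, 0), (4, 0), (0, 3)))  # lies inside the triangle
```

For this 4-by-3 right triangle the shoelace result, 6.0, agrees with (base × height) / 2, and the centroid lands at the average of the three corners — always inside the triangle, as the text states.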
Using GNOME. Using GNOME is an unofficial user's guide for GNOME 3.x and later, a desktop environment that runs on the Linux operating system. Its core goal is to provide helpful information on how to use GNOME and its application stack with ease and proficiency. We will also go over the core concepts and the terminology that goes along with them. The screenshots in this guide were taken while running GNOME 44.2. This guide is currently in the process of being updated for GNOME 3.x. As a result, much of the content and many of the features you will read about in this guide no longer exist.
Using GNOME/Coverage of this guide. This guide covers: What this guide is not: Back to contents page
Using GNOME/Platforms. GNOME is written to run on the Linux operating system only. It used to use the X11 display technology, but now supports Wayland as well.
Using GNOME/File manager. The file manager in GNOME manages your files, folders and hardware. It lets you move, create, rename, copy and delete them. The default file manager is called Nautilus. There are other file managers available, such as Velocity, GMC (used in older GNOME versions), (from KDE) and (text-based file manager), but this book will focus on the default. Table of contents. Back to contents page
Using GNOME/File manager changes. Nautilus underwent several changes between versions 2.4 and 2.6; most notable is the change in the default browsing method. In 2.4, the file manager used a "browse" view similar to a Web browser's: all folders opened in the same window, and it could be used to view files as well. GNOME 2.6 uses a spatial interface by default, which, in essence, makes the window the folder. Each folder opens in a separate window, and this window remembers its size, position, and even location in the file listings. Some users of GNOME dislike the new interface; it is easy to switch back to the old one by using the command, or by using the GConf editor to force these changes in the file manager by default. To do this, open gconf-editor, open /apps/nautilus/preferences, and check "always_use_browser". A "Computer" icon has also been added to the desktop, which shows disk drives; network computers; and remote FTP, SSH, and WebDAV servers. The file chooser dialog was completely revamped, replacing the older, harder-to-use version with one resembling the Windows XP file dialog. The newer window shows the path to your location as a series of clickable buttons that will bring you back to that directory. One con of the new "open" dialog, however, is that it no longer allows the user to type in the filename. Back to contents page
Visual Language Interpreting/Tools of the Trade.

Visual Languages of North America. American Sign Language. Contrary to popular belief, American Sign Language is not international. For the most part, almost every country has its own signed language. As with spoken languages, these vary from country to country. They are not based on the spoken language in the country of origin, and, like spoken languages, they developed in antiquity. A signed language can also be used in contexts where normal speech cannot be. Native Americans were known to use a signed pidgin to facilitate communication among tribes who used different spoken languages. American Sign Language is the dominant sign language in the United States, Canada and parts of Mexico. American Sign Language is usually abbreviated ASL, though it has also been known as Ameslan. As with other sign languages, its grammar and syntax are separate and distinct from the spoken language(s) in its area of influence.
Etymologically, ASL's origins stem from a nineteenth-century blending with French Sign Language (LSF), which began when the Deaf French Sign Language instructor Laurent Clerc came over to the States. Other American regional and indigenous signed language systems also contributed, such as Martha's Vineyard Sign Language (MVSL), which was developed by the residents of the Massachusetts island. Since there is no written form of ASL, it is possible that there may be other undocumented influences on the language, but there is no way to tell for certain. ASL is a natural language, as proved to the satisfaction of the linguistic community by the research findings of William Stokoe. It is a manual language, meaning that information is expressed not with combinations of sounds but with combinations of handshapes, movements of the hands, arms and body, and facial expressions.

Manual Codes of English. Manual codes of English (MCE) are signing systems (not languages) that utilize the manual component of a signed (read: nonverbal) language to convey the grammatical and syntactical structure of spoken English. They do not share the grammatical and syntactical structure of American Sign Language. Historically, there have been several attempted MCE systems, viz:

Signed English (SE). Signed English is a simplified English-based code; SE added only fourteen grammatical markers. SE was developed in the mid-1970s by Harry Bornstein at Gallaudet College, and further explored in 1983 by Bornstein, Saulnier, & Hamilton. (See Gustason, G. (1990). Signing Exact English and Bornstein, H. (1990). Signed English. In H. Bornstein (ed.) Manual Communication: Implications for Education. Washington, D.C.: Gallaudet University Press.)

Seeing Essential English, or, formerly 'SEE1'. Intended to reinforce basic English morphemic structure, in SEE1: SEE1 was developed in 1966 by David Anthony at Gallaudet College. SEE1 is no longer in use today.

Signing Exact English, or, formerly 'SEE2'.
SEE2 is very similar to SEE1; however: SEE2 was developed in 1972 by Gerilee Gustason. SEE2 is currently the "signed English" that is used in American school systems. (This is the 'Signing Exact English' referred to below.)

Linguistics of Visual English (LOVE). The LOVE system was a chirography system based on Seeing Essential English modes; it used the Stokoe Notation System (tab-dez-sig) to codify sentence structure. Unfortunately, there is very little explanation of, and there are very few examples of, the LOVE system extant. LOVE was developed in 1972 by Dennis Wampler.

The Rochester Method. So called because it was developed in 1878 by Zenas Westervelt, a teacher at the Western New York Institute for Deaf-Mutes (later the Rochester School for the Deaf), in the Rochester Method every word is fingerspelled. Sometimes used in tactile signing situations, this method is still used by some Deaf adults.

Signing Exact English. Signing Exact English (SEE) is a system of signing that strives to be an exact representation of English. It is an artificial system that was devised in 1972. It takes much of its vocabulary of signs from American Sign Language (ASL). However, it often modifies the handshapes used in the ASL signs in order to incorporate the handshape used for the first letter of the English word that the SEE sign is meant to represent. SEE can be thought of as a code for visually representing spoken English. It is used most often with Deaf children in educational settings; the initial goal of SEE was to facilitate the learning of English. It often finds use in the home too, however, as it is often welcomed as an alternative to ASL by hearing parents of Deaf children because it does not require them to learn a new grammar or syntax. Therefore, it is easier to learn for people who have already internalized English. It is not often used by adult Deaf people except to communicate with hearing people who know some sign but who are not fluent users of ASL.
SEE is not a single coded sign system: there are SEE1, SEE2, L.O.V.E., MCE, and more. As with almost every aspect of the education of deaf children, the use of SEE is mired in controversy concerning its efficacy and utility. In a way, it is a slight variation of the oralist vs. manualist controversy, which has pitted those who have supported the use of sign language against those who believed in lipreading and speech therapy as the best way to educate deaf children. This debate has raged for centuries.

Cued Speech. Cued speech is a manual system which, when produced near the mouth while speaking, helps the Deaf disambiguate the phonemes in spoken language. Cued speech combines eight arbitrary handshapes and four locations to visually and phonetically approximate the sounds of English. While not a signed language, it is a visual means of representing spoken language segments. It was developed in 1966 by Dr. Robert Cornett, an engineer at Gallaudet College, as an educational and communicative tool for the Deaf.

Consecutive Interpreting. In its purest form, consecutive interpretation is a mode in which the interpreter begins their interpretation of a complete message after the speaker has stopped producing the source utterance. At the time that the interpretation is rendered, the interpreter is the only person in the communication environment who is producing a message. In practice, a consecutive interpretation may be rendered when the interpreter does not have a text in its entirety; that is, the person delivering the source utterance may have more to say, but the interpreter has enough information to deliver a message that could stand alone if need be. It is important to note that although the person who originated the message has ceased their delivery of new information, this speaker has not necessarily given up the floor and, once the interpretation has been delivered, the speaker may resume delivery of their message.
Though most people may be more familiar with simultaneous interpretation, where the interpreter renders their interpretation while still receiving the source utterance, consecutive interpretation has distinct advantages in certain interpreting situations, not the least of which is that consecutive interpretations render more accurate, equivalent[i], and complete target texts. In fact, the two modes, when performed successfully, employ the same cognitive processing skills, with the only difference being the amount of time that elapses between the delivery of the source utterance and the delivery of the interpretation. This being the case, mastery of techniques used in consecutive interpretation can enhance an interpreter's ability to work in the simultaneous mode.

The Interpreting Process. Before we continue, I would like to take a moment to describe the interpreting process in order to explain how consecutive interpretations produce more accurate and equivalent target texts. In order to interpret a text the interpreter must be able to receive and understand the incoming message and then express its meaning in the target language. In order to accomplish this task, the interpreter must go through an overlapping series of cognitive processing activities. These include: attending to the message, concentrating on the task at hand, remembering the message, comprehending the meaning of the message, analyzing the message for meaning, visualizing the message nonverbally, and finally reformulating the message in the target language[ii]. Seleskovitch (1978) compresses these tasks into three steps, noting that the second step includes the "immediate and deliberate discarding of the wording and retention of the mental representation of the message" (Seleskovitch, 8); interpreters often refer to this as "dropping form." By discarding the form (words, structure, etc.)
of the source text the interpreter is free to concentrate on extracting and analyzing the meaning of the text, and on conceiving strategies for reformulating the message into the target language. Seleskovitch, among others, points out that there is another practical reason for the interpreter to discard the form of the source text: there is only so much that a person can hold in their short-term memory. As the interpreter receives the source text the information passes initially through their short-term memory. If the interpreter does not do anything with this information it will soon disappear. Smith (1985) notes that, "Short term memory...has a very limited duration. We can remember...six or seven items only as long as we give all of our attention to them" (Smith, 38). If an interpreter attempts to retain the form of a source utterance their short-term memory will be quickly filled with individual lexical items, which may not even compose a full sentence. If the interpreter then attempts to find a corresponding lexical item in the target language for each of the source language forms in their short-term memory, all of their attention will be wasted on translating these six items rather than attending to the incoming message; as Smith points out, "as long as we pay attention to short-term memory we cannot attend to anything else" (Smith, 38). In a consecutively interpreted situation this would result in the interpreter stopping the speaker every six or seven words so that the interpreter could clear their short-term memory and prepare to receive new information. Clearly this is not a preferable manner in which to communicate, and, as Seleskovitch points out, it would require the interpreter to know every existing word in both languages. It is because of the limitations of short-term memory that interpreters are required to drop form and concentrate on meaning.
Both Seleskovitch and Smith propose that meaningful segments of great size can be placed into long-term memory and retrieved later. Of course a chunk of information must be understood in order to be meaningful.  To demonstrate this idea Seleskovitch uses the example of a person who has just seen a movie, after viewing the film the person will be able to relate the plot and many of the details of the film.  If the person continues to discuss the film with others the details will remain fresh in their mind for a longer period of time.  In this example the person is able to remember the film because they understood it, and are, “conversant with the various themes found in films...the movie-goer can easily and fully process the ‘information’ conveyed...and for this reason he remembers” (Seleskovitch, 1979, 32).  Smith adds, “it takes no longer to put a rich and relevant chunk of meaning into long-term memory than it does a useless letter or word” (Smith, 45), because of this the moviegoer will probably be able to relate the salient points of the film in a fraction of the time it took them to receive the information.  Since the information was understood, its salient points can be reformulated into another mode of communication.  For example, when the moviegoer discusses the plot of the film they do not recreate its form, nor do they take two hours to render their “interpretation.” Due to the greater ease of assimilating larger meaningful chunks of information it behooves the interpreter to focus their attention on these larger chunks.  A larger chunk of text will usually contain a greater amount of meaning.  It is this relationship that aids the interpreter’s understanding of the source text when working consecutively.  As shown above, once a chunk of information is understood it can be reformulated into another form.  
As Seleskovitch (1978) points out, "In consecutive interpretation the interpreter has the advantage of knowing the line of the argument before he interprets" (Seleskovitch, 28). Interpreters are not charged with merely understanding the message; they must also be able to remember it, in order to deliver their interpretation. Seleskovitch notes that dropping form aids the interpreter's memory because they are not concentrating on remembering the words, or even the structure, of the source text. Instead, the interpreter understands the message, connects it to long-term memory, and is then able to reformulate it in much the same way the moviegoer can relate the points of a film. Of course, the interpreter must provide a more equivalent target text than the moviegoer. To this end interpreters working consecutively will often make notes as they take in the source utterance. These notes help the interpreter retrieve the message from their long-term memory and consist of "symbols, arrows, and a key word here or there" (Seleskovitch, 1991, 7). These few notes are effective because interpreters do not produce their target texts based on the form used by the speaker but on what they understood of the meaning of the source text. The "key words" may consist of words that will remind the interpreter of the speaker's point, or of specific information "such as proper names, headings and certain numbers" (Seleskovitch, 1978, 36). Seleskovitch also points to the time afforded an interpreter working in the consecutive mode as an asset in reformulating the message in the target language. Because the interpreter does not need to split their attention between receiving the message and monitoring their output, as is required in simultaneous interpretation, they can devote more of their processing to analysis and reformulation of the text, thereby producing a more accurate and equivalent interpretation.
Situations for Consecutive Interpreting. Even though the interpreter's goal is always to produce the most accurate and equivalent target text possible, consecutive interpretation is not always possible. Situations where one speaker maintains the floor with little or no interaction with the audience, and situations where there is rapid turn-taking between a group of interlocutors, may require the interpreter to work simultaneously. While Seleskovitch notes that spoken language interpreters working at international conferences may sometimes interpret entire speeches consecutively, the consecutive mode often requires some type of pause so that the interpreter may render the message. That said, there are situations that lend themselves to consecutive interpretation; I would like to discuss three such situations, one general and two specific. In general, consecutive interpretation can be employed successfully in one-on-one interpreted interactions. One-on-one interactions often allow for more structured turn-taking behavior than large group situations. Interviews, parent-teacher meetings, and various types of individual consultations may be interpreted consecutively with minimal disruption to the flow of communication perceived by the participants. Specifically, there are two types of interpreted situations that, due to the consequences involved, require consecutive interpretation rather than simultaneous. These are legal and medical interpreted interactions. In these situations, where a person's life or freedom is at stake, accuracy and equivalence are of the utmost priority; as we have seen, consecutive interpretation provides greater accuracy and equivalence than simultaneous does. Palma (1995) points out that the density and complexity of witness testimony require the interpreter to work consecutively, and to be aware of how long a chunk they can manage effectively.
Palma notes that, especially during expert witness testimony, where the language used can be highly technical and is more likely to use complex sentence constructions, a segment of text that is short in duration may be extremely dense in terms of the content and complexity of its ideas. In this case the consecutive mode has the added advantage of allowing the interpreter to ask the speaker to pause so that the interpreter may deliver the message. The interpreter may also take advantage of the time in which they hold the floor to ask the speaker for clarification. Use of the consecutive mode is also helped by the fact that court officials (attorneys, judges, etc.) may be familiar with the norms of consecutive interpretation, and by the fact that turn-taking between the witness and the attorney often proceeds with only one of the two speaking at any one time. In the case of medical interpreting, accuracy and equivalence are also at a premium due to the possible consequences of a misdiagnosis. Like expert witness testimony, doctor-patient interactions may be filled with medical jargon or explanations of bodily systems that may be particularly dense for the interpreter. Again, turn-taking may be more structured in a one-on-one medical environment, especially if the patient is in full control of their faculties. As in the legal setting, the medical interpreter may take advantage of the structure of a doctor-patient interaction in order to request pauses and clarifications. Generally, the logistics of a consecutively interpreted interaction must be established before the communication takes place. In the case of a single speaker who will have little or no interaction with the audience, this means either the speaker will pause for the interpreter, or the interpreter, and hopefully the audience, knows that the interpretation will not be delivered until the speaker has finished.
Establishing the logistics with all the parties involved, before the interpreted interaction takes place, can help prevent the uneasiness that participants often feel while waiting for the interpreter to begin.

Consecutive in Relation to Simultaneous. As mentioned above, the primary difference between consecutive and simultaneous interpreting involves the time lapse between the delivery of the speaker's message and the beginning of the interpretation. While this is a significant difference, one that provides more challenges for the interpreter, at their roots the consecutive and simultaneous interpreting modes stem from the same set of cognitive processes. These processes are described by many interpreting theorists (Gish, 1986-1994; Colonomos, 1989; Isham, 1986), while Seleskovitch (1978) establishes the parallel between consecutive and simultaneous. According to Seleskovitch, an interpreter working in the simultaneous mode uses the same strategies, dropping form, analyzing the message for meaning, and developing a linguistically equivalent reformulation, as does the interpreter working consecutively. After all, the goal is the same for both interpreters: to deliver an accurate and equivalent target text. The difference is that in the simultaneous mode the interpreter continues to receive and process new information while rendering, and monitoring, the target for equivalence. Because interpreters working in the simultaneous mode are still interpreting meaning rather than form, they also allow for a lag between themselves and the speaker. That is, the interpreter waits until the speaker has begun to develop their point before beginning to interpret. By allowing for lag time, the interpreter ensures that they are interpreting meaning, not just individual lexical items, which Seleskovitch suggests would be an exercise in futility.
“Even memorizing a half dozen words would distract the interpreter, whose attention is already divided between listening to his own words, and those of the speaker...His memory does not store the words of the sentence delivered by the speaker, but only the meaning those words convey.” (Seleskovitch, 1978, 30-31)  Seleskovitch solidifies the correlation between the cognitive processes involved in each mode when she states, “simultaneous interpretation can be learned quite rapidly, assuming one has already learned the art of analysis in consecutive interpretation” (Seleskovitch, 30).  This view has been adopted at interpreter training programs at both California State University Northridge and Gallaudet University, both of which require classes teaching text analysis and consecutive interpreting skills prior to those dealing with simultaneous interpreting. Conclusion Rather than being two separate skills, mastery of consecutive interpretation is in fact a building block for successful simultaneous interpretations.  In fact, thanks to the time allowed for comprehension and analysis of the source text, consecutive interpretations offer greater accuracy and equivalence than do simultaneous interpretations.  There are situations that lend themselves to consecutive interpretations (one-on-one interactions), and others still which require use of the consecutive mode (legal, medical) due to the consequences of a possible misinterpretation.  [i]           For the purposes of this chapter, “accuracy” relates to the content of the text, while “equivalence” relates to the ability of the target text to convey the register, affect, and style of the source text.  An “accurate” interpretation will provide the target language audience with all of the information contained in the source text, while an equivalent interpretation will provide the content, and also have the same effect on the target language audience as it would on a source language audience.  
By these definitions, an interpretation may be accurate without being totally equivalent, while an equivalent interpretation assumes accuracy. [ii] List of cognitive processing skills taken from class notes in Risa Shaw’s Gallaudet University class “ITP 724, Cognitive Processing Skills; English” (2002)
Electronics/Crystals. The vast majority of crystals used in electronics are piezoelectric crystals. The piezoelectric effect (converting back and forth between electrical voltage and mechanical position) makes quartz crystals physically vibrate. The quartz crystal can be cut into the right shape and appropriately mounted so that it "rings" (vibrates) only at one precise frequency, useful for clock circuits. Some quartz crystals are used as SAW filters. PZT crystals are used in ultrasonic devices such as Medical ultrasonography. They use the piezoelectric effect to convert electrical energy to sound energy to send out a pulse, then convert the sound energy of the echo back to electrical energy. See also. Wikipedia: Crystal radio receiver
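A quartz crystal is commonly modeled as a series RLC "motional" branch, and the precise frequency at which the crystal "rings" is the series resonant frequency f = 1/(2π√(LC)). The sketch below computes this; the component values are illustrative assumptions, not data for any particular crystal:

```python
import math

def series_resonant_frequency(L, C):
    """Series resonant frequency of an LC branch: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative (assumed) motional parameters for a quartz crystal model
L_motional = 3e-3      # 3 mH motional inductance
C_motional = 2.7e-14   # 0.027 pF motional capacitance

f = series_resonant_frequency(L_motional, C_motional)
print(f"Resonant frequency: {f / 1e6:.3f} MHz")
```

Because L is large and C is tiny in this model, the resonance is both high in frequency and very sharp, which is what makes crystals useful for clock circuits.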
Geometry/Area. Area of Circles. The method for finding the area of a circle is Where formula_2 is the radius of the circle; a line drawn from any point on the circle to its center. Area of triangles. Three ways of calculating the area inside of a ../Triangle/ are mentioned here. First method. If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base, and the endpoints of the height are the corner point not on the base and a point on the base or line extending the base. Let B = the length of the side chosen as the base. Let <br>h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by: This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the formula_4 angle can be determined. Second method. , also known as Heron's Formula If the lengths of all three sides of a triangle are known, Heron's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths formula_5 : Then the triangle's area is given by: If the triangle is needle shaped, that is, if one of the sides is very much shorter than the other two, then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In other words, Heron's formula is numerically unstable. Another formula that is much more stable is: where formula_5 have been sorted so that formula_10 . See also Heron's Formula at MathWorld and How JAVA's Floating-Point Hurts Everyone Everywhere Third method. 
In a triangle with sides of length formula_5 and angles formula_12 opposite them, This formula is true because formula_14 in the formula formula_15 . It is useful because you don't need to find the height from an angle in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by formula_16 and you'll get it directly!) Area of rectangles. The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length formula_17 . An adjacent side is then the height, with a length formula_18 , because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by: Sometimes, the baselength may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes: Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent. Of course, the area of a square with sides having length formula_21 would be: Area of parallelograms. ../Parallelograms/ are described in their own chapter. The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is: The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base). Area of a rhombus. Remember that in a rhombus all sides are equal in length. where formula_28 represent the diagonals. Area of trapezoids. The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area. formula_29 Where formula_30 are the lengths of the two parallel bases. Area of kites. The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area. Where formula_32 are the diagonals of the kite. 
Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, formula_33 . The area of each triangle is thus Where formula_17 is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is For more details see w:Kite (geometry)#Properties. Areas of other quadrilaterals. The areas of other ../Quadrilaterals/ are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic. Areas defining angles. The area of a circular sector is a fraction of the area of the whole circle. When the circle has radius that is the square root of two, the circle has area 2 π, and the radian measure of the sector corresponds to the fraction of the total circular area. In calculus, another type of angle called hyperbolic is related to the exponential function. This type of angle also corresponds to the area of a sector of the hyperbola xy=1. Use of area measure provides a means to unify these angle types. See Unified Angles.
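The naive and numerically stable versions of Heron's formula discussed above can be compared in code. This is a sketch; the function names are our own:

```python
import math

def heron_naive(a, b, c):
    """Classic Heron's formula via the semiperimeter s."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_stable(a, b, c):
    """Numerically stable form; requires the sides sorted so that a >= b >= c."""
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt(
        (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    )

# A 3-4-5 right triangle has area (1/2) * 3 * 4 = 6
print(heron_naive(3, 4, 5))
print(heron_stable(3, 4, 5))
```

For well-shaped triangles both give the same answer; the difference only shows up for needle-shaped triangles, where the naive form loses precision in the subtractions s - a, s - b, s - c.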
Geometry/Right Triangles and Pythagorean Theorem. Right triangles. Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle.<br> Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases. The side opposite the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b. Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base ( b ) and the other as the height ( h ), the area of the right triangle is very easy to calculate using this formula: Area = (1/2) × b × h. This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle. Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see ../Triangle/), then the area of the triangle may be able to be determined from the sum of the two constituent right triangles. The Pythagorean theorem can also be generalized to non-right triangles by the law of cosines: c² = a² + b² − 2ab cos C, where C is the angle opposite side c. Pythagorean Theorem. For history regarding the Pythagorean Theorem, see Pythagorean theorem. 
The Pythagorean Theorem states that: Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation: formula_3 Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula. Sine, Cosine, and Tangent for Right Triangles. Sine, Cosine, and Tangent are all functions of an angle, which are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any <br>angle θ, sin θ, cos θ, and tan θ are each single determined values, and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator. There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of an angle, then such tables could be used in reverse to find (or estimate) the value of a corresponding angle. These three functions are related to right triangles in the following ways: In a right triangle, For any value of θ where cos θ ≠ 0, formula_4. <br> If one considers the diagram representing a right triangle with the two non-right angles θ1 and θ2, and the side lengths a, b, c as shown here: For the functions of angle θ1: formula_5 <br> Analogously, for the functions of angle θ2: formula_6 Table of sine, cosine, and tangent for angles θ from 0 to 90°. General rules for important angles: formula_7 formula_8 formula_9 formula_10
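The Pythagorean Theorem and the sine, cosine, and tangent ratios above can be checked numerically. A minimal sketch (the 3-4-5 triangle is our own example):

```python
import math

a, b = 3.0, 4.0            # the two legs
c = math.hypot(a, b)       # hypotenuse via the Pythagorean Theorem: sqrt(a^2 + b^2)

theta1 = math.atan2(a, b)  # the angle opposite side a (adjacent to side b)

# sin = opposite / hypotenuse, cos = adjacent / hypotenuse, tan = opposite / adjacent
assert math.isclose(math.sin(theta1), a / c)
assert math.isclose(math.cos(theta1), b / c)
assert math.isclose(math.tan(theta1), a / b)

print(c)  # 5.0
```

Reversing the table lookup described above corresponds to the inverse functions math.asin, math.acos, and math.atan.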
Using GNOME/What is GNOME. GNOME is a graphical environment designed for the GNU/Linux Operating System, but also works on other platforms (see Supported Platforms). It is designed to be usable, accessible, free (see Free Software) and internationalized. For more on GNOME see about GNOME. The word GNOME, which is pronounced "Guh-'nome" (as it's part of the GNU Project), is an acronym for GNU Network Object Model Environment, and is written in all capital letters. Contents. Back to contents page
Using GNOME/Differences. If you are new to GNOME but are used to using another desktop or an older version of GNOME, you may find GNOME's interface confusing at first. This chapter compares the terminology used between GNOME and other desktops, which will help ease the transition.
Using GNOME/File manager wastebasket. The Wastebasket is a place to store unwanted files. Emptying the wastebasket deletes all the files in it permanently, so make sure there are no important files in there by mistake. You also have the option to delete files instantly without moving them to the wastebasket first. To enable this, choose "Edit, Preferences" from the file manager window, then on the Behavior tab select "Include a Delete command that bypasses Wastebasket". Use this with care. It is like using a shredder to destroy a document. Back to contents page
General Mechanics/Partial Derivatives. We will now take a break from physics and discuss the topic of partial derivatives. Further information about this topic can be found at the Partial Differential Section in the Calculus book. Partial Derivatives. In one dimension, the slope of a function, "f"("x"), is described by a single number, "df/dx". In higher dimensions, the slope depends on the direction. For example, if "f"="x"+2"y", moving one unit "x"-ward increases "f" by 1, so the slope in the "x" direction is 1, but moving one unit "y"-ward increases "f" by 2, so the slope in the "y" direction is 2. It turns out that we can describe the slope in "n" dimensions with just "n" numbers, the "partial derivatives" of "f". To calculate them, we differentiate with respect to one coordinate while holding all the others constant. They are written using a ∂ rather than "d". E.g. Notice this is almost the same as the definition of the ordinary derivative. If we move a small distance in each direction, we can combine three equations like the one above to get The change in "f" after a small displacement is the dot product of the displacement and a special vector This vector is called the gradient of "f". It points up the direction of steepest slope. We will be using this vector quite frequently. Partial Derivatives #2. Another way to approach differentiation of multiple-variable functions can be found in Feynman Lectures on Physics vol. 2. It's like this: The differentiation operator is defined like this: formula_4 in the limit of formula_5. Adding and subtracting some terms, we get formula_6 and this can also be written as Alternate Notations. For simplicity, we will often use various standard abbreviations, so we can write most of the formulae on one line. This can make it easier to see the important details. We can abbreviate partial differentials with a subscript, e.g., or Mostly, to make the formulae even more compact, we will put the subscript on the function itself. More Info. 
Refer to the Partial Differential Section in the Calculus book for more information.
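The definition above (differentiate with respect to one coordinate while holding all the others constant) can be sketched numerically. The example function f = x + 2y and the step size below are our own illustrative choices:

```python
def partial(f, point, i, h=1e-6):
    """Central-difference approximation to the partial derivative of f
    with respect to the i-th coordinate at the given point."""
    p_plus = list(point)
    p_plus[i] += h
    p_minus = list(point)
    p_minus[i] -= h
    return (f(*p_plus) - f(*p_minus)) / (2 * h)

def gradient(f, point, h=1e-6):
    """The gradient: one partial derivative per coordinate."""
    return [partial(f, point, i, h) for i in range(len(point))]

f = lambda x, y: x + 2 * y
print(gradient(f, (1.0, 1.0)))  # approximately [1.0, 2.0], matching the text
```

As the text says, moving one unit x-ward increases f by 1 and one unit y-ward increases f by 2, so the gradient of this f is (1, 2) everywhere.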
General Mechanics/Motion in Two and Three Dimensions. Motion in 2 and 3 Directions. Previously, we discussed Newtonian dynamics in one dimension. Now that we are familiar with both vectors and partial differentiation, we can extend that discussion to two or three dimensions. Work becomes a dot product and likewise power If the force is at right angles to the direction of motion, no work will be done. In one dimension, we said a force was conservative if it was a function of position alone, or equivalently, the negative slope of a potential energy. The second definition extends to In two or more dimensions, these are not equivalent statements. To see this, consider Since it doesn't matter which order derivatives are taken in, the left hand side of this equation must be zero for any force which can be written as a gradient, but for an arbitrary force, depending only on position, such as F=("y", -"x", 0), the left hand side isn't zero. Conservative forces are useful because the total work done by them depends only on the difference in potential energy at the endpoints, not on the path taken, from which the conservation of energy immediately follows. If this is the case, the work done by an infinitesimal displacement "d"x must be Comparing this with the first equation above, we see that if we have a potential energy then we must have Any such F is a conservative force. Circular Motion. An important example of motion in two dimensions is circular motion. Consider a mass, "m", moving in a circle, radius "r". The "angular velocity", ω is the rate of change of angle with time. In time Δ"t" the mass moves through an angle Δθ= ωΔ"t". The distance the mass moves is then "r" sin Δθ, but this is approximately "r"Δθ for small angles. Thus, the distance moved in a small time Δ"t" is "r"ωΔ"t", and divided by Δ"t" gives us the speed, "v". This is the "speed" not the "velocity" because it is not a vector. 
The velocity is a vector, with magnitude ω"r" which points tangentially to the circle. The magnitude of the velocity is constant but its direction changes so the mass is being accelerated. By a similar argument to that above it can be shown that the magnitude of the acceleration is and that it is pointed inwards, along the radius vector. This is called "centripetal" acceleration. By eliminating "v" or ω from these two equations we can write
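The relations above, v = ωr for the speed and ω²r for the magnitude of the centripetal acceleration, can be checked numerically by differentiating the position of a mass moving on a circle. The radius, angular velocity, and step size below are illustrative assumptions:

```python
import math

r = 2.0       # radius (m), assumed for illustration
omega = 3.0   # angular velocity (rad/s), assumed for illustration

def position(t):
    """Position of the mass on the circle at time t."""
    return (r * math.cos(omega * t), r * math.sin(omega * t))

# Speed from a central finite difference of the position
h = 1e-6
(x1, y1), (x2, y2) = position(0.5 - h), position(0.5 + h)
v = math.hypot(x2 - x1, y2 - y1) / (2 * h)

print(v)             # close to omega * r = 6.0
print(omega**2 * r)  # centripetal acceleration magnitude: 18.0
```

The numerically estimated speed agrees with ωr, and the same finite-difference trick applied twice would recover the inward-pointing acceleration of magnitude ω²r.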
Guitar/Slide Guitar. Introduction. A slide is a metal/glass/ceramic tube which fits over a finger (most commonly the ring finger or little finger, but any will work). If you wish to experiment with slide guitar, but do not have a slide, objects ranging from lighters and glass bottles to sections of metal pipe and batteries can work just as well, and in some cases provide entertainment and stage presence to a performance. "Do not press the string down." The slide rests on the string, not enough to give fret buzz, but enough to stop the string buzzing against the slide. Some players will lightly deaden the string behind the slide with a trailing finger to stop any unwanted vibrations. Practice getting a crisp note without sliding first. Because the slide rests on the strings, the slide playing a single note should be directly above the fret, not behind it as with the fingers. Usually the slide guitarist keeps the slide moving backwards and forwards slightly to create a vibrato effect. A common technique found in slide guitar is playing fingerstyle as opposed to the use of a pick or plectrum. The benefits of fingerstyle playing includes the ability to more easily pick the desired strings, while using the other fingers to dampen the other strings from undesired vibration. Raising the action of the guitar is also recommended. The normal low action, which is ideal for playing lead in standard tuning, is counter-productive when playing slide because of string buzz and lack of a clear sounding note. For this reason many guitarists have a second guitar where they raise the action to such a height to make it almost unplayable using normal technique. This high action guitar is permanently kept in an "open tuning" and is used exclusively for slide playing. Note that raising (or lowering) the action means that the intonation of the guitar has to be re-set. 
This can simply be done with a guitar tuner and just involves turning the string adjuster until the open string and its octave at the twelfth fret (fretted and harmonic) produce exactly the same note. Basically, the needle or display of an electronic guitar tuner should settle exactly dead center regardless of whether you are playing an open high E string or fretting its octave at the twelfth fret. A guitar that is correctly set up will show this on all strings. Setting the action of electric guitars is very easy due to the string adjusters; however, acoustic guitars have their intonation set at the factory and don't have string adjusters. Adjusting the action of an acoustic should be left to a guitar shop or luthier who specializes in repair and maintenance. Though slide guitar is often played in open chord tunings, Open G and Open D being the most common, playing slide in standard tuning is also possible and can add a new dimension to your playing. Slide guitar has always provided a fascinating approach to playing the guitar, and the sound of the slide has found a home in genres such as rock and country. History. One of the earliest mentions of slide guitar is in W.C. Handy's autobiography "Father Of The Blues": "As he played, he pressed a knife on the strings of the guitar in a manner popularized by the Hawaiian guitarists who used steel bars" This is also one of the earliest references to the blues. As you can tell from the quote above, the use of the slide is no late-comer to the blues genre, and there is a large body of work from the 1920s to the present day. No guitarist can confuse the slide playing of Duane Allman with that of Robert Johnson. Each period informs of itself the dictates of taste and style. 1930s. Robert Johnson is cited as the first great slide guitarist. Other famous blues performers had come before him; Blind "Lemon" Jefferson was a major entertainer during the 1920s, but Robert Johnson is considered to be the first major exponent of the slide. 
During his life-time he only recorded a handful of tracks and though known locally for being a fine entertainer; the world-wide fame that is associated with his name now is more down to later blues fans and guitarists who have sought the roots of the blues.
Lucid Dreaming/Reality Checks/Powers. Presentation. With the powers reality check, you try to use "magical" powers (i.e. flying, unlocking doors without touching them, walking through objects, etc.) and see if they work; if they do, then chances are you're dreaming. As long as you're dreaming and you expect your "magical powers" to work, then they will, and you'll hopefully realize that you must be dreaming.
Lucid Dreaming/Reality Checks/Memory. Presentation. With the memory reality check, you try to remember what just happened in the past few minutes, and what happened a little bit ago. In dreams, you'll often skip from one scene to the next with little or no transition, or you might remember something that never happened. If you try to remember what just happened as a reality check, then you might realize that you were just in a completely different place, or you might not be able to remember anything at all from more than a couple minutes ago.
Using GNOME/Menus. A menu is a list of actions or commands. There may be menus within menus (submenus). There are three main types of menu used in GNOME Back to contents page
Using GNOME/Menu bar. The Menu bar in GNOME is an item on the panel. It is split into two sections: Applications and Actions. Applications is a menu of available installed applications, organized by category. Actions is a list of actions available for the system. The actions are :
Using GNOME/Browse folder. Browse Folder is a different way of exploring files. In this mode For users of GNOME prior to 2.6, this mode will be familiar to you. To use browse folder, right click a folder, then select "Browse Folder". Tip : To use Browse Folder by default, set the following configuration key to true : /apps/nautilus/preferences/always_use_browser
Using GNOME/Configuration Editor. The Configuration Editor, technically known as gconf-editor, is a special tool for changing hidden settings within GNOME. These settings are usually hidden because non-power users will not miss them, but power users may appreciate them. The Configuration Editor is easy to get to grips with, yet learning all the keys is hard. There are two ways to launch the Configuration Editor. The configuration editor screen The screen is split into three panes: the key tree (left), the key list (top right), and key info (bottom right). Key Tree. There are four main key categories in the key tree. In each category there is a collection of folders which can contain subfolders and keys. Keys each have a name, value and description. Make sure you try all the keys to see what they do. This guide has references to popular keys. Look out for them in Tip boxes!
Using GNOME/Terminology. This guide uses various technical terms. This chapter explains ones used. Back to contents page
Using GNOME/Applets. An applet is a small program that sits and runs on the panel. This chapter is a guide to all the applets available for GNOME.
Using GNOME/Application windows. Applications in GNOME appear in Windows. Windows are usually box-shaped and can be manipulated. Things that can appear as Windows include Window layout. By default, the following layout is used
Python Programming/Data Types. Data types determine whether an object can do something, or whether it just would not make sense. Other programming languages often determine whether an operation makes sense for an object by making sure the object can never be stored somewhere where the operation will be performed on the object (this type system is called static typing). Python does not do that. Instead it stores the type of an object with the object, and checks when the operation is performed whether that operation makes sense for that object (this is called dynamic typing). Built-in Data types. Python's built-in (or standard) data types can be grouped into several classes. Sticking to the hierarchy scheme used in the official Python documentation these are numeric types, sequences, sets and mappings (and a few more not discussed further here). Some of the types are only available in certain versions of the language as noted below. Numeric types: Sequences: Sets: Mappings: Some others, such as type and callables Mutable vs Immutable Objects. In general, data types in Python can be distinguished based on whether objects of the type are mutable or immutable. The content of objects of immutable types cannot be changed after they are created. Only mutable objects support methods that change the object in place, such as reassignment of a sequence slice, which will work for lists, but raise an error for tuples and strings. It is important to understand that variables in Python are really just references to objects in memory. If you assign an object to a variable as below, a = 1 s = 'abc' l = ['a string', 456, ('a', 'tuple', 'inside', 'a', 'list')] all you really do is make this variable (a, s, or l) point to the object (1, 'abc', ['a string', 456, ('a', 'tuple', 'inside', 'a', 'list')]), which is kept somewhere in memory, as a convenient way of accessing it. 
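The claim that variables are just references to objects can be observed directly with the built-in id() function, which returns a unique identifier for an object:

```python
a = [1, 2, 3]
b = a                  # b now refers to the same list object as a
print(id(a) == id(b))  # True: one object, two names

b.append(4)            # mutate the object through b...
print(a)               # ...and a sees the change: [1, 2, 3, 4]

b = [1, 2, 3, 4]       # rebinding b to a new object leaves a untouched
print(id(a) == id(b))  # False: two distinct (if equal) objects
```

Equality (==) compares contents, while id() (and the is operator) compares identity; the last two lines show two equal lists that are nevertheless different objects.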
If you reassign a variable as below a = 7 s = 'xyz' l = ['a simpler list', 99, 10] you make the variable point to a different object (newly created ones in our examples). As stated above, only mutable objects can be changed in place (l[0] = 1 is ok in our example, but s[0] = 'a' raises an error). This becomes tricky, when an operation is not explicitly asking for a change to happen in place, as is the case for the += (increment) operator, for example. When used on an immutable object (as in a += 1 or in s += 'qwertz'), Python will silently create a new object and make the variable point to it. However, when used on a mutable object (as in l += [1,2,3]), the object pointed to by the variable will be changed in place. While in most situations, you do not have to know about this different behavior, it is of relevance when several variables are pointing to the same object. In our example, assume you set p = s and m = l, then s += 'etc' and l += [9,8,7]. This will change s and leave p unaffected, but will change both m and l since both point to the same list object. Python's built-in id() function, which returns a unique object identifier for a given variable name, can be used to trace what is happening under the hood. Typically, this behavior of Python causes confusion in functions. As an illustration, consider this code: def append_to_sequence (myseq): myseq += (9,9,9) return myseq tuple1 = (1,2,3) # tuples are immutable list1 = [1,2,3] # lists are mutable tuple2 = append_to_sequence(tuple1) list2 = append_to_sequence(list1) print('tuple1 = ', tuple1) # outputs (1, 2, 3) print('tuple2 = ', tuple2) # outputs (1, 2, 3, 9, 9, 9) print('list1 = ', list1) # outputs [1, 2, 3, 9, 9, 9] print('list2 = ', list2) # outputs [1, 2, 3, 9, 9, 9] This will give the above indicated, and usually unintended, output. 
myseq is a local variable of the append_to_sequence function, but when this function gets called, myseq will nevertheless point to the same object as the variable that we pass in (tuple1 or list1 in our example). If that object is immutable (like a tuple), there is no problem. The += operator will cause the creation of a new tuple, and myseq will be set to point to it. However, if we pass in a reference to a mutable object, that object will be manipulated in place (so myseq and list1, in our case, end up pointing to the same list object). Links: Creating Objects of Defined Types. Literal integers can be entered in three ways: Floating point numbers can be entered directly. Long integers are entered either directly (1234567891011121314151617181920 is a long integer) or by appending an L (0L is a long integer). Computations involving short integers that overflow are automatically turned into long integers. Complex numbers are entered by adding a real number and an imaginary one, which is entered by appending a j (i.e. 10+5j is a complex number. So is 10j). Note that j by itself does not constitute a number. If this is desired, use 1j. Strings can be either single or triple quoted strings. The difference is in the starting and ending delimiters, and in that single quoted strings cannot span more than one line. Single quoted strings are entered by entering either a single quote (') or a double quote (") followed by its match. So therefore 'foo' works, and "moo" works as well, but 'bar" does not work, and "baz' does not work either. 'quux" is right out. Triple quoted strings are like single quoted strings, but can span more than one line. Their starting and ending delimiters must also match. They are entered with three consecutive single or double quotes, so '''foo''' works, and """moo""" works as well, but '''bar""" does not work, and """baz''' does not work either. '''quux""" is right out. 
Tuples are entered in parentheses, with commas between the entries: Also, the parenthesis can be left out when it's not ambiguous to do so: 10, 'whose fleece was as white as snow' Note that one-element tuples can be entered by surrounding the entry with parentheses and adding a comma like so: Lists are similar, but with brackets: ['abc', 1,2,3] Dicts are created by surrounding with curly braces a list of key/value pairs separated from each other by a colon and from the other entries with commas: Any of these composite types can contain any other, to any depth: Null object. The Python analogue of null pointer known from other programming languages is "None". "None" is not a null pointer or a null reference but an actual object of which there is only one instance. One of the uses of "None" is in default argument values of functions, for which see ../Functions#Default_Argument_Values. Comparisons to "None" are usually made using "is" rather than ==. Testing for None and assignment: if item is None: another = None if not item is None: if item is not None: # Also possible Using None in a default argument value: def log(message, type = None): PEP8 states that "Comparisons to singletons like None should always be done with is or is not, never the equality operators." Therefore, "if item == None:" is inadvisable. A class can redefine the equality operator (==) such that instances of it will equal None. You can verify that None is an object by dir(None) or id(None). See also ../Operators/#Identity chapter. Links: Type conversion. 
Type conversion in Python by example:

v1 = int(2.7)         # 2
v2 = int(-3.9)        # -3
v3 = int("2")         # 2
v4 = int("11", 16)    # 17, base 16
v5 = long(2)          # Python 2.x only, not Python 3.x
v6 = float(2)         # 2.0
v7 = float("2.7")     # 2.7
v8 = float("2.7E-2")  # 0.027
v9 = float(False)     # 0.0
vA = float(True)      # 1.0
vB = str(4.5)         # "4.5"
vC = str([1, 3, 5])   # "[1, 3, 5]"
vD = bool(0)          # False; bool fn since Python 2.2.1
vE = bool(3)          # True
vF = bool([])         # False - empty list
vG = bool([False])    # True - non-empty list
vH = bool({})         # False - empty dict; same for empty tuple
vI = bool("")         # False - empty string
vJ = bool(" ")        # True - non-empty string
vK = bool(None)       # False
vL = bool(len)        # True
vM = set([1, 2])
vN = set((1, 2))      # Converts any sequence, not just a list
vQ = list(vM)
vR = list({1: "a", 2: "b"})  # dict -> list of keys
vS = tuple(vQ)
vT = list("abc")      # ['a', 'b', 'c']
print(v1, v2, v3, type(v1), type(v2), type(v3))

Implicit type conversion:

int1 = 4
float1 = int1 + 2.1          # 4 converted to float
str1 = "My int:" + str(int1)
int2 = 4 + True              # 5: bool is implicitly converted to int
float2 = 4.5 + True          # 5.5: True is converted to 1, which is converted to 1.0

Keywords: type casting. Links:
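One detail worth highlighting from the examples above: int() truncates toward zero, which differs from flooring for negative numbers. A small sketch (math.floor is used here for contrast; it is not part of the original examples):

```python
import math

print(int(-3.9))         # -3: int() truncates toward zero
print(math.floor(-3.9))  # -4: floor rounds toward negative infinity
print(int("11", 16))     # 17: string parsed in base 16
```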
Python Programming/Scoping. Variables. Variables in Python are automatically declared by assignment. Variables are always references to objects, and are never typed. Variables exist only in the current scope or global scope. When they go out of scope, the variables are destroyed, but the objects to which they refer are not (unless the number of references to the object drops to zero). Scope is delineated by function and class blocks. Both functions and their scopes can be nested. So:

def foo():
    def bar():
        x = 5         # x is now in scope
        return x + y  # y is defined in the enclosing scope later
    y = 10
    return bar()      # now that y is defined, bar's scope includes y

Now when this code is tested,

>>> foo()
15
>>> bar()
Traceback (most recent call last):
  File "<pyshell#26>", line 1, in -toplevel-
    bar()
NameError: name 'bar' is not defined

The name 'bar' is not found because a higher scope does not have access to the names lower in the hierarchy. It is a common pitfall to fail to assign an object to a variable before use. In its most common form:

>>> for x in range(10):
        y.append(x)  # append is an attribute of lists

Traceback (most recent call last):
  File "<pyshell#46>", line 2, in -toplevel-
    y.append(x)
NameError: name 'y' is not defined

Here, to correct this problem, one must add y = [] before the for loop executes. A loop does not create its own scope:

for x in [1, 2, 3]:
    inner = x
print(inner)  # 3 rather than an error

Keyword global. Global variables of a Python module are read-accessible from functions in that module. In fact, if they are mutable, they can be also modified via method call. However, they cannot be modified by a plain assignment unless they are declared "global" in the function. An example to clarify:

count1 = 1
count2 = 1
list1 = []
list2 = []

def test1():
    print(count1)  # Read access is unproblematic, referring to the global

def test2():
    try:
        print(count1)  # This try block is problematic because...
        count1 += 1    # ... count1 += 1 causes count1 to be local, but the local version is undefined.
    except UnboundLocalError as error:
        print("Error caught:", error)

def test3():
    list1 = [2]  # No outside effect; this defines list1 to be a local variable

def test4():
    global count2, list2
    print(count1)    # Read access is unproblematic, referring to the global
    count2 += 1      # We can modify count2 via assignment
    list1.append(1)  # Impacts the global list1 even without a global declaration, since it's a method call
    list2 = [2]      # We can modify list2 via assignment

test1()
test2()
test3()
test4()
print("count1:", count1)  # 1
print("count2:", count2)  # 2
print("list1:", list1)    # [1]
print("list2:", list2)    # [2]

Links: Keyword nonlocal. Keyword nonlocal, available since Python 3.0, is an analogue of "global" for nested scopes. It enables a nested function to assign a new value even to an immutable variable that is local to the outer function. An example:

def outer():
    outerint = 0
    outerint2 = 10
    def inner():
        nonlocal outerint
        outerint = 1   # Impacts outer's outerint only because of the nonlocal declaration
        outerint2 = 1  # No impact
    inner()
    print(outerint)
    print(outerint2)

outer()

Simulation of nonlocal in Python 2 via a mutable object:

def outer():
    outerint = [1]              # Technique 1: Store int in a list
    class outerNL: pass         # Technique 2: Store int in a class
    outerNL.outerint2 = 11
    def inner():
        outerint[0] = 2         # List members can be modified
        outerNL.outerint2 = 12  # Class members can be modified
    inner()
    print(outerint[0])
    print(outerNL.outerint2)

outer()

Links: globals and locals. To find out which variables exist in the global and local scopes, you can use the "locals()" and "globals()" functions, which return dictionaries:

int1 = 1

def test1():
    int1 = 2
    globals()["int1"] = 3    # Write access seems possible
    print(locals()["int1"])  # 2

test1()
print(int1)  # 3

Write access to the locals() dictionary is discouraged by the Python documentation. Links:
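As a sketch of how the global declaration resolves the UnboundLocalError discussed above (the increment function name is illustrative, not from the original):

```python
count1 = 1

def increment():
    global count1  # without this line, count1 += 1 raises UnboundLocalError
    count1 += 1

increment()
print(count1)  # 2
```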
Python Programming/Operators. Basics. Python math works as expected:

>>> x = 2
>>> y = 3
>>> z = 5
>>> x * y
6
>>> x + y
5
>>> y - x
1
>>> x * y + z
11
>>> (x + y) * z
25
>>> 3.0 / 2.0  # True division
1.5
>>> 3 // 2  # Floor division
1
>>> 2 ** 3  # Exponentiation
8

Note that Python adheres to the PEMDAS order of operations. Powers. There is a built-in exponentiation operator "**", which can take either integers, floating point or complex numbers. This occupies its proper place in the order of operations.

>>> 2**8
256

Floor Division and True Division. In Python 3.x, the slash operator ("/") does "true division" for all types including integers, and therefore, e.g. 5 / 2 == 2.5. The result is of a floating-point type even if both inputs are integers: 4 / 2 yields 2.0. In Python 3.x and latest 2.x, "floor division" for both integer arguments and floating-point arguments is achieved by using the double slash ("//") operator. For negative results, this is unlike the integer division in the C language since -3 // 2 == -2 in Python while -3 / 2 == -1 in C: C rounds the negative result toward zero while Python rounds toward negative infinity. Beware that due to the limitations of floating point arithmetic, rounding errors can cause unexpected results. For example:

>>> print(0.6/0.2)
3.0
>>> print(0.6//0.2)
2.0

For Python 2.x, dividing two integers or longs using the slash operator ("/") uses "floor division" (applying the floor function after division) and results in an integer or long. Thus, 5 / 2 == 2 and -3 / 2 == -2. Using "/" to do division this way is deprecated; if you want floor division, use "//" (available in Python 2.2 and later). Dividing by or into a floating point number will cause Python to use true division. Thus, to ensure true division in Python 2.x, convert one of the operands to a float, as in float(x) / y. Links: Modulus. The modulus (remainder of the division of the two operands, rather than the quotient) can be found using the "%" operator, or by the "divmod" builtin function. The "divmod" function returns a "tuple" containing the quotient and remainder.

>>> 10 % 7
3
>>> -10 % 7
4

Note that -10 % 7 is equal to +4 while in the C language it is equal to -3. That is because Python floors towards negative infinity, not zero. As a result, remainders add towards positive infinity. Consequently, since -10 / 7 = -1.4286 becomes floored to -2.0, the remainder becomes x such that -14 + x = -10. Links: Negation. Unlike some other languages, variables can be negated directly:

>>> x = 5
>>> -x
-5

Comparison. Numbers, strings and other types can be compared for equality/inequality and ordering:

>>> 2 == 3
False
>>> 3 == 3
True
>>> 3 == '3'
False
>>> 2 < 3
True
>>> "a" < "aa"
True

Identity. The operators "is" and "is not" test for object identity and stand in contrast to == (equals): x is y is true if and only if x and y are references to the same object in memory. x is not y yields the inverse truth value. Note that an identity test is more stringent than an equality test since two distinct objects may have the same value.

>>> [1,2,3] == [1,2,3]
True
>>> [1,2,3] is [1,2,3]
False

For the built-in immutable data types (like int, str and tuple) Python uses caching mechanisms to improve performance, i.e., the interpreter may decide to reuse an existing immutable object instead of generating a new one with the same value. The details of object caching are subject to changes between different Python versions and are not guaranteed to be system-independent, so identity checks on immutable objects like 8 is 8, "str" is "str" or (1, 2) is (1, 2) may give different results on different machines. In some Python implementations, the following results are applicable:

print(8 is 8)            # True
print("str" is "str")    # True
print((1, 2) is (1, 2))  # False, even though tuples are immutable
print([1, 2] is [1, 2])  # False
print(id(8) == id(8))    # True
int1 = 8
print(int1 is 8)         # True
oldid = id(int1)
int1 += 2
print(id(int1) == oldid) # False

Links: Augmented Assignment.
There is shorthand for assigning the output of an operation to one of the inputs:

>>> x = 2
>>> x  # 2
2
>>> x *= 3
>>> x  # 2 * 3
6
>>> x += 4
>>> x  # 2 * 3 + 4
10
>>> x /= 5
>>> x  # (2 * 3 + 4) / 5
2
>>> x **= 2
>>> x  # ((2 * 3 + 4) / 5) ** 2
4
>>> x %= 3
>>> x  # ((2 * 3 + 4) / 5) ** 2 % 3
1
>>> x = 'repeat this '
>>> x  # repeat this
'repeat this '
>>> x *= 3  # fill with x repeated three times
>>> x
'repeat this repeat this repeat this '

Logical Operators. Logical operators are operators that act on booleans. or. The or operator returns true if any one of the booleans involved is true. If none of them are true (in other words, they are all false), the or operator returns false.

if a or b:
    do_this
else:
    do_this

and. The and operator only returns true if all of the booleans are true. If any one of them is false, the and operator returns false.

if a and b:
    do_this
else:
    do_this

not. The not operator only acts on one boolean and simply returns its opposite. So, true turns into false and false into true.

if not a:
    do_this
else:
    do_this

The order of operations here is: "not" first, "and" second, "or" third. In particular, "True or True and False or False" becomes "True or False or False", which is True. Warning: logical operators can act on things other than booleans. For instance, "1 and 6" will return 6. Specifically, "and" returns either the first value considered to be false, or the last value if all are considered true. "or" returns the first true value, or the last value if all are considered false. In Python the number zero and "empty" strings, lists, sets, etc. are considered false. You may use bool() to check whether a thing is considered to be true or false in Python. For instance, bool(0) and bool("") both return False. Bitwise Operators. Python operators for bitwise arithmetic are like those in the C language. They include & (bitwise and), | (bitwise or), ^ (exclusive or AKA xor), << (shift left), >> (shift right), and ~ (complement).
Augmented assignment operators (AKA compound assignment operators) for the bitwise operations include &=, |=, ^=, <<=, and >>=. Bitwise operators apply to integers, even negative ones and very large ones; for the shift operators, the second operand must be non-negative. In the Python internal help, this is covered under the topics of EXPRESSIONS and BITWISE. Examples: Examples of augmented assignment operators: Class definitions can overload the operators for the instances of the class; thus, for instance, sets overload the pipe (|) operator to mean set union: {1,2} | {3,4} == {1,2,3,4}. The names of the override methods are __and__ for &, __or__ for |, __xor__ for ^, __invert__ for ~, __lshift__ for <<, __rshift__ for >>, __iand__ for &=, __ior__ for |=, __ixor__ for ^=, __ilshift__ for <<=, and __irshift__ for >>=. Examples of use of bitwise operations include calculation of CRC and MD5. Admittedly, these would usually be implemented in C rather than Python for maximum speed; indeed, Python has libraries for these written in C. Nonetheless, implementations in Python are possible and are shown in the links to Rosetta Code below. Links:
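The two "Examples" headings above appear to have lost their code blocks; the following is a sketch of what typical bitwise and augmented-assignment examples look like (values chosen purely for illustration):

```python
a = 0b1100  # 12
b = 0b1010  # 10

print(bin(a & b))  # 0b1000 - bitwise and
print(bin(a | b))  # 0b1110 - bitwise or
print(bin(a ^ b))  # 0b110  - exclusive or
print(a << 2)      # 48     - shift left
print(a >> 2)      # 3      - shift right
print(~a)          # -13    - complement (two's-complement view)

a &= 0b1010        # augmented assignment: a is now 0b1000
print(bin(a))      # 0b1000
```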
Python Programming/Functions. Function Calls. A "callable object" is an object that can accept some arguments (also called parameters) and possibly return an object (often a tuple containing multiple objects). A function is the simplest callable object in Python, but there are others, such as classes or certain class instances. Defining Functions. A function is defined in Python by the following format:

def functionname(arg1, arg2, ...):
    statement1
    statement2

>>> def functionname(arg1, arg2):
...     return arg1 + arg2

>>> t = functionname(24, 24)  # Result: 48

If a function takes no arguments, it must still include the parentheses, but without anything in them:

def functionname():
    statement1
    statement2

The arguments in the function definition bind the arguments passed at function invocation (i.e. when the function is called), which are called actual parameters, to the names given when the function is defined, which are called formal parameters. The interior of the function has no knowledge of the names given to the actual parameters; the names of the actual parameters may not even be accessible (they could be inside another function). A function can 'return' a value, for example:

def square(x):
    return x * x

A function can define variables within the function body, which are considered 'local' to the function. The locals together with the arguments comprise all the variables within the scope of the function. Any names within the function are unbound when the function returns or reaches the end of the function body. You can return multiple values as follows:

def first2items(list1):
    return list1[0], list1[1]

a, b = first2items(["Hello", "world", "hi", "universe"])
print(a + " " + b)

Keywords: returning multiple values, multiple return values. Declaring Arguments. When calling a function that takes some values for further processing, we need to send some values as Function Arguments.
For example:

>>> def find_max(a, b):
...     if a > b:
...         return str(a) + " is greater than " + str(b)
...     elif b > a:
...         return str(b) + " is greater than " + str(a)

>>> find_max(30, 45)  # Here (30, 45) are the arguments passed for finding the max of these two numbers

The output will be: 45 is greater than 30 Default Argument Values. If any of the formal parameters in the function definition are declared with the format "arg = value", then you will have the option of not specifying a value for those arguments when calling the function. If you do not specify a value, then that parameter will have the default value given when the function executes.

>>> def display_message(message, truncate_after=4):
...     print(message[:truncate_after])

>>> display_message("message")
mess
>>> display_message("message", 6)
messag

Links: Variable-Length Argument Lists. Python allows you to declare two special arguments which allow you to create arbitrary-length argument lists. This means that each time you call the function, you can specify any number of arguments above a certain number.

def function(first, second, *remaining):
    statement1
    statement2

When calling the above function, you must provide a value for each of the first two arguments. However, since the third parameter is marked with an asterisk, any actual parameters after the first two will be packed into a tuple and bound to "remaining".

>>> def print_tail(first, *tail):
...     print(tail)

>>> print_tail(1, 5, 2, "omega")
(5, 2, 'omega')

If we declare a formal parameter prefixed with "two" asterisks, then it will be bound to a dictionary containing any keyword arguments in the actual parameters which do not correspond to any formal parameters. For example, consider the function:

def make_dictionary(max_length=10, **entries):
    return dict([(key, entries[key]) for i, key in enumerate(entries.keys()) if i < max_length])

If we call this function with any keyword arguments other than max_length, they will be placed in the dictionary "entries."
If we include the keyword argument of max_length, it will be bound to the formal parameter max_length, as usual.

>>> make_dictionary(max_length=2, key1=5, key2=7, key3=9)

Links: By Value and by Reference. Objects passed as arguments to functions are passed "by reference"; they are not being copied around. Thus, passing a large list as an argument does not involve copying all its members to a new location in memory. Note that even integers are objects. However, the distinction of "by value" and "by reference" present in some other programming languages often serves to distinguish whether the passed arguments can be "actually changed" by the called function and whether the "calling function can see the changes". Passed objects of "mutable" types such as lists and dictionaries can be changed by the called function and the changes are visible to the calling function. Passed objects of "immutable" types such as integers and strings cannot be changed by the called function; the calling function can be certain that the called function will not change them. For mutability, see also Data Types chapter.
An example:

def appendItem(ilist, item):
    ilist.append(item)  # Modifies ilist in a way visible to the caller

def replaceItems(ilist, newcontentlist):
    del ilist[:]                  # Modification visible to the caller
    ilist.extend(newcontentlist)  # Modification visible to the caller
    ilist = [5, 6]                # No outside effect; lets the local ilist point to a new list object,
                                  # losing the reference to the list object passed as an argument

def clearSet(iset):
    iset.clear()

def tryToTouchAnInteger(iint):
    iint += 1  # No outside effect; lets the local iint point to a new int object,
               # losing the reference to the int object passed as an argument
    print("iint inside:", iint)  # 4 if iint was 3 on function entry

list1 = [1, 2]
appendItem(list1, 3)
print(list1)  # [1, 2, 3]
replaceItems(list1, [3, 4])
print(list1)  # [3, 4]
set1 = set([1, 2])
clearSet(set1)
print(set1)  # set([])
int1 = 3
tryToTouchAnInteger(int1)
print(int1)  # 3

Preventing Argument Change. If an argument is of an immutable type, any changes made to it will remain local to the called function. However, if the argument is of a mutable type, such as a list, changes made to it will update the corresponding value in the calling function. Thus, if the calling function wants to make sure a mutable value it passes to some unknown function will not be changed by it, it must create and pass a copy of the value. An example:

def evil_get_length(ilist):
    length = len(ilist)
    del ilist[:]  # Muhaha: clear the list
    return length

list1 = [1, 2]
print(evil_get_length(list1[:]))  # Pass a copy of list1
print(list1)  # list1 = [1, 2]
print(evil_get_length(list1))  # list1 gets cleared
print(list1)  # list1 = []

Calling Functions. A function can be called by appending the arguments in parentheses to the function name, or an empty pair of parentheses if the function takes no arguments.
foo()
square(3)
bar(5, x)

A function's return value can be used by assigning it to a variable, like so:

x = foo()
y = bar(5, x)

As shown above, when calling a function you can specify the parameters by name, and you can do so in any order:

def display_message(message, start=0, end=4):
    print(message[start:end])

display_message("message", end=3)

This above is valid and start will have the default value of 0. A restriction placed on this is that after the first named argument, all arguments after it must also be named. The following is not valid:

display_message(end=5, start=1, "my message")

because the third argument ("my message") is an unnamed argument. Nested functions. Nested functions are functions defined within other functions. An arbitrary level of nesting is possible. Nested functions can read variables declared in the immediately outside function. For such variables that are mutable, nested functions can even modify them. For such variables that are immutable such as integers, an attempt at modification in the nested function throws UnboundLocalError. In Python 3, an immutable immediately outside variable can be declared in the nested function to be "nonlocal", in an analogy to "global". Once this is done, the nested function can assign a new value to that variable and that modification is going to be seen outside of the nested function. Nested functions can be used in #Closures, as shown below. Furthermore, they can be used to reduce repetition of code that pertains only to a single function, often with a reduced argument list owing to seeing the immediately outside variables.
An example of a nested function that modifies an immediately outside variable that is a list and therefore mutable:

def outside():
    outsideList = [1, 2]
    def nested():
        outsideList.append(3)
    nested()
    print(outsideList)

An example in which the outside variable is first accessed "below" the nested function definition and it still works:

def outside():
    def nested():
        outsideList.append(3)
    outsideList = [1, 2]
    nested()
    print(outsideList)

Keywords: inner functions, internal functions, local functions. Links: Lambda Expressions. A lambda is an anonymous (unnamed) function. It is used primarily to write very short functions that are a hassle to define in the normal way. A function like this:

>>> def add(a, b):
...     return a + b

>>> add(4, 3)
7

may also be defined using lambda

>>> print((lambda a, b: a + b)(4, 3))
7

Lambda is often used as an argument to other functions that expect a function object, such as sorted()'s 'key' argument.

>>> sorted([[3, 4], [3, 5], [1, 2], [7, 3]], key=lambda x: x[1])
[[1, 2], [7, 3], [3, 4], [3, 5]]

The lambda form is often useful as a closure, such as illustrated in the following example:

>>> def attribution(name):
...     return lambda x: x + ' -- ' + name

>>> pp = attribution('John')
>>> pp('Dinner is in the fridge')
'Dinner is in the fridge -- John'

Note that the lambda function can use the values of variables from the scope in which it was created, similar to regular locally defined functions described above. In fact, exporting the precalculations embodied by its constructor function is one of the essential utilities of closures. Links: Generator Functions. When discussing loops, you came across the concept of an "iterator". This yields in turn each element of some sequence, rather than the entire sequence at once, allowing you to deal with sequences much larger than might be able to fit in memory at once. You can create your own iterators, by defining what is known as a "generator function".
To illustrate the usefulness of this, let us start by considering a simple function to return the "concatenation" of two lists:

def concat(a, b):
    return a + b

print(concat([5, 4, 3], ["a", "b", "c"]))

Imagine wanting to do something like concat(range(0, 1000000), range(1000000, 2000000)). That would work, but it would consume a lot of memory. Consider an alternative definition, which takes two iterators as arguments:

def concat(a, b):
    for i in a:
        yield i
    for i in b:
        yield i

Notice the use of the yield statement, instead of return. We can now use this something like

for i in concat(range(0, 1000000), range(1000000, 2000000)):
    print(i)

and print out an awful lot of numbers, without using a lot of memory at all. Note: You can still pass a list or other sequence type wherever Python expects an iterator (like to an argument of your concat function); this will still work, and makes it easy not to have to worry about the difference where you don't need to. Links:
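As a complement to the concat example, here is a minimal hand-written generator showing how values are produced one at a time (the countdown function is illustrative, not from the original):

```python
def countdown(n):
    """Yield n, n-1, ..., 1 one value at a time."""
    while n > 0:
        yield n
        n -= 1

it = countdown(3)
print(next(it))  # 3 - each next() resumes the function at the yield
print(next(it))  # 2
print(list(it))  # [1] - consuming the rest; afterwards the generator is exhausted
```

When the function body runs off the end, the generator raises StopIteration, which is how for loops know to stop.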
General Mechanics/Cross Product. There are two ways to multiply two vectors together, the dot product and the cross product. We have already studied the dot product of two vectors, which results in a scalar or single number. The cross product of two vectors results in a third vector, and is written symbolically as follows: The cross product of two vectors is defined to be perpendicular to the plane defined by these vectors. However, this doesn't tell us whether the resulting vector points upward out of the plane or downward. This ambiguity is resolved using the right-hand rule: The magnitude of the cross product is given by where formula_4 and formula_5 are the magnitudes of formula_6, and formula_7 is the angle between these two vectors. Note that the magnitude of the cross product is zero when the vectors are parallel or anti-parallel, and maximum when they are perpendicular. This contrasts with the dot product, which is maximum for parallel vectors and zero for perpendicular vectors. Notice that the cross product does not commute, i.e. the order of the vectors is important. In particular, it is easy to show using the right-hand rule that An alternate way to compute the cross product is most useful when the two vectors are expressed in terms of components, where the determinant is expanded as if all the components were numbers, giving Note how the positive terms possess a forward alphabetical direction, "xyzxyzx..." (with x following z): With the cross product we can also multiply three vectors together, in two different ways. We can take the dot product of a vector with a cross product, a triple scalar product, The absolute value of this product is the volume of the parallelepiped defined by the three vectors formula_8 Alternately, we can take the cross product of a vector with a cross product, a triple vector product, which can be simplified to a combination of dot products. This form is easier to do calculations with.
The triple vector product is not associative. A nice and useful way to denote the cross product is using the indicial notation where formula_9 is the Levi-Civita alternating symbol and formula_10 is either of the unit vectors formula_11. (A good exercise to convince yourself would be to use this expression and see if you can get formula_12 as defined before.)
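For reference, the component form that the determinant expansion described above yields can be written as follows (standard notation; the symbols A and B here stand in for the two vectors, since the original formulas were not preserved):

```latex
\vec{A} \times \vec{B} =
(A_y B_z - A_z B_y)\,\hat{x} +
(A_z B_x - A_x B_z)\,\hat{y} +
(A_x B_y - A_y B_x)\,\hat{z}
```

Each component follows the cyclic "xyzxyzx..." pattern noted above: the positive term in each component steps forward alphabetically (y then z for the x-component, and so on, with x following z).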
Python Programming/Classes. Classes are a way of aggregating similar data and functions. A class is basically a scope inside which various code (especially function definitions) is executed, and the locals to this scope become "attributes" of the class, and of any objects constructed by this class. An object constructed by a class is called an "instance" of that class. Overview. Classes in Python at a glance:

import math

class MyComplex:
    """A complex number"""   # Class documentation
    classvar = 0.0           # A class attribute, not an instance one
    def phase(self):         # A method
        return math.atan2(self.imaginary, self.real)
    def __init__(self):      # A constructor
        """A constructor"""
        self.real = 0.0      # An instance attribute
        self.imaginary = 0.0

c1 = MyComplex()
c1.real = 3.14        # No access protection
c1.imaginary = 2.71
phase = c1.phase()    # Method call
c1.undeclared = 9.99  # Add an instance attribute
del c1.undeclared     # Delete an instance attribute
print(vars(c1))       # Attributes as a dictionary
vars(c1)["undeclared2"] = 7.77  # Write access to an attribute
print(c1.undeclared2)           # 7.77, indeed
MyComplex.classvar = 1          # Class attribute access
print(c1.classvar == 1)         # True; class attribute access, not an instance one
print("classvar" in vars(c1))   # False
c1.classvar = -1                # An instance attribute overshadowing the class one
MyComplex.classvar = 2          # Class attribute access
print(c1.classvar == -1)        # True; instance attribute access
print("classvar" in vars(c1))   # True

class MyComplex2(MyComplex):             # Class derivation or inheritance
    def __init__(self, re = 0, im = 0):  # A constructor with multiple arguments with defaults
        self.real = re
        self.imaginary = im
    def phase(self):
        print("Derived phase")
        return MyComplex.phase(self)     # Call to a base class; "super"

c3 = MyComplex2()
c4 = MyComplex2(1, 1)
c4.phase()  # Call to the method in the derived class

class Record: pass  # Class as a record/struct with arbitrary attributes

record = Record()
record.name = "Joe"
record.surname = "Hoe"

Defining a Class.
To define a class, use the following format:

class ClassName:
    "Here is an explanation about your class"
    pass

The capitalization in this class definition is the convention, but is not required by the language. It's usually good to add at least a short explanation of what your class is supposed to do. The pass statement in the code above simply tells the Python interpreter to go on and do nothing. You can remove it as soon as you are adding your first statement. Instance Construction. The class is a callable object that constructs an instance of the class when called. Let's say we create a class Foo.

class Foo:
    "Foo is our new toy."
    pass

To construct an instance of the class, Foo, "call" the class object:

f = Foo()

This constructs an instance of class Foo and creates a reference to it in f. Class Members. In order to access the member of an instance of a class, use the syntax instancename.member. It is also possible to access the members of the class definition with ClassName.member. Methods. A method is a function within a class. The first argument (methods must always take at least one argument) is always the instance of the class on which the function is invoked. For example

>>> class Foo:
...     def setx(self, x):
...         self.x = x
...     def bar(self):
...         print(self.x)

If this code were executed, nothing would happen, at least until an instance of Foo were constructed, and then bar were called on that instance. Why a mandatory argument? In a normal function, if you were to set a variable, such as test = 23, you could not access the test variable afterwards. Typing test would say it is not defined. This is true in class functions unless they use the self variable. Basically, in the previous example, if we were to remove self.x, function bar could not do anything because it could not access x. The x in setx() would disappear. The self argument saves the variable as an attribute of the instance, where all methods can reach it. Why self? You do not need to use the name self. However, it is a norm to use self.
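To preview why self matters, note that attributes assigned through self live on each instance separately; a small sketch (the Counter class is illustrative, not from the original):

```python
class Counter:
    def __init__(self):
        self.count = 0  # stored on this particular instance, via self

    def tick(self):
        self.count += 1

a = Counter()
b = Counter()
a.tick()
a.tick()
b.tick()
print(a.count, b.count)  # 2 1 - the two instances do not share state
```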
Invoking Methods. Calling a method is much like calling a function, but instead of passing the instance as the first parameter like the list of formal parameters suggests, use the function as an attribute of the instance.

>>> f = Foo()
>>> f.setx(5)
>>> f.bar()

This will output

5

It is possible to call the method on an arbitrary object, by using it as an attribute of the defining class instead of an instance of that class, like so:

>>> Foo.setx(f, 5)
>>> Foo.bar(f)

This will have the same output. Dynamic Class Structure. As shown by the method setx above, the members of a Python class can change during runtime, not just their values, unlike classes in languages like C++ or Java. We can even delete f.x after running the code above.

>>> del f.x
>>> f.bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 5, in bar
AttributeError: Foo instance has no attribute 'x'

Another effect of this is that we can change the definition of the Foo class during program execution. In the code below, we create a member of the Foo class definition named y. If we then create a new instance of Foo, it will now have this new member.

>>> Foo.y = 10
>>> g = Foo()
>>> g.y
10

Viewing Class Dictionaries. At the heart of all this is a dictionary that can be accessed by "vars(ClassName)"

>>> vars(g)

At first, this output makes no sense. We just saw that g had the member y, so why isn't it in the member dictionary? If you remember, though, we put y in the class definition, Foo, not g.

>>> vars(Foo)

And there we have all the members of the Foo class definition. When Python checks for g.member, it first checks g's vars dictionary for "member," then Foo. If we create a new member of g, it will be added to g's dictionary, but not Foo's.

>>> g.setx(5)
>>> vars(g)

Note that if we now assign a value to g.y, we are not assigning that value to Foo.y.
Foo.y will still be 10, but g.y will now override Foo.y

>>> g.y = 9
>>> vars(g)
>>> vars(Foo)

Sure enough, if we check the values:

>>> g.y
9
>>> Foo.y
10

Note that f.y will also be 10, as Python won't find 'y' in vars(f), so it will get the value of 'y' from vars(Foo). Some may have also noticed that the methods in Foo appear in the class dictionary along with the x and y. If you remember from the section on lambda functions, we can treat functions just like variables. This means that we can assign methods to a class during runtime in the same way we assigned variables. If you do this, though, remember that if we call a method of a class instance, the first parameter passed to the method will always be the class instance itself. Changing Class Dictionaries. We can also access the members dictionary of a class using the __dict__ member of the class.

>>> g.__dict__

If we add, remove, or change key-value pairs from g.__dict__, this has the same effect as if we had made those changes to the members of g.

>>> g.__dict__['z'] = -4
>>> g.z
-4

Why use classes? Classes are special due to the fact that once an instance is made, the instance is independent of all other instances. I could have two instances, each with a different x value, and they will not affect the other's x.

f = Foo()
f.setx(324)
f.bar()
g = Foo()
g.setx(100)
g.bar()

f.bar() and g.bar() will print different values. New Style Classes. New style classes were introduced in Python 2.2. A new-style class is a class that has a built-in as its base, most commonly object. At a low level, a major difference between old and new classes is their type. Old class instances were all of type "instance". New style class instances will return the same thing as x.__class__ for their type. This puts user defined classes on a level playing field with built-ins. Old/Classic classes are slated to disappear in Python 3. With this in mind all development should use new style classes.
New Style classes also add constructs like properties and static methods, familiar to Java programmers.

Old/Classic Class:

>>> class ClassicFoo:
...     def __init__(self):
...         pass

New Style Class:

>>> class NewStyleFoo(object):
...     def __init__(self):
...         pass

Properties. Properties are attributes with getter and setter methods.

>>> class SpamWithProperties(object):
...     def __init__(self):
...         self.__egg = "MyEgg"
...     def get_egg(self):
...         return self.__egg
...     def set_egg(self, egg):
...         self.__egg = egg
...     egg = property(get_egg, set_egg)

>>> sp = SpamWithProperties()
>>> sp.egg
'MyEgg'
>>> sp.egg = "Eggs With Spam"
>>> sp.egg
'Eggs With Spam'

and since Python 2.6, with the @property decorator:

>>> class SpamWithProperties(object):
...     def __init__(self):
...         self.__egg = "MyEgg"
...     @property
...     def egg(self):
...         return self.__egg
...     @egg.setter
...     def egg(self, egg):
...         self.__egg = egg

Static Methods. Static methods in Python are just like their counterparts in C++ or Java. Static methods have no "self" argument and don't require you to instantiate the class before using them. They can be defined using staticmethod():

>>> class StaticSpam(object):
...     def StaticNoSpam():
...         print("You can't have the spam, spam, eggs and spam without any spam... that's disgusting")
...     NoSpam = staticmethod(StaticNoSpam)

>>> StaticSpam.NoSpam()
You can't have the spam, spam, eggs and spam without any spam... that's disgusting

They can also be defined using the function decorator @staticmethod:

>>> class StaticSpam(object):
...     @staticmethod
...     def StaticNoSpam():
...         print("You can't have the spam, spam, eggs and spam without any spam... that's disgusting")

Inheritance. Like all object-oriented languages, Python provides support for inheritance. Inheritance is a simple concept by which a class can extend the facilities of another class, or in Python's case, multiple other classes.
Use the following format for this:

class ClassName(BaseClass1, BaseClass2, BaseClass3, ...):
    ...

ClassName is what is known as the derived class, that is, derived from the base classes. The derived class will then have all the members of its base classes. If a method is defined in both the derived class and a base class, the method in the derived class will override the one in the base class. In order to use the method defined in the base class, it is necessary to call the method as an attribute on the defining class, as in Foo.setx(f, 5) above:

>>> class Foo:
...     def bar(self):
...         print("I'm doing Foo.bar()")
...     x = 10

>>> class Bar(Foo):
...     def bar(self):
...         print("I'm doing Bar.bar()")
...         Foo.bar(self)
...     y = 9

>>> g = Bar()
>>> Bar.bar(g)
I'm doing Bar.bar()
I'm doing Foo.bar()
>>> g.y
9
>>> g.x
10

Once again, we can see what's going on under the hood by looking at the class dictionaries.

>>> vars(g)
>>> vars(Bar)
>>> vars(Foo)

When we ask for g.x, Python first looks in the vars(g) dictionary, as usual. Also as above, it checks vars(Bar) next, since g is an instance of Bar. However, thanks to inheritance, Python will check vars(Foo) if it doesn't find x in vars(Bar).

Multiple inheritance. As shown in the section on inheritance above, a class can be derived from multiple classes:

class ClassName(BaseClass1, BaseClass2, BaseClass3):
    pass

A tricky part of multiple inheritance is method resolution: upon a method call, if the method name is available from multiple base classes or their base classes, which base class method should be called? The method resolution order depends on whether the class is an old-style class or a new-style class. For old-style classes, derived classes are considered from left to right, and the base classes of a base class are considered before moving to the right. Thus, above, BaseClass1 is considered first, and if the method is not found there, the base classes of BaseClass1 are considered. If that fails, BaseClass2 is considered, then its base classes, and so on.
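The left-to-right, depth-first order described above can be sketched in a few lines (run under Python 3; for a simple hierarchy like this one, the new-style C3 order agrees with the old-style rule, and the class names are illustrative):

```python
# who() exists in A (a base of the leftmost base B) and in C (the
# second base). Depth-first, left-to-right resolution finds A first.
class A:
    def who(self):
        return "A"

class B(A):
    pass

class C:
    def who(self):
        return "C"

class D(B, C):
    pass

d = D()
print(d.who())                              # "A", not "C"
print([c.__name__ for c in D.__mro__])      # the full resolution order
```

Under Python 3, `D.__mro__` makes the resolution order explicit: D, then B, then B's base A, and only then C.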
For new-style classes, see the Python documentation online.

Special Methods. There are a number of methods which have reserved names and which are used for special purposes, like mimicking numerical or container operations, among other things. All of these names begin and end with two underscores. It is convention that methods beginning with a single underscore are 'private' to the scope they are introduced within.

Initialization and Deletion. __init__. One of these purposes is constructing an instance, and the special name for this is '__init__'. __init__() is called before an instance is returned (it is not necessary to return the instance manually). As an example:

class A:
    def __init__(self):
        print('A.__init__()')

a = A()

outputs

A.__init__()

__init__() can take arguments, in which case it is necessary to pass arguments to the class in order to create an instance. For example:

class Foo:
    def __init__(self, printme):
        print(printme)

foo = Foo('Hi!')

outputs

Hi!

Here is an example showing the difference between using __init__() and not using __init__():

class Foo:
    def __init__(self, x):
        print(x)

foo = Foo('Hi!')

class Foo2:
    def setx(self, x):
        print(x)

f = Foo2()
Foo2.setx(f, 'Hi!')

outputs

Hi!
Hi!

__del__. Similarly, '__del__' is called when an instance is destroyed, e.g. when it is no longer referenced.

__enter__ and __exit__. These methods are also a constructor and a destructor of sorts, but they are only executed when the instance is used with the "with" statement. Example:

class ConstructorsDestructors:
    def __init__(self):
        print('init')
    def __del__(self):
        print('del')
    def __enter__(self):
        print('enter')
    def __exit__(self, exc_type, exc_value, traceback):
        print('exit')

with ConstructorsDestructors():
    pass

outputs

init
enter
exit
del

__new__. The method that actually creates and returns the new instance, before __init__ is called.

Operator Overloading. Operator overloading allows us to use the built-in Python syntax and operators to call functions which we define.

Programming Practices.
The flexibility of Python classes means that classes can adopt a varied set of behaviors. For the sake of understandability, however, it's best to use many of Python's tools sparingly. Try to declare all methods in the class definition, and always use the <class>.<member> syntax instead of __dict__ whenever possible. Look at classes in C++ and Java to see what most programmers will expect from a class.

Encapsulation. Since all members of a Python class are accessible by functions/methods outside the class, there is no way to enforce encapsulation short of overriding __getattr__, __setattr__ and __delattr__. General practice, however, is for the creator of a class or module to simply trust that users will use only the intended interface, and to avoid limiting access to the workings of the module for the sake of users who do need to access it. When using parts of a class or module other than the intended interface, keep in mind that those parts may change in later versions of the module, and you may even cause errors or undefined behaviors in the module, since those parts were never meant to be public.

Doc Strings. When defining a class, it is convention to document the class using a string literal at the start of the class definition. This string will then be placed in the __doc__ attribute of the class definition.

>>> class Documented:
...     """This is a docstring"""
...     def explode(self):
...         """This method is documented, too! The coder is really serious about
...         making this class usable by others who don't know the code as well
...         as he does.
...         """
...         print("boom")

>>> d = Documented()
>>> d.__doc__
'This is a docstring'

Docstrings are a very useful way to document your code. Even if you never write a single piece of separate documentation (and let's admit it, doing so is the lowest priority for many coders), including informative docstrings in your classes will go a long way toward making them usable.
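The docstrings described above can also be read back programmatically, for both the class and its methods; a minimal sketch (class and method names are illustrative):

```python
# Both the class and each of its methods carry their own __doc__.
class Documented:
    """Class docstring."""

    def explode(self):
        """Method docstring."""
        return "boom"

print(Documented.__doc__)            # Class docstring.
print(Documented.explode.__doc__)    # Method docstring.
```

Tools such as `help()` and the documentation generators mentioned below read exactly these `__doc__` attributes.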
Several tools exist for turning the docstrings in Python code into readable API documentation, e.g., EpyDoc. Don't just stop at documenting the class definition, either. Each method in the class should have its own docstring as well. Note that the docstring for the method "explode" in the example class "Documented" above is fairly lengthy and spans several lines. Its formatting is in accordance with the style suggestions of Python's creator, Guido van Rossum, in PEP 8.

Adding methods at runtime. To a class. It is fairly easy to add methods to a class at runtime. Let's assume that we have a class called Spam and a function cook. We want to be able to use the function cook on all instances of the class Spam:

class Spam:
    def __init__(self):
        self.myeggs = 5

def cook(self):
    print("cooking %s eggs" % self.myeggs)

Spam.cook = cook   # add the function to the class Spam
eggs = Spam()      # NOW create a new instance of Spam
eggs.cook()        # and we are ready to cook!

This will output

cooking 5 eggs

To an instance of a class. It is a bit more tricky to add methods to an instance of a class that has already been created. Let's assume again that we have a class called Spam and we have already created eggs. But then we notice that we wanted to cook those eggs, but we do not want to create a new instance but rather use the already created one:

class Spam:
    def __init__(self):
        self.myeggs = 5

eggs = Spam()

def cook(self):
    print("cooking %s eggs" % self.myeggs)

import types
f = types.MethodType(cook, eggs, Spam)   # Python 2; in Python 3, types.MethodType takes two arguments
eggs.cook = f
eggs.cook()

Now we can cook our eggs, and the last statement will output:

cooking 5 eggs

Using a function. We can also write a function that will make the process of adding methods to an instance of a class easier.
def attach_method(fxn, instance, myclass):
    f = types.MethodType(fxn, instance, myclass)   # Python 2 signature
    setattr(instance, fxn.__name__, f)

All we now need to do is call attach_method with the arguments of the function we want to attach, the instance we want to attach it to, and the class the instance is derived from. Thus our function call might look like this:

attach_method(cook, eggs, Spam)

Note that in the function attach_method we cannot write instance.fxn = f, since this would add a function called fxn to the instance.
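Under Python 3, types.MethodType takes only the function and the instance, so the helper above loses its class argument. The same attach-and-cook idea, sketched for Python 3:

```python
import types

class Spam:
    def __init__(self):
        self.myeggs = 5

def cook(self):
    return "cooking %s eggs" % self.myeggs

def attach_method(fxn, instance):
    # Python 3: bind the function to the instance; no class argument needed.
    setattr(instance, fxn.__name__, types.MethodType(fxn, instance))

eggs = Spam()
attach_method(cook, eggs)
print(eggs.cook())   # cooking 5 eggs
```

As before, the method is attached only to this one instance; the Spam class itself is left unchanged.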
Using GNOME/Tips and Warnings. In GNOME, there may be things that you could do more efficiently, and there are also things that could potentially harm your user experience. To indicate these, tip and warning boxes will appear throughout the guide. Examples. This is a tip. I provide helpful tips. This is a warning. Please read me if I appear!
Trigonometry/Vectors and Dot Products. Consider the vectors U and V (with respective magnitudes |U| and |V|). If those vectors enclose an angle θ then the dot product of those vectors can be written as:

U · V = |U| |V| cos θ

If the vectors can be written in component form as

U = (u1, u2) and V = (v1, v2)

then the "dot product" is given by:

U · V = u1 v1 + u2 v2

For example,

(1, 2) · (3, 4) = 1·3 + 2·4 = 11

and

(1, 0) · (0, 1) = 1·0 + 0·1 = 0

We can interpret the last case by noting that the product is zero because the angle between the two vectors is 90 degrees. Since (1, 0) points along the x-axis and (0, 1) points along the y-axis, this means that the two vectors are perpendicular, and perpendicular vectors always have a dot product of zero.
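The agreement between the angle form |U||V|cos θ and the component form of the dot product can be checked numerically; a small sketch (the sample vectors are illustrative):

```python
import math

def dot(u, v):
    # component formula: u1*v1 + u2*v2 + ...
    return sum(a * b for a, b in zip(u, v))

def magnitude(u):
    return math.sqrt(sum(a * a for a in u))

u = (3.0, 0.0)
v = (2.0, 2.0)

# recover the enclosed angle from the dot product, then check
# that |U||V|cos(theta) reproduces the component formula
theta = math.acos(dot(u, v) / (magnitude(u) * magnitude(v)))
print(magnitude(u) * magnitude(v) * math.cos(theta))   # 6.0 (up to rounding)
print(dot(u, v))                                       # 6.0

# perpendicular vectors give a zero dot product
print(dot((1.0, 0.0), (0.0, 1.0)))                     # 0.0
```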
Electronics/Voltage. Electric Field: E. A stationary charge has an electric field surrounding it given by: formula_1 where A = 4πr² is the surface area of a sphere and Q/εo expresses how much the electric field deforms space. This becomes: formula_2 Voltage: V. The potential difference between two points in an electric field is the voltage V. formula_4 Voltage is used in accelerators to accelerate charged particles to speeds approaching the speed of light. In electronics, voltage is the potential between two opposing charges, given by: formula_5 where εo is the permittivity of the vacuum, q1 and q2 are the values of the two charges (in coulombs), and r1 and r2 are their distances. Note how the voltage falls off as 1/r. If the charges were alike, this potential would not mean much; but if the particles have opposite charges, then the voltage connects the charges. Through voltage, positive charges go to the negative end and negative charges go to the positive end. Voltage causes charged particles to move according to the rules of repulsion and attraction, so electrons move from negative to positive. Two charged particles separated by a distance thus have a potential, a voltage, between them that depends on their separation. If the particles have a similar charge the voltage is repulsive and does not mean much. Work: W. PE: A charged particle in an electric field at distance r has an electrical potential energy U associated with it. formula_6 KE: When a charged particle is placed in an electric field at distance r, there is a force on it. The direction of the force depends on the two charges, and it acts to minimize the PE. formula_7 Acceleration: When a charged particle moves under the force of an electric field, work is done on it. This work causes the particle to accelerate. Work W is the change in U, or F applied over a distance. formula_8 Falling downhill is positive work for the electric field and climbing uphill is positive work for the charged particle.
Similar charges repel, so bringing them together is uphill. Opposite charges attract, so moving them apart is uphill. Current: I. The voltage on a charged particle causes it to accelerate; this flow of charge is known as current. It is sometimes taught that current in electric circuits is composed of electrons, which flow from the negative terminal of the power source to the positive at the speed of light. This is not (completely) true. As you increase voltage you increase the electric field and the speed at which charged particles travel. This is why increasing voltage directly increases current. Reversing the voltage reverses the current. Sometimes you have voltage but no current; this is used in analog and digital circuits to control switches. Negative particles drift from negative to positive voltage, and positive particles drift in the opposite direction, from positive to negative voltage. The particles drift at different speeds in different materials (the speed of "holes", for example, depends on the material's bandgap). Given the presence of holes, we tend to ignore the individual particles and focus on the current flow. Current is measured by the amount of charge flowing per unit time. In talking about current we will mainly talk about electrons flowing, as they are the predominant charge carriers in metal and many circuit components. Current = flow of charge (usually electrons). Current is the change in charge over time. formula_9 Accumulation of current is charge, as in cells and capacitors. formula_10 Resistance: R. Resistance opposes the flow of electrons. In the absence of resistance, current flows unhindered, like a power surge through a short circuit. Resistance combined with voltage sets limits on the current that is allowed to flow through electronics. This is necessary, otherwise the parts would melt (extreme electromigration). As resistance increases, the flow of charge slows to a trickle until current stops flowing.
Given the sheer number of electrons flowing, this does not happen until resistance is effectively infinite. Without resistance there is effectively a short circuit, meaning the electrons flow unhindered and the current is limited only by the voltage. As resistance increases to infinity, the current stops flowing and the circuit becomes an open circuit. formula_11 This is known as Ohm's law, which says that current I is equal to voltage V divided by resistance R; in other words, voltage creates current and resistance limits the flow of current. In a circuit, resistance does not change much, so most of the behavior of a circuit depends on the voltage, which controls the current. Compare current through a conductor versus an insulator like air.
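Ohm's law as stated above can be sketched in a few lines of Python (the function name and the limiting cases chosen here are illustrative):

```python
def current(voltage, resistance):
    """Ohm's law: I = V / R. Zero resistance models a short circuit."""
    if resistance == 0:
        return float('inf')
    return voltage / resistance

print(current(9.0, 3.0))     # 3.0 A: normal operation
print(current(9.0, 0.0))     # inf: a short circuit, current unhindered
print(current(9.0, 1e12))    # ~0 A: effectively an open circuit
```

The two extremes mirror the text: no resistance gives an unbounded current (a short), and very large resistance drives the current toward zero (an open circuit).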
XML - Managing Data Exchange/XPath. Introduction. Throughout the previous chapters you have learned the basic concepts of XSL and how you must refer to nodes in an XML document when performing an XSL transformation. Up to this point you have been using a straightforward syntax for referring to nodes in an XML document. Although the syntax you have used so far has been XPath there are many more functions and capabilities that you will learn in this chapter. As you begin to comprehend how path language is used for referring to nodes in an XML document your understanding of XML as a tree structure will begin to fall into place. This chapter contains examples that demonstrate many of the common uses of XPath, but for the full XPath specification, see the latest version of the standard at: http://www.w3.org/TR/xpath XSL uses XPath heavily. XPath. When you go to copy a file or ‘cd’ into a directory at a command prompt you often type something along the lines of ‘/home/darnell/’ to refer to folders. This enables you to change into or refer to folders throughout your computer’s file system. XML has a similar way of referring to elements in an XML document. This special syntax is called XPath, which is short for XML Path Language. XPath is a language for finding information in an XML document. XPath is used to navigate through elements and attributes in an XML document. XPath, although used for referring to nodes in an XML tree, is not itself written in XML. This was a wise choice on the part of the W3C, because trying to specify path information in XML would be a very cumbersome task. Any characters that form XML syntax would need to be escaped so that it is not confused with XML when being processed. XPath is also very succinct, allowing you to call upon nodes in the XML tree with a great degree of specificity without being unnecessarily verbose. XML as a tree structure. The great benefit about XML is that the document itself describes the structure of data. 
If any of you have researched your family history, you have probably come across a family tree. At the top of the tree is some early ancestor and at the bottom of the tree are the latest children. With a tree structure you can see which children belong to which parents, which grandchildren belong to which grandparents and many other relationships. The neat thing about XML is that it also fits nicely into this tree structure, often referred to as an XML Tree. Understanding node relationships. We will use the following example to demonstrate the different node relationships. <bookstore> <book> <title>Less Than Zero</title> <author>Bret Easton Ellis</author> <year>1985</year> <price>13.95</price> </book> </bookstore> Also, it is still useful in some ways to think of an XML file as simultaneously being a serialized file, like you would view it in an XML editor. This is so you can understand the concepts of preceding and following nodes. A node is said to precede another if the original node is before the other in document order. Likewise, a node follows another if it is after that node in document order. Ancestors and descendants are not considered to be either preceding or following a node. This concept will come in handy later when discussing the concept of an axis. Abbreviated vs. Unabbreviated XPath syntax. XPath was created so that nodes can be referred to very succinctly, while retaining the ability to search on many options. Most uses of XPath will involve searching for child nodes, parent nodes, or attribute nodes of a particular node. Because these uses are so common, an abbreviated syntax can be used to refer to these commonly-searched nodes. Following is an XML document that simulates a tree (the type that has leaves and branches.) It will be used to demonstrate the different types of syntax. 
<?xml version="1.0" encoding="UTF-8"?>
<trunk name="the_trunk">
  <bigBranch name="bb1" thickness="thick">
    <smallBranch name="sb1">
      <leaf name="leaf1" color="brown" />
      <leaf name="leaf2" weight="50" />
      <leaf name="leaf3" />
    </smallBranch>
    <smallBranch name="sb2">
      <leaf name="leaf4" weight="90" />
      <leaf name="leaf5" color="purple" />
    </smallBranch>
  </bigBranch>
  <bigBranch name="bb2">
    <smallBranch name="sb3">
      <leaf name="leaf6" />
    </smallBranch>
    <smallBranch name="sb4">
      <leaf name="leaf7" />
      <leaf name="leaf8" />
      <leaf name="leaf9" color="black" />
      <leaf name="leaf10" weight="100" />
    </smallBranch>
  </bigBranch>
</trunk>

Following are a few examples of XPath location paths in English, abbreviated XPath, then unabbreviated XPath.

Selection 1:
English: All <leaf> elements in this document that are children of <smallBranch> elements that are children of <bigBranch> elements, that are children of the trunk, which is a child of the root.
Abbreviated: /trunk/bigBranch/smallBranch/leaf
Unabbreviated: /child::trunk/child::bigBranch/child::smallBranch/child::leaf

Selection 2:
English: The <bigBranch> elements with 'name' attribute equal to 'bb2', that are children of the trunk element, which is a child of the root.
Abbreviated: /trunk/bigBranch[@name='bb2']
Unabbreviated: /child::trunk/child::bigBranch[attribute::name='bb2']

Notice how we can specify which bigBranch elements we want by using a predicate in the previous example. This narrows the search down to only bigBranch nodes that satisfy the predicate. The predicate is the part of the XPath statement that is in square brackets. In this case, the predicate is asking for bigBranch nodes with their 'name' attribute set to 'bb2'. The last two examples assume we want to specify the path from the root. Let's now assume that we are specifying the path from a <smallBranch> node.

Selection 3:
English: The parent node of the current <smallBranch>. (Notice that this selection is relative to a <smallBranch>.)
Abbreviated: ..
Unabbreviated: parent::node() When using the Unabbreviated Syntax, you may notice that you are calling a parent or child followed by two colons (::). Each of those are called an axis. You will learn more about axes shortly. Also, this may be a good time to explain the concept of a location path. A location path is the series of location steps taken to reach the node/nodes being selected. Location steps are the parts of XPath statements separated by / characters. They are one step on the way to finding the nodes you would like to select. Location steps are comprised of three parts: an axis (child, parents, descendant, etc.), a node test (name of a node, or a function that retrieves one or more nodes), and a series of predicates (tests on the retrieved nodes that narrow the results, eliminating nodes that do not pass the predicate’s test). So, in a location path, each of its location steps returns a node-list. If there are further steps on the path after a location step, the next step is executed on all the nodes returned by that step. Relative vs. Absolute paths. When specifying a path with XPath, there are times when you will already be ‘in’ a node. But other times, you will want to select nodes starting from the root node. XPath lets you do both. If you have ever worked with websites in HTML, it works the same way as referring to other files in HTML hyperlinks. In HTML, you can specify an Absolute Path for the hyperlink, describing where another page is with the server name, folders, and filename all in the URL. Or, if you are referring to another file on the same site, you need not enter the server name or all of the path information. This is called a Relative Path. The concept can be applied similarly in XPath. You can tell the difference by whether there is a ‘/’ character at the beginning of the XPath expression. If so, the path is being specified from the root, which makes it an Absolute Path. 
But if there is no ‘/’ at the beginning of the path, you are specifying a Relative Path, which describes where the other nodes are relative to the context node, or the node for which the next step is being taken. Below is an XSL stylesheet (Exhibit 9.3) for use with our tree.xml file above (Exhibit 9.2). <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="html"/> <!-- Example of an absolute link. The element '/child::trunk' is being specified from the root element. --> <xsl:template match="/child::trunk"> <html> <head> <title>XPath Tree Tests</title> </head> <body> <!-- Example of a relative link. The <for-each> xsl statement will execute for every <bigBranch> node in the ‘current’ node, which is the <trunk>node. --> <xsl:for-each select="child::bigBranch"> <xsl:call-template name="print_out" /> </xsl:for-each> </body> </html> </xsl:template> <xsl:template name="print_out"> <xsl:value-of select="attribute::name" /> <br /> </xsl:template> </xsl:stylesheet> Four types of XPath location paths. In the last two sections you learned about two different distinctions to separate out different location paths: Unabbreviated vs. Abbreviated and Relative vs. Absolute. Combining these two concepts could be helpful when talking about XPath location paths. Not to mention, it could make you sound really smart in front of your friends when you say things like: I only mention this four-way distinction now because it could come in handy while reading the specification, or other texts on the subject. XPath axes. In XPath, there are some node selections whose performance requires the Unabbreviated Syntax. In this case, you will be using an axis to specify each location step on your way through the location path. From any node in the tree, there are 13 axes along which you can step. They are as follows: XPath predicates and functions. 
Sometimes, you may want to use a predicate in an XPath location path to further filter your selection. Normally, you would get a set of nodes from a location path. A predicate is a small expression that gets evaluated for each node in a set of nodes. If the expression evaluates to 'false', then the node is not included in the selection. An example is as follows:

//p[@class='alert']

In the preceding example, every <p> tag in the document is checked to see if its 'class' attribute is set to 'alert'. Only those <p> tags with a 'class' attribute with value 'alert' are included in the set of nodes for this location path. The following example uses a function, which can be used in a predicate to get information about the context node.

/book/chapter[position()=3]

This previous example selects only the chapter of the book in the third position. So, for something to be returned, the current <book> element must have at least 3 <chapter> elements. Also notice that the position function returns an integer. There are many functions in the XPath specification. For a complete list, see the W3C specification at http://www.w3.org/TR/xpath#corelib

Here are a few more functions that may be helpful:

number last() – the position of the last node in the current node set
number position() – the position of the context node being tested
number count(node-set) – the number of nodes in a node-set
boolean starts-with(string, string) – returns true if the first argument starts with the second
boolean contains(string, string) – returns true if the first argument contains the second
number sum(node-set) – the sum of the numeric values of the nodes in the node-set
number floor(number) – the number, rounded down to the nearest integer
number ceiling(number) – the number, rounded up to the nearest integer
number round(number) – the number, rounded to the nearest integer

Example.
The following XML document, XSD schemas, and XSL stylesheet examples are to help you put everything you have learned in this chapter together using real life data. As you study this example you will notice how XPath can be used in the stylesheet to call and modify the output of specific information from the document. Below is an XML document (Exhibit 9.4) <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet href="movies.xsl" type="text/xsl" media="screen"?> <movieCollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="movies.xsd"> <movie> <movieTitle>Meet the Parents</movieTitle> <movieSynopsis> Greg Focker is head over heels in love with his girlfriend Pam, and is ready to pop the big question. When his attempt to propose is thwarted by a phone call with the news that Pam's younger sister is getting married, Greg realizes that the key to Pam's hand in marriage lies with her formidable father. </movieSynopsis> <role> <roleIDREF>bs1</roleIDREF> <roleType>Lead Actor</roleType> </role> <role> <roleIDREF>tp1</roleIDREF> <roleType>Lead Actress</roleType> </role> <role> <roleIDREF>rd1</roleIDREF> <roleType>Lead Actor</roleType> </role> <role> <roleIDREF>bd1</roleIDREF> <roleType>Supporting Actress</roleType> </role> </movie> <movie> <movieTitle>Elf</movieTitle> <movieSynopsis> One Christmas Eve, a long time ago, a small baby at an orphanage crawled into Santa’s bag of toys, only to go undetected and accidentally carried back to Santa’s workshop in the North Pole. Though he was quickly taken under the wing of a surrogate father and raised to be an elf, as he grows to be three sizes larger than everyone else, it becomes clear that Buddy will never truly fit into the elf world. What he needs is to find his real family. This holiday season, Buddy decides to find his true place in the world and sets off for New York City to track down his roots. 
</movieSynopsis> <role> <roleIDREF>wf1</roleIDREF> <roleType>Lead Actor</roleType> </role> <role> <roleIDREF>jc1</roleIDREF> <roleType>Supporting Actor</roleType> </role> <role> <roleIDREF>zd1</roleIDREF> <roleType>Lead Actress</roleType> </role> <role> <roleIDREF>ms1</roleIDREF> <roleType>Supporting Actress</roleType> </role> </movie> <castMember> <castMemberID>rd1</castMemberID> <castFirstName>Robert</castFirstName> <castLastName>De Niro</castLastName> <castSSN>489-32-5984</castSSN> <castGender>male</castGender> </castMember> <castMember> <castMemberID>bs1</castMemberID> <castFirstName>Ben</castFirstName> <castLastName>Stiller</castLastName> <castSSN>590-59-2774</castSSN> <castGender>male</castGender> </castMember> <castMember> <castMemberID>tp1</castMemberID> <castFirstName>Teri</castFirstName> <castLastName>Polo</castLastName> <castSSN>099-37-8765</castSSN> <castGender>female</castGender> </castMember> <castMember> <castMemberID>bd1</castMemberID> <castFirstName>Blythe</castFirstName> <castLastName>Danner</castLastName> <castSSN>273-44-8690</castSSN> <castGender>male</castGender> </castMember> <castMember> <castMemberID>wf1</castMemberID> <castFirstName>Will</castFirstName> <castLastName>Ferrell</castLastName> <castSSN>383-56-2095</castSSN> <castGender>male</castGender> </castMember> <castMember> <castMemberID>jc1</castMemberID> <castFirstName>James</castFirstName> <castLastName>Caan</castLastName> <castSSN>389-49-3029</castSSN> <castGender>male</castGender> </castMember> <castMember> <castMemberID>zd1</castMemberID> <castFirstName>Zooey</castFirstName> <castLastName>Deschanel</castLastName> <castSSN>309-49-4005</castSSN> <castGender>female</castGender> </castMember> <castMember> <castMemberID>ms1</castMemberID> <castFirstName>Mary</castFirstName> <castLastName>Steenburgen</castLastName> <castSSN>988-43-4950</castSSN> <castGender>female</castGender> </castMember> </movieCollection> Below is the second XML document (Exhibit 9.5) <?xml version="1.0" 
encoding="UTF-8"?> <cities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cities.xsd"> <city> <cityID>c2</cityID> <cityName>Mandal</cityName> <cityPopulation>13840</cityPopulation> <cityCountry>Norway</cityCountry> <tourismDescription>A small town with a big atmosphere. Mandal provides comfort away from normal luxuries. </tourismDescription> <capitalCity>c3</capitalCity> </city> <city> <cityID>c3</cityID> <cityName>Oslo</cityName> <cityPopulation>533050</cityPopulation> <cityCountry>Norway</cityCountry> <tourismDescription>Oslo is the capital of Norway for many reasons. It is also the capital location for tourism. The culture, shopping, and attractions can all be experienced in Oslo. Just remember to bring your wallet. </tourismDescription> </city> </cities> Below is the Movies schema (Exhibit 9.6) <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="unqualified"> <!--Movie Collection--> <xsd:element name="movieCollection"> <xsd:complexType> <xsd:sequence> <xsd:element name="movie" type="movieDetails" minOccurs="1" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <!--This contains the movie details.--> <xsd:complexType name="movieDetails"> <xsd:sequence> <xsd:element name="movieTitle" type="xsd:string" minOccurs="1" maxOccurs="unbounded"/> <xsd:element name="movieSynopsis" type="xsd:string"/> <xsd:element name="role" type="roleDetails" minOccurs="1" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> <!--The contains the genre details.--> <xsd:complexType name="roleDetails"> <xsd:sequence> <xsd:element name="roleIDREF" type="xsd:IDREF"/> <xsd:element name="roleType" type="xsd:string"/> </xsd:sequence> </xsd:complexType> <xsd:simpleType name="ssnType"> <xsd:restriction base="xsd:string"> <xsd:pattern value="\d{3}-\d{2}-\d{4}"/> </xsd:restriction> </xsd:simpleType> <xsd:complexType name="castDetails"> <xsd:sequence> <xsd:element 
name="castMemberID" type="xsd:ID"/> <xsd:element name="castFirstName" type="xsd:string"/> <xsd:element name="castLastName" type="xsd:string"/> <xsd:element name="castSSN" type="ssnType"/> <xsd:element name="castGender" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:schema> Below is the Cities schema (Exhibit 9.7) <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified"> <xsd:element name="cities"> <xsd:complexType> <xsd:sequence> <xsd:element name="city" type="cityType" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:complexType name="cityType"> <xsd:sequence> <xsd:element name="cityID" type="xsd:ID"/> <xsd:element name="cityName" type="xsd:string"/> <xsd:element name="cityPopulation" type="xsd:integer"/> <xsd:element name="cityCountry" type="xsd:string"/> <xsd:element name="tourismDescription" type="xsd:string"/> <xsd:element name="capitalCity" type="xsd:IDREF" minOccurs="0" maxOccurs="1"/> </xsd:sequence> </xsd:complexType> </xsd:schema> Below is the XSL stylesheet (Exhibit 9.8) <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:key name="castList" match="castMember" use="castMemberID"/> <xsl:output method="html"/> <!-- example of using an abbreviated absolute path to pull info from cities_xpath.xml for the city "Oslo" specifically --> <!-- specify absolute path to select cityName and assign it the variable "city" --> <xsl:variable name="city" select="document('cities_xpath.xml') /cities/city[cityName='Oslo']/cityName" /> <!-- specify absolute path to select cityCountry and assign it the variable "country" --> <xsl:variable name="country" select="document('cities_xpath.xml') /cities/city[cityName='Oslo']/cityCountry" /> <!-- specify absolute path to select tourismDescription and assign it the variable "description" --> <xsl:variable 
name="description" select="document('cities_xpath.xml') /cities/city[cityName='Oslo']/tourismDescription" /> <xsl:template match="/"> <html> <head> <title>Movie Collection</title> </head> <body> <h2>Movie Collection</h2> <xsl:apply-templates select="movieCollection"/> </body> </html> </xsl:template> <xsl:template match="movieCollection"> <!-- let's say we just want to see the actors. --> <xsl:for-each select="movie"> <hr /> <br /> <b><xsl:text>Movie Title: </xsl:text></b> <xsl:value-of select="movieTitle"/> <br /> <br /> <b><xsl:text>Movie Synopsis: </xsl:text></b> <xsl:value-of select="movieSynopsis"/> <br /> <br /> <!-- actor info begins here. --> <b><xsl:text>Cast: </xsl:text></b> <br /> <!-- specify an abbreviated relative path here for "role." NOTE: there is no predicate in this one; it's just a path. --> <xsl:for-each select="role"> <xsl:sort select="key('castList',roleIDREF)/castLastName"/> <xsl:number value="position()" format="1. " /> <xsl:value-of select="key('castList',roleIDREF)/castFirstName"/> <xsl:text> </xsl:text> <xsl:value-of select="key('castList',roleIDREF)/castLastName"/> <xsl:text>, </xsl:text> <xsl:value-of select="roleType"/> <br /> <xsl:value-of select="key('castList',roleIDREF)/castGender"/> <xsl:text>, </xsl:text> <xsl:value-of select="key('castList',roleIDREF)/castSSN"/> <br /> <br /> </xsl:for-each> </xsl:for-each> <hr /> <!--calling the variables --> <span style="color:red;"> <p><b>Travel Advertisement</b></p> <!-- reference the city, followed by a comma, and then the country --> <p><xsl:value-of select="$city" />, <xsl:value-of select="$country" /></p> <!-- reference the description --> <xsl:value-of select="$description" /> </span> </xsl:template> </xsl:stylesheet>
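The join that the stylesheet performs with xsl:key and key('castList', roleIDREF) can be mimicked in ordinary code. Below is a hedged sketch in Python: a dict plays the role of the castList key, and the cast and role data are abridged by hand from the movie collection document above (the variable names are mine, not part of the exhibits).

```python
# Sketch of the xsl:key join performed by the stylesheet above:
# build a lookup table from castMemberID to cast-member data, then
# resolve each role's roleIDREF through it.
cast_members = [
    {"castMemberID": "wf1", "castFirstName": "Will", "castLastName": "Ferrell"},
    {"castMemberID": "jc1", "castFirstName": "James", "castLastName": "Caan"},
    {"castMemberID": "zd1", "castFirstName": "Zooey", "castLastName": "Deschanel"},
]
roles = [
    {"roleIDREF": "wf1", "roleType": "Lead Actor"},
    {"roleIDREF": "zd1", "roleType": "Lead Actress"},
]

# the equivalent of <xsl:key name="castList" match="castMember" use="castMemberID"/>
cast_list = {c["castMemberID"]: c for c in cast_members}

# the equivalent of key('castList', roleIDREF) inside the inner for-each
credits = [
    f"{cast_list[r['roleIDREF']]['castFirstName']} "
    f"{cast_list[r['roleIDREF']]['castLastName']}, {r['roleType']}"
    for r in roles
]
print(credits)
```

As in the stylesheet, the IDREF is resolved once per role through a precomputed index rather than by scanning the cast list each time.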
General Mechanics/Torque and Angular Momentum. Torque. Torque is the action of a force on a mass which induces it to revolve about some point, called the origin. It is defined as

τ = r × F

where r is the position of the mass relative to the origin and F is the force acting on it. Notice that the torque is 0 in a number of circumstances. If the force points directly toward or away from the origin, the cross product is zero, resulting in zero torque, even though the force is non-zero. Likewise, if r = 0, the torque is 0. Thus, a force acting at the origin produces no torque. Both of these limits make sense intuitively, since neither induces the mass to revolve around the origin. Angular momentum. The angular momentum of a mass relative to a point O is defined as

L = r × p

where p is the ordinary (also called "linear") momentum of the mass. The angular momentum is 0 if the motion of the object is directly towards or away from the origin, or if it is located at the origin. If we take the cross product of the position vector and Newton's second law, we obtain an equation that relates torque and angular momentum:

r × F = r × dp/dt = d(r × p)/dt − (dr/dt) × p

Since the cross product of parallel vectors is zero, the last term vanishes ((dr/dt) × p = v × mv = 0), and this simplifies to

τ = dL/dt

This is the rotational version of Newton's second law. For both torque and angular momentum the location of the origin is arbitrary, and is generally chosen for maximum convenience. However, it is necessary to choose the same origin for both the torque and the angular momentum. For the case of a central force, i.e. one which acts along the line of centers between two objects (such as gravity), there often exists a particularly convenient choice of origin. If the origin is placed at the center of the sun (which is assumed not to move under the influence of the planet's gravity), then the torque exerted on the planet by the sun's gravity is 0, which means that the angular momentum of the planet about the center of the sun is constant in time. No other choice of origin would yield this convenient result. 
We already know about two fundamental conservation laws—those of energy and linear momentum. We believe that angular momentum is similarly conserved in isolated systems. In other words, particles can exchange angular momentum between themselves, but the vector sum of the angular momentum of all the particles in a system isolated from outside influences must remain constant. In the modern view, conservation of angular momentum is a consequence of the isotropy of space—i.e. the properties of space don't depend on direction. This is in direct analogy with conservation of ordinary momentum, which we recall is a consequence of the homogeneity of space. Angular velocity and centrifugal force. If an object is rotating about an axis n̂, n̂ being a unit vector, at angular frequency ω, we say it has angular velocity ω = ω n̂. Despite the name, this is "not" the rate of change of an angle, nor even of a vector. If a constant vector u is rotating with angular velocity ω about a fixed point then

du/dt = ω × u

Applied to the velocity, this says the acceleration is always at right angles to both the velocity and the axis of rotation. When the axis is changing, ω can be defined as the vector which makes this true. Note that on the left hand side of this equation u is a vector in a fixed coordinate system with variable components but on the right hand side its components are given in a moving coordinate system, where they are fixed. We can distinguish them more clearly by using subscripts, r for rotating and f for fixed, then extend this to arbitrary vectors: for "any" vector u,

(du/dt)_f = (du/dt)_r + ω × u

Using this, we can write Newton's second law in the rotating frame:

F = m(dv_r/dt)_r + 2mω × v_r + mω × (ω × r)

or, rearranging

m(dv_r/dt)_r = F − 2mω × v_r − mω × (ω × r)

The mass behaves as if there were two additional forces acting on it. The first term, −2mω × v_r, is called the "Coriolis" force. The second term, −mω × (ω × r), is recognizable as the familiar centrifugal force.
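The relation between torque and angular momentum can be checked numerically. The following Python sketch (the circular trajectory, unit mass, and step size are arbitrary choices for illustration) verifies that a central force produces zero torque and leaves the angular momentum constant:

```python
# Numerical check that torque equals the rate of change of angular momentum,
# using circular motion r(t) = (cos t, sin t, 0) for a unit mass, so that
# the acceleration a = -r is a central force.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def r(t):  # position
    return (math.cos(t), math.sin(t), 0.0)

def v(t):  # velocity = dr/dt
    return (-math.sin(t), math.cos(t), 0.0)

def a(t):  # acceleration = dv/dt, pointing toward the origin
    return (-math.cos(t), -math.sin(t), 0.0)

t, h = 0.7, 1e-6
torque = cross(r(t), a(t))            # tau = r x F, with m = 1
L1 = cross(r(t), v(t))                # L = r x p
L2 = cross(r(t + h), v(t + h))
dL_dt = tuple((b - c) / h for b, c in zip(L2, L1))

# the force is central, so both the torque and dL/dt come out (numerically) zero
print(torque, dL_dt)
```

Here L = (0, 0, 1) at every instant, in line with the planet example: a central force exerts no torque about the center, so angular momentum about that point is conserved.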
Using GNOME/Panel. The Panel is an area, often along the edge of the screen, which can contain program launchers, menus, the window list and notification area, and other applets. GNOME by default has two panels on the desktop, one up the top of the screen and one down the bottom. The top panel contains some menus to the left, for Applications, Actions and Places, a notification area to the right, and a clock to the left of the notification area. The bottom panel contains a button on the left which hides all visible windows, displaying the desktop, a window list to the right of that, which contains buttons for all windows open on the desktop, and to the right of the panel a desktop pager. Customizing Panels. Panels can be customized in a number of ways. It is possible to change their appearance slightly, to change their contents, and to change their position, size and behaviour. Contents of Panels. The contents of panels are all applets, which can be added, moved and removed on the panel. To move applets on the panel, right-clicking the applet in the appropriate area will open a context menu which contains options such as to "Lock to Panel", "Move" and "Remove from Panel". To move the applet, select Move and the applet will follow your mouse as you move it along the panel, changing its position. You will notice when moving applets around, the applet will flip to other sides of other applets, or move those applets out of the way if necessary. By right-clicking an applet and selecting "Lock to Panel", the applet will not move around for other applets, and will be locked in position on the panel. If you are unable to move an applet on the panel, it might be because it is locked to the panel. Clicking the option from the context menu will deselect this option, allowing you to move the applet. Also note that you must sometimes click on the appropriate spot on the applet. 
For instance, right-clicking on an entry in the window list will open a context menu related to the window represented by the button (i.e. "Close", "Maximize", etc.). In the case of the window list, notification area, and some other applets, the right place to click is a small area to one side of the applet. The "Remove from Panel" option under the context menu for applets will remove the applet in question from the panel completely. It can be returned at any time. Sometimes, however, preferences related to the applet can be lost when removing it from the panel. To add an applet to the panel, right-click on an empty space on the panel. Empty space might be hard to find sometimes; the window list, for example, when empty, looks like empty space, but is not, so watch out for this, and move / resize / remove applets if they get in your way. The context menu that appears will provide you with an option to "Add to Panel". Selecting this will open a dialog which lists many applets that you can add, and their descriptions. They will be placed on the panel roughly where you right-clicked to open the context menu, and you can move them around and change them at will. Sometimes when clicking on applets special options appear that are specific to that applet. These can provide options and preferences for that applet that you can change, for example, on their behaviour and appearance. You can experiment with some applets to see how you can customize them. Appearance and Behaviour of Panels. The appearance of panels can be changed in a number of ways by right-clicking on an empty space on the panel to open a context menu, and selecting "Properties". Under the "Background" tab, it is possible to change the background colour of the panel, to provide a background image which will be repeated across the panel, or to change the transparency of the panel. The size and position of panels can be changed by opening the "Properties" dialog, and looking under the "General" tab. 
"Orientation" changes which edge of the screen the panel sits on. It is possible for it to sit horizontally on the top or bottom of the screen, or for it to sit vertically on the left or right. It is possible to change the height or width (depending on the orientation of the panel) of the panel by changing the "Size" option. Selecting or unselecting "Expand" will determine whether the panel expands to fill an entire side of the screen or whether it is only as big as its contents. "Autohide" will hide the panel, and make it reappear when your mouse hovers over the appropriate edge of the screen. "Show hide buttons" determines whether there are "hide" buttons on the panel, which you can click to make the panel appear or disappear. It is possible to change the side of the screen a panel is on by clicking on an empty space on the panel and dragging it. When the "Expand" option for the panel is not set, it is possible for the panel to occupy parts of the screen other than the edges. Adding or Removing Panels. It is possible to add a new panel by right-clicking on an empty space on an existing panel to open the context menu and selecting "New Panel". This will create a new panel on another side of the screen, which you can change around however you want. You can delete an existing panel by selecting "Delete this Panel" from the context menu. Customizing Panels from the command line. See http://www.byteclub.net/wiki/GnomePanel (archived at http://web.archive.org/web/20080723151557/http://www.byteclub.net/wiki/GnomePanel).
Latin/Lesson 2-Active v Passive. A verb's voice shows the relationship between the subject and the action expressed by the verb. Latin has two voices: active and passive. In the active voice, the subject of the clause performs the verb on something else (the object), e.g., "The girl sees the boy." In the passive voice, the subject of the sentence receives the action of the verb, e.g., "The boy is seen by the girl." The personal endings in the active voice are: -ō/-m, -s, -t, -mus, -tis, -nt. The personal endings in the passive voice (present, imperfect, future) are: -r, -ris, -tur, -mur, -mini, -ntur. In the perfect, pluperfect and future perfect, the passive voice is formed by the fourth principal part plus the proper forms of sum, esse. For the perfect tense, use the present forms of esse; for the pluperfect, use the imperfect forms of esse; and for the future perfect, use the future forms of esse. The fourth principal part, when used in a passive construction, acts as a first-second declension adjective and is declined accordingly. As stated before, when the passive voice is used, the subject receives the action of the verb from another agent. This agent, when it is a person, is expressed by the preposition ā/ab plus the ablative case. This construction is called the "ablative of personal agent". The "ablative of cause" is used without a preposition when the agent is not a person. Deponent verbs. Some verbs are always passive in form, even though they have an active meaning. For example: sequor, sequi, secutus sum = I follow. Some, called semi-deponent verbs, take on a passive form only in the perfect. For example: audeo, audere, ausus sum = I dare. Note that some deponent and semi-deponent verbs take the accusative case (e.g. vereor, vereri, veritus sum = I fear), some the ablative (e.g. utor, uti, usus sum = I use) and some the dative (e.g. confido, confidere, confisus sum = I trust). When you first encounter such a verb in Latin, be sure to remember the case of the object the verb is taking along with its spelling and meaning.
General Mechanics/Momentum. Changing mass. So far we've assumed that the mass of the objects being considered is constant, which is not always true. Mass is conserved overall, but it can be useful to consider objects, such as rockets, which are losing or gaining mass. We can work out how to extend Newton's second law to this situation by considering a rocket two ways, as a single object of variable mass, and as two objects of fixed mass which are being pushed apart. We find that force is the rate of change of a quantity,

p = mv

which we call "linear momentum". Newton's third law. Newton's third law says that the force, F_12, exerted by a mass m_1 on a second mass m_2, is equal and opposite to the force F_21 exerted by the second mass on the first:

dp_1/dt = F_21, dp_2/dt = F_12 = −F_21

if there are no external forces on the two bodies. We can add the two momenta together to get

d(p_1 + p_2)/dt = F_21 + F_12 = 0

so the total linear momentum is conserved. Ultimately, this is a consequence of space being homogeneous. Centre of mass. Suppose two constant masses are subject to external forces, F_1 and F_2. Then the total force on the system, F_total, is

F_total = F_1 + F_2

because the internal forces cancel out. If the two masses are considered as one system, F_total should be the product of the total mass and the average acceleration, which we expect to be related to some kind of average position. The average acceleration is the second derivative of the average position, "weighted by mass":

(m_1 a_1 + m_2 a_2)/(m_1 + m_2) = d²/dt² [(m_1 r_1 + m_2 r_2)/(m_1 + m_2)]

This average position is called the "centre of mass", and accelerates at the same rate as if it had the total mass of the system, and were subject to the total force. We can extend this to any number of masses under arbitrary external and internal forces. If each mass m_i at position r_i is subject to an external force F_i and internal forces F_ij from the other masses,

m_i d²r_i/dt² = F_i + Σ_j F_ij

then the position of the centre of mass R is

R = Σ_i m_i r_i / M, where M = Σ_i m_i

We can now take the second derivative of R:

M d²R/dt² = Σ_i m_i d²r_i/dt² = Σ_i F_i + Σ_i Σ_j F_ij

But the sum of "all" the internal forces is zero, because Newton's third law makes them cancel in pairs. 
Thus, the second term in the above equation drops out and we are left with:

M d²R/dt² = Σ_i F_i = F_total

The centre of mass always moves like a body of the same total mass under the total external force, irrespective of the internal forces.
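The centre-of-mass result is easy to confirm numerically. In this Python sketch (the masses and forces are made-up numbers for illustration), an internal force appears equal and opposite on the two bodies, as Newton's third law requires, and drops out of the centre-of-mass acceleration:

```python
# Two masses pushed by external forces and coupled by an internal force.
# The centre of mass accelerates as if it were one body of the total mass
# under the total external force, regardless of the internal force.
m1, m2 = 2.0, 3.0
F1_ext, F2_ext = 4.0, 6.0  # external forces
F_int = 12.5               # internal force: +F_int on body 1, -F_int on body 2

a1 = (F1_ext + F_int) / m1
a2 = (F2_ext - F_int) / m2

# acceleration of the centre of mass R = (m1 r1 + m2 r2) / (m1 + m2)
a_cm = (m1 * a1 + m2 * a2) / (m1 + m2)
print(a_cm)  # equals (F1_ext + F2_ext) / (m1 + m2), independent of F_int
```

Changing F_int to any other value leaves a_cm unchanged, which is exactly the cancellation-in-pairs argument above.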
Portuguese/Contents/Numbers.
XML - Managing Data Exchange/Web Services. Web Services Overview. Web Services are a new breed of Web application. They are self-contained, self-describing, modular applications that can be published, located, and invoked across the Web. Web services perform functions, which can be anything from simple requests to complicated business processes. Once a Web service is deployed, other applications (and other Web services) can discover and invoke the deployed service. Web services make use of XML to describe the request and response, and HTTP as their network transport. The primary difference between a Web Service and a web application relates to collaboration. Web applications are simply business applications which are located or invoked using web protocols. Similarly, Web Services also perform computing functions remotely over a network. However, Web Services use internet protocols with the specific intent of enabling interoperable machine-to-machine coordination. Web Services have emerged as a solution to problems associated with distributed computing. Distributed computing is the use of multiple systems to perform a function rather than having a single system perform it. The previous technologies used in distributed computing, primarily Common Object Request Broker Architecture (CORBA) and Distributed Component Object Model (DCOM), had some limitations. For example, neither has achieved complete platform independence or easy transport over firewalls. Additionally, DCOM is not vendor independent, being a Microsoft product. Some of the primary needs for a distributed computing standard were platform independence, vendor independence, and easy transport across firewalls. Over time, business information systems became highly configured and differentiated. This inevitably made system interaction extremely costly and time consuming. Developers began realizing the benefits of standardizing Web Service development. Using web standards seemed to be an intuitive and logical step toward attaining these goals. 
Web standards already provided a platform independent means for system communication and were readily accepted by information system users. The end result was the development of Web Services. A Web Service forms a distributed environment, in which objects can be accessed remotely via standardized interfaces. It uses a three-tiered model, defining a service provider, a service consumer, and a service broker. This allows the Web Service to be a loose relationship, so that if a service provider goes down, the broker can always direct consumers to another one. Similarly, there are many brokers, so consumers can always find an available one. For communication, Web Services use open Web standards: TCP/IP, HTTP, and XML based SOAP. At higher levels, technologies such as XAML, XLANG (transactional support for complex web transactions involving multiple web services), and XKMS (ongoing work by Microsoft and Verisign to support authentication and registration) might be added. SOAP. Simple Object Access Protocol (SOAP) is a method for sending information to and from Web Services in an extensible format. SOAP can be used to send information or remote procedure calls encoded as XML. Essentially, SOAP serves as a universally accepted method of communication with web services. Businesses adhere to the SOAP conventions in order to simplify the process of interacting with Web Services. <SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP:Header> <!-- SOAP header --> </SOAP:Header> <SOAP:Body SOAP:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <!-- SOAP body --> </SOAP:Body> </SOAP:Envelope> A SOAP message contains either a request method for invoking a Web Service, or response information to a Web Service request. Adhering to this layout when developing independent Web Services provides notable benefits to the businesses. 
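An envelope with the layout above can be generated with any XML library rather than by string concatenation. Here is a minimal Python sketch using the standard xml.etree module (the payload element is a placeholder of my own, not part of the SOAP specification):

```python
# Build a skeletal SOAP envelope like the one shown above.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("SOAP", SOAP_NS)  # keep the familiar SOAP: prefix

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(
    envelope,
    f"{{{SOAP_NS}}}Body",
    {f"{{{SOAP_NS}}}encodingStyle": "http://schemas.xmlsoap.org/soap/encoding/"},
)
# placeholder payload element (application-specific in a real message)
ET.SubElement(body, "payload").text = "hello"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

Using a real XML serializer guarantees well-formed output and correct namespace handling, which matters once the body carries typed, namespace-qualified parameters.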
Because Web Services are designed to be used by a myriad of actors, developers want them to be easily adoptable. Using established and familiar standards of communication ultimately reduces the amount of effort it takes users to effectively interact with a Web Service. The SOAP Envelope is used for defining and organizing the content contained in Web Service messages. Primarily, the SOAP envelope serves to indicate that the specified document will be used for service interaction. It contains an optional SOAP Header and a SOAP Body. Messages are sent in the SOAP body, and the SOAP header is used for sending other information that wouldn't be expected in the body. For example, if the SOAP:actor attribute is present in the SOAP header, it indicates who the recipient of the message should be. A web service transaction involves a SOAP request and a SOAP response. The example we will be using is a Web Service provided by Weather.gov. The input is latitude, longitude, a start date, the number of days of forecast information desired, and the format of the data. The SOAP request will look like this: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <m:NDFDgenByDayRequest xmlns:m="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl"> <latitude xsi:type="xsd:decimal">33.955464</latitude> <longitude xsi:type="xsd:decimal">-83.383245</longitude> <startDate xsi:type="xsd:date"></startDate> <numDays xsi:type="xsd:integer">1</numDays> <format>24 Hourly</format> </m:NDFDgenByDayRequest> </SOAP-ENV:Body> </SOAP-ENV:Envelope> The startDate was left empty because this will automatically get the most recent data. The format data type is not defined because it is defined in the WSDL document. The response SOAP looks like this. 
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <NDFDgenByDayResponse xmlns:SOAPSDK1="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl"> <dwmlByDayOut xsi:type="xsd:string">...</dwmlByDayOut> </NDFDgenByDayResponse> </SOAP-ENV:Body> </SOAP-ENV:Envelope> SOAP handles data by encoding it on the sender side and decoding it on the receiver side. The data types handled by SOAP are based on the W3C XML Schema specification. Simple types include strings, integers, floats, and doubles, while compound types are made up of primitive types. <element name="name" type="xsd:string" /> <SOAP:Array SOAP:arrayType="xsd:string[2]"> <string>Web</string> <string>Services</string> </SOAP:Array> Because they are text based, SOAP messages generally have no problem getting through firewalls or other barriers. They are the ideal way to pass information to and from web services. Service Description - WSDL. Web Service Description Language (WSDL) was created to provide information about how to connect to and query a specific Web Service. This document also adheres to strict formatting and organizational guidelines. However, the methods, parameters, and service information are application specific. Web Services perform different functionality and contain independent information; however, they are all organized the same way. By creating a standard organizational architecture for these services, developers can effectively invoke and utilize them with little to no familiarization. To use a web service, a developer can follow the design standards of the WSDL to easily determine all the information and procedures associated with its usage. Essentially, a WSDL document serves as an instruction for interacting with a Web Service. It contains no application logic, giving the service a level of autonomy. 
This enables users to effectively interact with the service without having to understand its inner workings. The following is an example of a WSDL file for the National Weather Service forecast web service used in the SOAP examples above. <?xml version="1.0"?> <definitions xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:si="http://soapinterop.org/xsd" xmlns:tns="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" xmlns:typens="http://www.weather.gov/forecasts/xml/DWMLgen/schema/DWML.xsd" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl"> <types> <xsd:schema targetNamespace="http://www.weather.gov/forecasts/xml/DWMLgen/schema/DWML.xsd"> <xsd:import namespace="http://schemas.xmlsoap.org/soap/encoding/" /> <xsd:import namespace="http://schemas.xmlsoap.org/wsdl/" /> <xsd:simpleType name="formatType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="24 hourly" /> <xsd:enumeration value="12 hourly" /> </xsd:restriction> </xsd:simpleType> <xsd:simpleType name="productType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="time-series" /> <xsd:enumeration value="glance" /> </xsd:restriction> </xsd:simpleType> <xsd:complexType name="weatherParametersType"> <xsd:all> <xsd:element name="maxt" type="xsd:boolean" /> <xsd:element name="mint" type="xsd:boolean" /> <xsd:element name="temp" type="xsd:boolean" /> <xsd:element name="dew" type="xsd:boolean" /> <xsd:element name="pop12" type="xsd:boolean" /> <xsd:element name="qpf" type="xsd:boolean" /> <xsd:element name="sky" type="xsd:boolean" /> <xsd:element name="snow" type="xsd:boolean" /> <xsd:element name="wspd" type="xsd:boolean" /> <xsd:element name="wdir" 
type="xsd:boolean" /> <xsd:element name="wx" type="xsd:boolean" /> <xsd:element name="waveh" type="xsd:boolean" /> <xsd:element name="icons" type="xsd:boolean" /> <xsd:element name="rh" type="xsd:boolean" /> <xsd:element name="appt" type="xsd:boolean" /> </xsd:all> </xsd:complexType> </xsd:schema> </types> <message name="NDFDgenRequest"> <part name="latitude" type="xsd:decimal"/> <part name="longitude" type="xsd:decimal" /> <part name="product" type="typens:productType" /> <part name="startTime" type="xsd:dateTime" /> <part name="endTime" type="xsd:dateTime" /> <part name="weatherParameters" type="typens:weatherParametersType" /> </message> <message name="NDFDgenResponse"> <part name="dwmlOut" type="xsd:string" /> </message> <message name="NDFDgenByDayRequest"> <part name="latitude" type="xsd:decimal" /> <part name="longitude" type="xsd:decimal" /> <part name="startDate" type="xsd:date" /> <part name="numDays" type="xsd:integer" /> <part name="format" type="typens:formatType" /> </message> <message name="NDFDgenByDayResponse"> <part name="dwmlByDayOut" type="xsd:string" /> </message> <portType name="ndfdXMLPortType"> <operation name="NDFDgen"> <documentation> Returns National Weather Service digital weather forecast data </documentation> <input message="tns:NDFDgenRequest" /> <output message="tns:NDFDgenResponse" /> </operation> <operation name="NDFDgenByDay"> <documentation> Returns National Weather Service digital weather forecast data summarized over either 24- or 12-hourly periods </documentation> <input message="tns:NDFDgenByDayRequest" /> <output message="tns:NDFDgenByDayResponse" /> </operation> </portType> <binding name="ndfdXMLBinding" type="tns:ndfdXMLPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http" /> <operation name="NDFDgen"> <soap:operation soapAction="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl#NDFDgen" style="rpc" /> <input> <soap:body use="encoded" 
namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </input> <output> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </output> </operation> <operation name="NDFDgenByDay"> <soap:operation soapAction="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl#NDFDgenByDay" style="rpc" /> <input> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </input> <output> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </output> </operation> </binding> <service name="ndfdXML"> <documentation>The service has two exposed functions, NDFDgen and NDFDgenByDay. For the NDFDgen function, the client needs to provide a latitude and longitude pair and the product type. The client also needs to provide the start and end time of the period that it wants data for. For the time-series product, the client needs to provide an array of boolean values corresponding to which weather values should appear in the time series product. For the NDFDgenByDay function, the client needs to provide a latitude and longitude pair, the date it wants to start retrieving data for and the number of days worth of data. The client also needs to provide the format that is desired.</documentation> <port name="ndfdXMLPort" binding="tns:ndfdXMLBinding"> <soap:address location="http://www.weather.gov/forecasts/xml/SOAP_server/ndfdXMLserver.php" /> </port> </service> </definitions> The WSDL file defines a service, made up of different endpoints, called ports. The port is made up of a network address and a binding. 
<service name="ndfdXML"> <documentation>The service has two exposed functions, NDFDgen and NDFDgenByDay. For the NDFDgen function, the client needs to provide a latitude and longitude pair and the product type. The client also needs to provide the start and end time of the period that it wants data for. For the time-series product, the client needs to provide an array of boolean values corresponding to which weather values should appear in the time series product. For the NDFDgenByDay function, the client needs to provide a latitude and longitude pair, the date it wants to start retrieving data for and the number of days worth of data. The client also needs to provide the format that is desired.</documentation> <port name="ndfdXMLPort" binding="tns:ndfdXMLBinding"> <soap:address location="http://www.weather.gov/forecasts/xml/SOAP_server/ndfdXMLserver.php" /> </port> </service> The binding identifies the binding style and protocol for each operation. In this case, it uses Remote Procedure Call style binding, using SOAP. 
<binding name="ndfdXMLBinding" type="tns:ndfdXMLPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http" /> <operation name="NDFDgen"> <soap:operation soapAction="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl#NDFDgen" style="rpc" /> <input> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </input> <output> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </output> </operation> <operation name="NDFDgenByDay"> <soap:operation soapAction="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl#NDFDgenByDay" style="rpc" /> <input> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </input> <output> <soap:body use="encoded" namespace="http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" /> </output> </operation> </binding> Port Types are abstract collections of operations. In this case, the operations are NDFDgen and NDFDgenByDay. <portType name="ndfdXMLPortType"> <operation name="NDFDgen"> <documentation> Returns National Weather Service digital weather forecast data </documentation> <input message="tns:NDFDgenRequest" /> <output message="tns:NDFDgenResponse" /> </operation> <operation name="NDFDgenByDay"> <documentation> Returns National Weather Service digital weather forecast data summarized over either 24- or 12-hourly periods </documentation> <input message="tns:NDFDgenByDayRequest" /> <output message="tns:NDFDgenByDayResponse" /> </operation> </portType> Finally, messages are used by the operations to communicate - in other words, to pass parameters and return values. 
<message name="NDFDgenByDayRequest">
  <part name="latitude" type="xsd:decimal" />
  <part name="longitude" type="xsd:decimal" />
  <part name="startDate" type="xsd:date" />
  <part name="numDays" type="xsd:integer" />
  <part name="format" type="typens:formatType" />
</message>
<message name="NDFDgenByDayResponse">
  <part name="dwmlByDayOut" type="xsd:string" />
</message>

From the WSDL file, a consumer should be able to access data in a web service. For a more detailed analysis of this particular web service, please visit Weather.gov Service Discovery - UDDI. You've seen how WSDL can be used to share interface definitions for Web Services, but how do you go about finding a Web Service in the first place? There are countless independent Web Services that are developed and maintained by just as many different organizations. Upon adopting Web Service practices and methodologies, developers sought to foster the involvement and creative reuse of their systems. It soon became apparent that there was a need for an enumerated record of these services and their respective locations. This information would empower developers to leverage the best practices and processes of Web Services quickly and easily. Additionally, having a central reference of current Web Service capabilities enables developers to avoid developing redundant applications. UDDI defines registries in which services can be published and found. The UDDI specification was created by Microsoft, Ariba, and IBM. UDDI defines a data structure and Application Programming Interface (API). In the three-tier model mentioned before, UDDI is the service broker. Its function is to enable service consumers to find appropriate service providers. Connecting to UDDI registries using Java can be accomplished through the Java API for XML Registries (JAXR). JAXR creates a layer of abstraction, so that it can be used with UDDI and other types of XML registries, such as the ebXML Registry and Repository standard.
Using Java With Web Services. To execute a SOAP message, an application must be used to communicate with the service provider. Due to its flexibility, almost any programming language can be used to execute a SOAP message. For our purposes, however, we will be focusing on using Java to interact with Web Services. Using Java with web services requires some external libraries. Let's go through using Java to query the Temperature Web Service we talked about earlier.

import java.io.*;
import java.net.*;
import java.util.*;
import org.apache.soap.util.xml.*;
import org.apache.soap.*;
import org.apache.soap.rpc.*;

public class TempClient {
    public static float getTemp(URL url, String zipcode) throws Exception {
        Call call = new Call();
        // Service uses standard SOAP encoding
        String encodingStyleURI = Constants.NS_URI_SOAP_ENC;
        call.setEncodingStyleURI(encodingStyleURI);
        // Set service locator parameters
        call.setTargetObjectURI("urn:xmethods-Temperature");
        call.setMethodName("getTemp");
        // Create input parameter vector
        Vector params = new Vector();
        params.addElement(new Parameter("zipcode", String.class, zipcode, null));
        call.setParams(params);
        // Invoke the service ...
        Response resp = call.invoke(url, "");
        // ... and evaluate the response
        if (resp.generatedFault()) {
            throw new Exception();
        } else {
            // Call was successful. Extract response parameter and return result
            Parameter result = resp.getReturnValue();
            Float rate = (Float) result.getValue();
            return rate.floatValue();
        }
    }

    // Driver to illustrate service invocation
    public static void main(String[] args) {
        try {
            URL url = new URL("http://services.xmethods.net:80/soap/servlet/rpcrouter");
            String zipcode = "30605";
            float temp = getTemp(url, zipcode);
            System.out.println(temp);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

This Java code effectively hides all the SOAP from the user. It invokes the target object by name and URL, and sets the parameter "zipcode". But what does the underlying SOAP request look like?
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:n="urn:xmethods-Temperature"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soap:Body soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <n:getTemp>
      <zipcode xsi:type="xs:string">30605</zipcode>
    </n:getTemp>
  </soap:Body>
</soap:Envelope>

As you see, the SOAP request uses the parameters passed in by the Java Call to fill out the SOAP envelope and direct the message. Similarly, the response comes back into the Java program as '70.0'. The response SOAP is also hidden by the Java program.

<?xml version='1.0' encoding='UTF-8'?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <ns1:getTempResponse xmlns:ns1="urn:xmethods-Temperature"
        SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <return xsi:type="xsd:float">70.0</return>
    </ns1:getTempResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Here's an additional example of using Java and SOAP to interact with Web Services. This particular Web Service is called the "US Zip Validator"; it takes a ZIP code as a parameter and returns a corresponding latitude and longitude. When developing applications to interact with Web Services, the first step should be to review the WSDL document. The WSDL document for this service is located here: http://www.webservicemart.com/uszip.asmx?WSDL This document will contain all the necessary instructions for interacting with the "US Zip Validator" Web Service.
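Before moving on, it may help to see how a client could unpack the getTemp response envelope shown above without any SOAP toolkit at all. The following is a minimal sketch using only the JDK's built-in JAXP DOM parser; the class name ParseTempResponse is invented for illustration, and the envelope string is copied from the response above.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import org.w3c.dom.Document;

public class ParseTempResponse {
    // The SOAP response envelope shown above, as a literal string
    static final String SAMPLE =
        "<?xml version='1.0' encoding='UTF-8'?>"
      + "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\""
      + " xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\""
      + " xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">"
      + "<SOAP-ENV:Body>"
      + "<ns1:getTempResponse xmlns:ns1=\"urn:xmethods-Temperature\""
      + " SOAP-ENV:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">"
      + "<return xsi:type=\"xsd:float\">70.0</return>"
      + "</ns1:getTempResponse>"
      + "</SOAP-ENV:Body>"
      + "</SOAP-ENV:Envelope>";

    // Extract the float carried in the <return> element of the response body
    static float extractTemp(String soapEnvelope) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(soapEnvelope.getBytes("UTF-8")));
        String text = doc.getElementsByTagName("return").item(0).getTextContent();
        return Float.parseFloat(text);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractTemp(SAMPLE));  // prints 70.0
    }
}
```

Toolkits like Apache SOAP do exactly this kind of parsing on your behalf, which is why the TempClient code earlier never touches the XML directly.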
SOAPClient4XG
Modified by - Duncan McAllister
From: http://www.ibm.com/developerworks/xml/library/x-soapcl/

import java.io.*;
import java.net.*;
import java.util.*;

public class SOAPClient4XG {
    public static void main(String[] args) throws Exception {
        args = new String[2];
        args[0] = "http://services.xmethods.net:80/soap/servlet/rpcrouter";
        args[1] = "SOAPRequest.xml";
        if (args.length < 2) {
            System.err.println("Usage: java SOAPClient4XG " +
                "http://soapURL soapEnvelopefile.xml" +
                " [SOAPAction]");
            System.err.println("SOAPAction is optional.");
            System.exit(1);
        }

        String SOAPUrl = args[0];
        String xmlFile2Send = args[1];
        String SOAPAction = "";

        // Create the connection where we're going to send the file.
        URL url = new URL(SOAPUrl);
        URLConnection connection = url.openConnection();
        HttpURLConnection httpConn = (HttpURLConnection) connection;

        // Open the input file. After we copy it to a byte array, we can see
        // how big it is so that we can set the HTTP Content-Length property.
        FileInputStream fin = new FileInputStream(xmlFile2Send);
        ByteArrayOutputStream bout = new ByteArrayOutputStream();

        // Copy the SOAP file to the open connection.
        copy(fin, bout);
        fin.close();
        byte[] b = bout.toByteArray();

        // Set the appropriate HTTP parameters.
        httpConn.setRequestProperty("Content-Length", String.valueOf(b.length));
        httpConn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        httpConn.setRequestProperty("SOAPAction", SOAPAction);
        httpConn.setRequestMethod("POST");
        httpConn.setDoOutput(true);
        httpConn.setDoInput(true);

        // Everything's set up; send the XML that was read in to b.
        OutputStream out = httpConn.getOutputStream();
        out.write(b);
        out.close();

        // Read the response and write it to standard out.
        InputStreamReader isr = new InputStreamReader(httpConn.getInputStream());
        BufferedReader in = new BufferedReader(isr);
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }

    // copy method from E.R. Harold's book "Java I/O"
    public static void copy(InputStream in, OutputStream out) throws IOException {
        // do not allow other threads to read from the
        // input or write to the output while copying is
        // taking place
        synchronized (in) {
            synchronized (out) {
                byte[] buffer = new byte[256];
                while (true) {
                    int bytesRead = in.read(buffer);
                    if (bytesRead == -1)
                        break;
                    out.write(buffer, 0, bytesRead);
                }
            }
        }
    }
}

This Java class refers to an XML document (SOAPRequest.xml), which is used as the SOAP message. This document should be included in the same project folder as the Java application invoking the service. Reviewing a service's WSDL document tells you which method to invoke; note that although the "US Zip Validator" WSDL was given above as an example, the request below actually invokes the getTemp method of the Temperature service used earlier. This information is contained within the SOAP body and includes the appropriate parameters.

SOAPRequest.xml

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:n="urn:xmethods-Temperature"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soap:Body soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <n:getTemp>
      <zipcode xsi:type="xs:string">30605</zipcode>
    </n:getTemp>
  </soap:Body>
</soap:Envelope>

Following a successful interaction, the Web Service provider will return a response that is similar in format to the request. When developing in NetBeans, run this project and examine the subsequent SOAP message response in the Tomcat output window. Web Services with NetBeans. The NetBeans version used for this explanation is 5.0.
After NetBeans is open, click on the "Runtime" tab in the left pane, then right-click "Web Services" and select "Add Web Service." In the "URL" field, enter the address of the web service's WSDL file (in our example above, "http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl") and click "Get Web Service Description." This will bring up the web service's information.
OpenOffice.org. "OpenOffice.org User's Manual": OpenOffice.org is a free, open source alternative to Microsoft Office and WordPerfect. It provides almost all of their functionality at no cost to the end user. You can read more about the creators and the project's goals here. This manual also applies to early versions of LibreOffice. Applications. OpenOffice.org, as an office suite, is divided into several related sub-applications. These applications are designed to work together but can also be used as stand-alone programs: Derived Works (Forks). An up-to-date list is kept at http://wiki.services.openoffice.org/wiki/DerivedWorks. Contributors.
OpenOffice.org/Performance Tips. If you want OpenOffice.org to start up even faster, try these speed-boosting tricks. Version-specific information. Version 2.0. In 2006, there were some benchmarks which showed that OOo 2.0 still used a lot of memory and was slower than MS Office, but OOo developers promised that they were working on improvements. This version branch requires at least 128 MB of RAM to run properly. Running OOo 2.x off a LiveCD (such as Knoppix) without swap space requires 2–4 times as much, depending on whether you use a window manager (such as IceWM) or a desktop environment (such as KDE/GNOME). By default, OpenOffice.org 2.x and earlier include a number of writing aids (dictionaries and thesauri). By version 2.4 these are extensive, covering approximately 25 languages and language variants, and all are loaded into memory when starting OOo. For everyday use, most languages' dictionaries and some of their variants can be turned off through Tools > Options > Language Settings > Writing Aids. Version 1.x. Early versions of OpenOffice.org, before 1.1, suffered from a severe speed problem, often taking over a minute to load. As of May 2004, that issue seems to have been resolved with OOo 1.1. This version branch needs at least 64 MB of RAM to run properly. Compiling. If you are compiling OpenOffice.org, make sure you are using the right CFLAGS. A well-tuned selection can yield a 20% performance increase. Using the latest version of GCC can also make a big difference.
OpenGL Programming. Welcome to the OpenGL Programming book. OpenGL is an API used for drawing 3D graphics. OpenGL is not a programming language; an OpenGL application is typically written in C or C++. What OpenGL does allow you to do is draw attractive, realistic 3D graphics with minimal effort. The API is typically used to interact with a GPU, to achieve hardware-accelerated rendering. You are free, and encouraged, to share and contribute to this wikibook: it is written in the spirit of freely shared knowledge that belongs to humanity. Feel free to make copies, teach it in school or professional classes, improve the text, write comments or even new sections. We're looking for contributors. If you know about OpenGL, feel free to leave comments, expand TODO sections and write new ones! Modern OpenGL. "Modern" OpenGL means OpenGL 2.1+, OpenGL ES 2.0+ and WebGL, with a programmable pipeline and shaders. The basics arc. Tutorial_drafts: ideas and notes for upcoming tutorials. The lighting arc. This series of tutorials is a C++ port of the GLSL wikibook Basic Lighting tutorials. This series of tutorials is a C++ port of the GLSL wikibook Basic Texturing tutorials. This series of tutorials is a C++ port of the GLSL wikibook tutorials about Textures in 3D. There are more tutorials to port at the GLSL wikibook! The scientific arc. And more to come. Mini-portal. This series shows how to implement a teleportation system similar to Valve's Portal, step by step, using OpenGL. Glescraft. This series shows how to render a voxel-based world, similar to Minecraft. Using the accumulation buffer. Note: not all video cards support the accumulation buffer. Cutting-edge OpenGL. If you do not target old mobile devices or the web, you can upgrade to OpenGL (ES) 3.x / 4.x. It notably introduces new kinds of shaders (Geometry, Tessellation Control, Tessellation Evaluation, and Compute) and lots of other features. Legacy OpenGL 1.x.
"Legacy" OpenGL is about OpenGL 1.x and OpenGL ES 1.x, with a fixed pipeline and no shaders. External links. Wikibooks. Related WikiBooks: Ports. The following websites provide conversion of the tutorials to other programming languages or platforms:
Using GNOME/Applications. GNOME comes with a variety of applications to allow you to work.
Using GNOME/Preferences. GNOME is designed with good defaults in mind, but it is also configurable, allowing you to change your preferences. Here is a guide to the available preferences.
Guitar/Tuning the Guitar. Advances in manufacturing have solved many of the tuning problems associated with the budget guitars of yesteryear. Entry-level guitars are available from major manufacturers such as Yamaha and Fender which are entirely suitable for beginners. All guitar stores sell tuning forks and electronic tuners. A tuning fork provides only a single reference note for tuning, and for this reason an electronic tuner will be more useful to the complete beginner. When new strings have been put on a guitar they often fall out of tune very easily. New strings will stretch until they reach a point where their elasticity diminishes and then they will remain at the correct tension and frequency. Strings need to be broken in. It will take time to work all the slack out of the strings but the process can be sped up. Put on new strings and tune to just below concert pitch using an electronic guitar tuner. Then pull each string an inch away from the fretboard; this will instantly put them out of tune. Use your electronic guitar tuner to retune the strings to just below concert pitch and repeat the process. After a while the slack should be gone from the strings and the guitar can be tuned to concert pitch and should stay in tune. Tuning the Guitar. Sound is created by the disturbance of particles in the air. The vibrations of a struck string cause the air particles to move in waves which the ear receives and the brain interprets. When a string is attached at two points, as the strings on a guitar are, striking it causes a sound to be produced at a regular frequency. The length, thickness and tension of the string determine the pitch of the note it produces. If you have a string of a certain length and tension stretched across a wooden board which produces a known frequency and you wish to double the frequency to produce the note an octave above, you simply halve the length over which it is stretched and keep the same tension.
That is exactly what happens on a guitar when you fret any of the open strings at the twelfth fret. There are many different tunings for the open strings of the guitar but the most common is known as standard tuning or E tuning. In standard tuning the open strings should be tuned to the notes E A D G B e. The diagram below illustrates the open strings and the twelfth fret. Note that the upper case E represents the thickest string and the lower case e represents the thinnest string. The diagram is oriented towards the player's view.

e|-----------------------|
B|-----------------------|
G|-----------------------|
D|-----------------------|
A|-----------------------|
E|-----------------------|

Four-Five Tuning. Four-Five tuning uses the open A string as the first reference note. A tuning aid is useful to ensure that the open A string is at concert pitch. Concert pitch is an internationally agreed standard that assigns A = 440 Hz. The guitar is a transposing instrument and is notated an octave higher than its actual pitch to avoid having to use the bass clef in standard notation. The notated middle C is played on the third fret of the A string though the pitched middle C is to be found on the first fret of the B string. A = 440 Hz is the fifth fret of the high e string but for convenience the open A string (110 Hz) is used as the reference note. The diagram below shows the notes to be fretted.

e|-------------------0---|
B|---------------0---5---|
G|-----------0---4-------|
D|-------0---5-----------|
A|---0---5---------------|
E|---5-------------------|

Follow these six steps to tune your guitar using the Four-Five method (each step follows from the fretted notes in the diagram above):
Step 1: Tune the open A string to concert pitch using a tuning aid.
Step 2: Fret the low E string at the fifth fret and tune it until the fretted note matches the open A string.
Step 3: Fret the A string at the fifth fret and tune the open D string to match the fretted note.
Step 4: Fret the D string at the fifth fret and tune the open G string to match the fretted note.
Step 5: Fret the G string at the fourth fret and tune the open B string to match the fretted note.
Step 6: Fret the B string at the fifth fret and tune the open high e string to match the fretted note.
It is recommended that strings be brought up to their correct pitch when tuning. The Four-Five method has the disadvantage of progressively increasing tuning inaccuracies by the use of multiple reference notes. Harmonic Tuning. This method of tuning uses harmonics.
By lightly touching a string directly above its fret-wire, the fundamental of a note is silenced, leaving only a series of overtones. Any note played on any instrument consists of a fundamental and a harmonic series of overtones. The twelfth, seventh and fifth frets are the easiest nodes at which to sound harmonics. After striking the string the finger should be removed quickly to produce the harmonic. The fretboard diagram below shows the pairs of harmonics that are used. You start by tuning the harmonic on the 7th fret of the A string to the harmonic on the low E string. Then the harmonic on the 7th fret of the D string is tuned with the harmonic on the 5th fret of the A string. Tuning the G string to the D string is done in the same manner. Tune the harmonic on the B string to the harmonic on the 4th fret of the G string. Tune the harmonic on the e string to the harmonic on the B string.

e|-------------7*------------|
B|--------5*-----------------|
G|------4*-----7*------------|
D|--------5*---7*------------|
A|--------5*---7*------------|
E|--------5*-----------------|
* = Play a harmonic at this fret

Tuning with harmonics can progressively increase tuning errors due to the use of multiple reference notes. The fundamental is the most dominant frequency of the harmonic series and it is recommended that a further tuning check be made using fretted notes. Tempered Tuning. This method is recommended because it applies equal temperament with the use of a single reference note. This method uses the open high e string as the reference note. You tune the unison and octave E notes that are found on the other strings to the open high e string. Hold the fretted note down as you turn the tuning peg and you will feel the string move under your fingertip. This involves striking the strings with your right hand and then using the right hand to turn the tuning pegs. It may feel awkward at first but with practice it becomes familiar.
The open low E string is the only string to be tuned to the high e string without fretting. The fretted note on the 5th fret of the B string should be tuned wide by the amount of two beats per second in relation to the high e string.
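The arithmetic behind these tuning systems is easy to sketch. Under twelve-tone equal temperament, each fret raises the pitch by a factor of 2^(1/12), so the twelfth fret exactly doubles the open-string frequency, matching the halved-length octave described earlier. The 110 Hz open A figure comes from the text; the class and method names below are invented for illustration.

```java
public class FretPitch {
    // Frequency at a given fret under twelve-tone equal temperament:
    // each fret raises the pitch by a factor of 2^(1/12)
    static double fretFrequency(double openHz, int fret) {
        return openHz * Math.pow(2.0, fret / 12.0);
    }

    public static void main(String[] args) {
        double openA = 110.0;  // open A string, 110 Hz (concert pitch reference)
        System.out.printf("Open A: %.2f Hz%n", fretFrequency(openA, 0));
        System.out.printf("A at fret 12: %.2f Hz%n", fretFrequency(openA, 12));  // one octave up: 220 Hz
        System.out.printf("A at fret 5 (D): %.2f Hz%n", fretFrequency(openA, 5));
    }
}
```

The fifth-fret value is what the Four-Five method relies on: the note five frets up on one string matches the next open string (four frets up in the case of G to B).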
Latin/Lesson 7-The Gerund and Participles. = Participles = Participles are verbs which function grammatically like adjectives. English, aided by auxiliary verbs, is able to form participle phrases in many tenses. Latin participles have no such auxiliaries. This limits the usage of the participle in Latin, according to some wiki-scholars of Classical Studies. Present Active Participles. Present participles are formed by adding -ns to the stem of the verb. Present participles are declined like 3rd declension adjectives. In cases besides the nominative, the -s becomes -t. Examples: 1. ferens, ferentis 2. capiens, capientis 3. ens, entis Exercises. Form the present participle of each of the following Latin verbs and translate it: Uses. The examples will show participles of the verb "amo, amare, amavi, amatum" (to love). In deponent verbs, the perfect passive participle is formed in the same way as in regular verbs. However, since the nature of the deponent verb is passive in form and active in meaning, the participle is translated actively. Remember that participles are adjectives, and therefore must be declined to agree with the noun which they modify in case, number and gender. Gerund. The gerund is a verbal noun which is used to refer to the action of a verb. For example: ars scribendi = the art of writing. The gerund is declined as a second declension neuter noun. It is formed by taking the present stem and adding -ndum. Gerundive. The gerundive is a 1st/2nd declension adjective formed the same way as the gerund, and its function overlaps somewhat with the gerund, but otherwise differs. The literal translation of the gerundive uses "to be", e.g. defendendus, -a, -um = "to be defended". =Exercises= 1. Convert the following subjunctive purpose clauses into gerund or gerundive clauses with the same meaning.
For example: militabat ut patriam defenderet → militabat ad patriam defendendum "or" militabat patriam defendendi causa "or" militabat ad patriam defendendam. Try to use each construction twice. 2. Translate into Latin. For example: I must see the temple -> templum mihi videndum est
Constructivist Theories in Education/Piaget. What would Piaget (1896-1980) say to today's primary teachers to encourage a constructivist approach? Luckily, we need not use our imagination here: Piaget dealt with elementary teachers and had very specific recommendations for them in different subject areas (Piaget, 1998, De la pédagogie. Paris: Éditions Odile Jacob). As early as 1921 he was hired by Claparède to become director of studies at the Jean-Jacques Rousseau Institute in Geneva. He worked for the International Bureau of Education from 1929 to 1967 and was Director of the Institute of Educational Sciences at the University of Geneva from 1932 to 1971. Piaget's advice to teachers, in essence, was to provide conditions under which children can be guided to learn for themselves: not just to master existing knowledge, but to become excited about the possibility of creating new knowledge. What this means in different disciplines will depend on the specific topics being learned. The idea is that students construct their own understandings rather than being passive recipients of knowledge. Of course, Jean Piaget's ideas are not new. Nearly 2,000 years ago, Plutarch (46-127) said: "The mind is not a vessel to be filled but a fire to be kindled." See also. —discusses Piaget's theory as it applies to classroom learning.
Physics with Calculus/Mechanics/Energy and Conservation of Energy. One of the most fundamental concepts in physics is energy. It's difficult to define what energy actually "is", but one useful definition might be "a measure of the amount of change taking place within a system, or the potential for change to take place within the system". Roughly speaking, energy can be divided into two forms, "kinetic" and "potential". Kinetic energy is the energy of movement or change. Potential energy is the energy a system has as a result of being able to undergo some change. To provide a specific example, a falling book has kinetic energy, because its position in space is changing (it is moving downwards). A book resting on a shelf has no potential energy relative to the shelf since it has a height of zero meters relative to the shelf. However, if the book is elevated to some height above the shelf, then it has potential energy proportional to the height at which it resides above the shelf. An object can have both kinetic and potential energy at the same time. For example, an object which is falling, but has not yet reached the ground has kinetic energy because it is moving downwards, and potential energy because it is able to move downwards even further than it already has. The sum of an object's potential and kinetic energies is called the object's mechanical energy. As an object falls its potential energy decreases, while its kinetic energy increases. The decrease in potential energy is exactly equal to the increase in kinetic energy. Another important concept is work. Similarly to the way we defined energy, we may define work as "a measure of the amount of change brought about in a system, by the application of energy". For instance, you might do work on a book by picking it up off the floor and putting it on a shelf. In doing so, you have increased the potential energy of the book (by increasing its potential to fall down to the floor). 
The amount of potential energy you have "given" to the book is exactly equal to the amount of work you do by lifting it onto the shelf. Mathematically, however, energy is very easy to define. Kinetic energy is (1/2)mv². Potential energy is a little bit trickier. Say we have a force which can be written as the gradient (a three-dimensional derivative; if you don't know what it is, pretend it is a normal derivative and you should be able to understand things in one dimension) of some function φ, times the mass of the particle. That is, F = -m∇φ. Then the potential energy is just U = mφ + C, where C is an arbitrary constant. What arbitrary definitions, you might say. At first, you might think so, but it turns out that the work done by the force is the change in kinetic energy (see ../Work and Energy/). They are actually very closely related. In fact, the potential energy plus the kinetic energy due to the force is constant! Aha, so this "arbitrary" potential energy decreases at exactly the same rate this "arbitrary" kinetic energy increases. They must be the same thing in different forms! It is not so arbitrary after all. This is the conservation of energy. In fact, since the particles are moving at finite velocities, this is the much stronger local conservation of energy for mechanical systems. Another amazing fact is that it appears that all forces are conservative (this changes in electrodynamics, but energy is still conserved)! Even friction appears to be conservative on a molecular level. A slightly more mathematical treatment is available in ../Work and Energy/. We may concisely state the following principle, which applies to closed systems (i.e. when there are no interactions with things outside the system): the total energy of a closed system remains constant. When we consider open systems (i.e. when there are interactions with things outside the system), it is possible for energy to be added to the system (by doing work on it) or taken from the system (by having the system do work).
In this case the following rule applies: the change in the energy of the system equals the work done on the system minus the work done by the system. This leads us to consider the conservation of energy and other quantities. In many cases, "you get out what you put in". If you put 3 pairs of socks into an empty dryer, you don't need to analyze the exact configuration of the dryer, the temperature profile, or other things to figure out how many socks will come out of the dryer. You'll get 3 pairs of socks out. A conservation law, in its most general form, simply states that the total amount of some quantity within a closed system doesn't change. In the example above, the conserved quantity would be socks, the system would be the dryer, and the system is closed as long as nobody puts socks into or takes socks out of the dryer. If the system is not closed, we can always regard a larger system which is closed and which encompasses the system we were initially considering (e.g. the house in which the dryer is located), even though, in extreme cases, this may lead us to consider the number of socks (or whatever) in the entire Universe! Conservation laws help us solve problems quickly because we know that we will have the same amount of the conserved quantity at the end of some process as we did at the start. The fundamental laws of conservation are: conservation of mass, conservation of energy, and conservation of charge. Returning to our example above, the 'conservation of socks' is, in fact, a consequence of the law of conservation of mass. It should be noted that in the context of nuclear reactions, energy can be converted to mass and vice versa. In such reactions, the total amount of mass plus energy doesn't change. Therefore the first two of these conservation laws are often treated as a single law of "conservation of mass-energy". Combining these laws with Newton's laws gives other derived conserved quantities such as momentum and angular momentum. Within a closed system, the total amount of energy is always conserved. This translates as the sum of the n changes in energy totaling to 0.
An example of such a change in energy is dropping a ball from a distance above the ground. The energy of the ball changes from potential energy to kinetic energy as it falls. Because this is the only change in energy within our system, we will take a simple physical problem and model it in order to demonstrate. An object of mass 10 kg is dropped from a height of 3 m. What is its velocity when it is 1 m above the ground? We start by evaluating the potential energy of the object in its initial state, taking g = 9.8 m/s²: PE = mgh = 10 × 9.8 × 3 = 294 J. The potential energy of the object at a height of 1 m above the ground is given in a similar fashion: PE = 10 × 9.8 × 1 = 98 J. Hence the change in potential energy is ΔPE = 294 − 98 = 196 J. By definition, the change in potential energy is equivalent to the change in kinetic energy. The initial KE of the object is 0, because it is at rest. Hence the final kinetic energy is equal to the change in KE: (1/2)mv² = 196 J. Rearranging for v: v = √(2 × 196 / 10) ≈ 6.26 m/s. We can check our work using the kinematic equation v² = u² + 2as: with u = 0, a = 9.8 m/s² and s = 2 m, v² = 39.2, so v ≈ 6.26 m/s. This follows because we can actually use the equations for energy to generate the above kinematic equation.
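The worked problem above (a 10 kg object dropped from 3 m, evaluated 1 m above the ground) can be checked with a short program. The class name is invented for illustration and g = 9.8 m/s² is assumed, as in many textbooks.

```java
public class DropEnergy {
    static final double G = 9.8;  // gravitational acceleration, m/s^2 (assumed)

    // Speed after falling from hStart to hEnd, from conservation of energy:
    // m g (hStart - hEnd) = (1/2) m v^2  =>  v = sqrt(2 g (hStart - hEnd))
    // Note that the mass cancels, so it never appears here.
    static double speedAt(double hStart, double hEnd) {
        return Math.sqrt(2.0 * G * (hStart - hEnd));
    }

    public static void main(String[] args) {
        double v = speedAt(3.0, 1.0);  // dropped from 3 m, evaluated at 1 m
        System.out.printf("v = %.2f m/s%n", v);  // about 6.26 m/s
    }
}
```

The fact that the mass drops out of the final expression is the programmatic echo of the observation that both the potential energy change (196 J) and the kinetic energy gained scale linearly with m.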
Physics with Calculus/Appendix 2/Examples of Derivatives. Motion. For x(t), position as a function of time: Velocity: the rate of change of position with respect to time. Acceleration: the rate of change of velocity with respect to time. Jerk: the rate of change of acceleration with respect to time. Jerk is not commonly used in first-year motion problems. Its main application is in dealing with the travel of large objects whose mass changes as they move. One example might be a rocket travelling up from rest: as it burns fuel, its mass and centre of gravity change, so its acceleration is not constant even under a constant thrust. Mechanics (Statics). Given the details of the loading of a beam, we can represent it on a diagram of the beam with arrows indicating forces, curved arrows indicating applied moments (torques), and shaded regions representing uniformly varying or uniformly distributed loads. We can use this diagram (commonly known as a free body diagram) and the information contained within it to draw a diagram representing the shear forces (V) in the beam, and can also derive an equation that represents these. The equation may not be as simple as a polynomial, and is quite often a series of continuous functions with endpoints at the points on the beam where the forces occur. We can perform an indefinite integral on each of these segments of the beam to get more information on it. The indefinite integrals combine to form a diagram of the bending moments in the beam. Bending moments are a special type of moment, as the beam is most likely to fail where the bending moments are at a relative extremum. By definition, any indefinite integral will contain a constant, C. In the case of the bending moment diagram, our C is merely the endpoint of the previous segment. The only exception is when we have an applied moment, whose value we add or subtract (depending on direction).
Hence, differentiating the bending moment model will give us the shear force, and differentiating the shear force model brings us back to our loading diagram. Integrating the bending moment model twice (after dividing by the beam's stiffness) gives the shape of the deflection of the beam under the loading. For any bending moment model b, as a function of the distance x from the end of the beam, b(x) = EI·f''(x), where E is the elastic modulus, I is the second moment of area of the cross-section, and f(x) is a function describing the deflection of the beam.
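As a sketch of these relationships (the beam and load values below are assumed, not from the text): for a simply supported beam of span L under a uniform load w, the shear is V(x) = wL/2 − wx, and its indefinite integral is the bending moment M(x) = (wL/2)x − wx²/2. Differentiating M numerically recovers V:

```python
L_span = 4.0   # beam span, m (assumed example)
w = 2.0        # uniform load, kN/m (assumed example)

def shear(x):
    # left reaction w*L/2 minus the load picked up over distance x
    return w * L_span / 2 - w * x

def moment(x):
    # indefinite integral of the shear; the constant C is 0
    # because the moment vanishes at the simple support (x = 0)
    return (w * L_span / 2) * x - w * x ** 2 / 2

# dM/dx should equal V at every point along the beam
x, h = 1.5, 1e-6
dM_dx = (moment(x + h) - moment(x - h)) / (2 * h)
print(dM_dx, shear(x))

# The moment is at a relative extremum where the shear crosses zero,
# i.e. at midspan, which is where this beam is most likely to fail
print(shear(L_span / 2))
```

The midspan moment here is wL²/8, the textbook value for this loading, which is a useful sanity check on the integration.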
Physics with Calculus/Appendix 1/Derivatives. We define the derivative of a function as the limit of the ratio of the change in the function to the change in the variable, as the change in the variable goes to zero. An example. Quantity B is found to depend on quantity A such that B is always the square of A:

A ... B
0 ... 0
1 ... 1
2 ... 4
3 ... 9
etc.

In this case, x = A and f(x) [the function of x] = B. Look at what happens at the point A = 2 (for example). At x = 2, f(x) = 4. Changing x very slightly, by only 0.01, we have delta x = 0.01, so that x + delta x is 2.01. Then f(x + delta x) = f(2.01) = the square of 2.01 = 4.0401. The difference between f(x + delta x) and f(x) is the difference between 4.0401 and 4, which is 0.0401. Dividing it by delta x we have 0.0401/0.01 = 4.01, meaning that the CHANGE in the function divided by the CHANGE in x is nearly 4 when x = 2. If we had made delta x much smaller, we would have got exactly 4, which happens to be 2 times x at x = 2.
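The table-and-limit argument above can be replayed in a few lines of code:

```python
def f(x):
    # quantity B as the square of quantity A
    return x * x

x = 2.0
dx = 0.01
ratio = (f(x + dx) - f(x)) / dx
print(ratio)  # close to 4.01, as in the worked example

# shrinking delta x drives the ratio toward the derivative 2*x = 4
for dx in (1e-3, 1e-5, 1e-7):
    print((f(x + dx) - f(x)) / dx)
```

Each smaller step prints a ratio closer to 4, which is the limiting value the text describes.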
Modern Physics/The Uneven Dumbbell. The kinetic energy and angular momentum of the dumbbell may be split into two parts, one having to do with the motion of the center of mass of the dumbbell, the other having to do with the motion of the dumbbell relative to its center of mass. To do this we first split the position vectors into two parts. The centre of mass is at the mass-weighted average of the two positions, so we can define new position vectors, giving the position of the masses relative to the centre of mass, as shown. The total kinetic energy is the sum of the kinetic energy the dumbbell would have if both masses were concentrated at the center of mass, the "translational kinetic energy", and the kinetic energy it would have if it were observed from a reference frame in which the center of mass is stationary, the "rotational kinetic energy". The total angular momentum can be similarly split up into the sum of the angular momentum the system would have if all the mass were concentrated at the center of mass, the "orbital angular momentum", and the angular momentum of motion about the center of mass, the "spin angular momentum". We can therefore assume the centre of mass to be fixed. Since ω is enough to describe the dumbbell's motion, it should be enough to determine the angular momentum and internal kinetic energy. We will try writing both of these in terms of ω. First we use two results from earlier to write the angular momentum in terms of the angular velocity. The first term in the angular momentum is proportional to the angular velocity, as might be expected, but the second term is not. What this means becomes clearer if we look at the components of L. For notational convenience we'll write these six numbers as constants, reflecting the geometry of the dumbbell. This we recognise as being a matrix multiplication, where the nine coefficients of the matrix I are called "moments of inertia". By choosing our axes carefully we can make this matrix diagonal.
For example, if the dumbbell is aligned along the "x"-axis, the matrix is diagonal and the "xx" entry vanishes: rotating the dumbbell around that axis has no effect. The relationship between the kinetic energy, "T", and ω quickly follows. On the right hand side we immediately recognise the definition of angular momentum. Substituting for L, and using the definition above, this reduces to an expression in which the "moment of inertia about the axis n" is a constant. If the dumbbell is aligned along the "x"-axis as before, we get the corresponding simplified expressions. These equations of rotational dynamics are similar to those for linear dynamics, except that I is a matrix rather than a scalar.
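A sketch of the moment-of-inertia matrix for a concrete uneven dumbbell (the masses and separation are assumed values, not from the text), using the standard point-mass formula I_jk = Σ m (|r|² δ_jk − r_j r_k) with positions measured from the centre of mass:

```python
m1, m2 = 1.0, 3.0   # the two (unequal) masses, kg (assumed)
d = 2.0             # separation along the x-axis, m (assumed)

# positions relative to the centre of mass, on the x-axis
x1 = -m2 * d / (m1 + m2)
x2 = m1 * d / (m1 + m2)
masses = [(m1, (x1, 0.0, 0.0)), (m2, (x2, 0.0, 0.0))]

def inertia_matrix(bodies):
    # accumulate I_jk = sum_i m_i * (|r_i|^2 * delta_jk - r_ij * r_ik)
    I = [[0.0] * 3 for _ in range(3)]
    for m, r in bodies:
        r_sq = sum(c * c for c in r)
        for j in range(3):
            for k in range(3):
                I[j][k] += m * ((r_sq if j == k else 0.0) - r[j] * r[k])
    return I

I = inertia_matrix(masses)
for row in I:
    print(row)
```

The matrix comes out diagonal with I[0][0] = 0: because the dumbbell lies along the x-axis, rotation about that axis carries no angular momentum or kinetic energy, matching the remark above. The other two diagonal entries both equal the reduced mass m1·m2/(m1+m2) times d².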
PHP Programming/Beginning with "Hello World!". The Code. Simple Hello World. This is as basic as PHP gets. Three simple lines: the first line identifies that everything beyond the "<?php" tag is PHP code (until the end of the file, or until a "?>" tag), and the second and third lines write a text greeting on the web page. This next example is slightly more complex and uses variables. Hello World With Variables. The previous example contained two outputs. PHP can output HTML that your browser will format and display. The "PHP Output" box is the exact PHP output. The "HTML Render" box is approximately how your browser would display that output. Don't let this confuse you; this is just to let you know that PHP can output HTML. We will cover this much more in depth later. New Concepts. Variables. Variables are the basis of any programming language: they are "containers" (spaces in memory) that hold data. The data can be changed, thus it is "variable". If you've had any experience with other programming languages, you know that in some of them you must define the type of data that the variable will hold. Those languages are called "statically-typed", because the types of variables must be known before you store something in them. Programming languages such as C++ and Java are statically-typed. PHP, on the other hand, is "dynamically-typed", because the type of the variable is linked to the value of the variable. You could define a variable for a string, store a string, and then replace the string with a number. To do the same thing in C++, you would have to cast, or change the type of, the variable, and store it in a different "container". All variables in PHP follow the format of a dollar sign ($) followed by an identifier, i.e. $variable_name.
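A minimal sketch of dynamic typing in practice (the variable name and the values stored in it are assumptions, not from the original listing):

```php
<?php
// $thing starts out holding a string...
$thing = 'forty-two';
echo $thing, "\n";

// ...and can later hold an integer: no cast and no separate
// "container" is needed, because PHP is dynamically typed
$thing = 42;
echo $thing, "\n";
```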
These identifiers are case-sensitive, meaning that capitalization matters, so $wiki is different from $Wiki. Real world analogy. To compare a variable to real world objects, imagine your computer's memory as a storage shed. A variable would be a box in that storage shed and the contents of the box (such as a cup) would be the data in that variable. If the box was labeled "kitchen stuff" and the box's contents were a cup, the PHP code would be: $kitchen_stuff = 'cup'; If I then went into the storage shed, opened the box labeled "kitchen stuff", and replaced the cup with a fork, the new code would be: $kitchen_stuff = 'fork'; Notice the = in the middle and the ; at the end of the code block. The = is the assignment operator, or in our analogy, instructions that came with the box stating "put the cup in the box". The ; indicates the end of the statement, or in our analogy, finishing up with what you are doing and moving on to something else. Also notice the cup was wrapped in single quotes instead of double. Using double quotes would tell the PHP parser that there may be more than just a cup going into the box and to look for additional instructions. $bathroom_stuff = 'toothbrush'; $kitchen_stuff = "cup $bathroom_stuff"; //$kitchen_stuff contents is now cup toothbrush Single quotes tell the PHP parser that it's only a cup and not to look for anything more. In this example the bathroom box that should've had its contents added to the kitchen box has its name added instead: $bathroom_stuff = 'toothbrush'; $kitchen_stuff = 'cup $bathroom_stuff'; //$kitchen_stuff contents is now cup $bathroom_stuff So again, try to visualize and associate the analogy to grasp the concept of variables with the comparison below. Note that this is a real world object comparison and NOT PHP code.
Computer memory (RAM) = storage shed
Variable = a box to hold stuff
Variable name = a label on the box such as "kitchen stuff"
Variable data = the contents of the box such as a "cup"

Notice that you wouldn't name the variable after the box itself, as the relationship between the variable and the box is represented by the identifier and how the data is stored in memory. For example, a constant and an array can both be considered types of variable when using the box analogy, as they are all containers that hold some sort of contents; the difference is in how they are defined to handle the contents of the box. Variable: a box that can be opened while in the storage shed to exchange the contents of the box. Constant: a box that cannot be opened to exchange its contents. Its contents can only be viewed, not exchanged, while inside the storage shed. Array: a box that contains one or more additional boxes inside the main box. To complicate matters for beginners, each additional box may contain a box as well. In the kitchen stuff box we have two boxes, the clean cup box $kitchen_stuff["clean_cup"] = 'the clean cup'; and the dirty cup box $kitchen_stuff["dirty_cup"] = 'the dirty cup'; More on variables, from the PHP manual. The print and echo statements. Print is the key to output. It sends whatever is in the quotes (or parentheses) that follow it to the output device (browser window). A similar function is echo, but print allows the user to check whether or not the print succeeded. The quoted text is treated as if it were a string, and thus can be used in conjunction with the concatenation operator (joining two strings together) as well as anything that returns a string value. The dot symbol concatenates two strings. In other programming languages, concatenating strings is done with the plus symbol, and the dot symbol is generally used to call methods of classes. Also, it might be useful to note that under most conditions echo can be used interchangeably with print.
print returns a value, so it can be used to test whether the print succeeded, while echo assumes everything worked; under most conditions there is nothing we can do if echo fails. We will use echo in most sections of this book, since it is the more commonly used statement. It should be noted that while echo and print can be called in the same way as functions, they are, in fact, language constructs, and can be called without the brackets. Normal functions (almost all others) must be called with brackets following the function identifier.
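A short sketch of the difference (the output strings are assumed examples):

```php
<?php
// print returns 1, so the result can be stored and tested
$ok = print "Hello World!\n";
if ($ok === 1) {
    echo "print succeeded\n";
}

// echo returns nothing, and both are language constructs,
// so the brackets are optional
echo "no brackets needed\n";
print("brackets work too\n");
```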
PHP Programming/Get Apache and PHP. Get Apache. To get Apache, first you must go to the Apache website. From there, find the section for the HTTP Server Project, and then the download page. Unless you have an understanding of compiling an executable from the source code, be sure you download the binary (for Windows users, I recommend the latest (2.0.52) MSI installer package). Once you've obtained an Apache installer, whether an EXE or an MSI or what have you, run it. Apache will prompt you (eventually) for three pieces of information. Following this are, very basically, your choices regarding what to input: When given an option between running manually when started and running as a service, I recommend running Apache as a service. This means that it will run when Windows begins, saving you the trouble of using the Start menu to start it every time you want to use it. To start Apache manually: Start > All Programs > Apache... > Control Apache Server > Start Apache In Console. "Note: you will also see some other options, like an option to stop Apache and an option to restart Apache. You will need to be able to control the server later. Alternatively, when I run Apache, I get an icon in the system tray next to the clock. I can right-click this icon and it has options to stop and restart the Apache server. This system tray icon should appear by default on the Windows installation." Once the install is finished, you'll have Apache installed. However, it's not yet configured. Before we do so, though, let's test Apache to see if the installation went according to plan. If the server is started, you should now be able to run your preferred browser and type "http://localhost/" or, if your computer is on a network, the name of the computer (in my case "http://dellpc/"). You should see a page with the message "If you can see this, it means that the installation of the Apache software on this system was successful." Congratulations! Configure Apache.
First, you must set up a location for your files to be stored. I created a folder in an easy-to-remember and easy-to-type location: all of my documents are stored in the folder "C:/Web/". In this folder, I also included a shortcut to the httpd.conf document in the Apache folder for easy modification. This httpd.conf document is located in the conf directory of wherever Apache is installed; on my computer this location is "C:/Program Files/Apache Group/Apache2/conf/". Regardless of where it is, find it and open it before continuing. This file is the primary (if not only) configuration file for your Apache server. The size and number of words look intimidating, but really most of them are comments; any line that begins with a hash mark / pound sign (#) is a comment. Find (using Ctrl+F) "DirectoryIndex" and you will eventually see a line that reads DirectoryIndex index.html index.html.var. We are going to change that to read DirectoryIndex index.html index.html.var index.php index.htm. This means that if an index.html is not found in your web directory, the server will look for an index.php, and then will look for index.htm if index.php is not found. Go ahead and save the file. Fantastic. For the changes to take effect, you must restart the server. To define where your web folder is, find (via Ctrl+F) "DocumentRoot". Replace what follows "DocumentRoot" in quotes with the full path to your web directory. If you're using C:/Web/ as your web directory, your line would read DocumentRoot "C:/Web/". Scroll down a tad to find the comment line that reads "This should be changed to whatever you set DocumentRoot to." Change the following line to read <Directory "C:/Web/"> or whatever you set DocumentRoot to. Testing Apache. You should have a functioning Apache server now. You can test this by first restarting Apache, then placing an HTML file in your web directory named "index.htm" and then accessing it by opening your browser and browsing to http://localhost/.
If you see your index.htm, excellent work. "Note: for a while, I would see the Apache test page if I just went to http://localhost/ or http://dellpc/. To see my index page, I would have to go directly to that file, i.e. http://localhost/index.htm. Eventually, this just stopped happening. I'm not sure what happened." "This probably happened because the Apache test page was cached. This means your web browser had stored a copy of it locally and was serving that file instead of the real webpage. Hitting refresh should fix this problem." Since Apache is configured and working, all that's left is to download, install, and configure PHP, and then reconfigure Apache to use it. Get PHP. The PHP website is the home of PHP on the web. There you can download PHP and also find the PHP manual. In any language, having a manual is a huge help. Navigate to the downloads page and find the latest ZIP package. At the time of this writing, the current version is 4.3.9, and the ZIP package is here. Unzip, via WinZip or WinRAR or PKUnzip or whatever decompressing program you use, to the root (C:/, usually) directory. It will leave a folder called "php-...". Rename this folder to "php", and it is in this C:/PHP/ directory that your new script interpreter now resides. "Note: There is also an installer available for PHP, but I do not recommend this, as using it will lessen your knowledge of how PHP works." "PHP 5.0.2 is also available for download. This is a newer code base and generally has greater performance and more capabilities than the 4.x.x line. It is generally advised that you use the 5.x.x line in preference to 4.x.x. The code for PHP5 is very similar to the code for PHP4, and everything covered in this book should work under both environments." Configure PHP. In your C:/PHP/ directory, find the files called "php.ini-dist" and "php.ini-recommended". These are two separate files included with PHP that contain separate configurations depending on your needs.
The PHP website recommends you use the recommended version, so you need to rename this file to "php.ini". At this stage you need to make the file accessible to your webserver and the PHP parser. One way is to add the following lines to Apache's httpd.conf:

# If you chose PHP 4, insert this:
LoadModule php4_module "c:/php/sapi/php4apache2.dll"
AddType application/x-httpd-php .php

# If you chose PHP 5, insert this:
LoadModule php5_module "c:/php/php5apache2.dll"
AddType application/x-httpd-php .php

# Configure the path to php.ini
PHPIniDir "C:/php"
AddType application/x-httpd-php-source .phps

In php.ini, find "doc_root". Much like you did with the Apache DocumentRoot directive, make the line read doc_root = "c:\web" or whatever your web directory is. Scroll down a tad (or find) to reach the extension_dir line. After the equals sign, type, in quotes, the directory in which PHP is located. For people following along, this would be C:/PHP/. My extension_dir, for instance, reads extension_dir = "c:\php". Finally, you need to make the relevant DLLs available to the web server. Again, there are a number of different ways of doing this. I recommend the final method because it will allow you to more easily upgrade PHP in the future, should you choose to do so. The DLLs are php4ts.dll and php5ts.dll, depending on the version of PHP that you are installing.
Using GNOME/Main desktop. When you log in to GNOME, you will appear on the main desktop. This chapter will explain the items you are likely to see on the desktop.
Using GNOME/History of GNOME. GNOME was started in late 1997 due to unhappiness over licensing problems with KDE. In 1999 the first version was released. In late 2001 the main focus switched to usability and GNOME 2.0 was released in June 2002. Gnome 2.6, was released in March 2004. Releases occur every six months. The current release is 3.22. See also: http://www.linuxvalley.com/encyclopedia/ldp/lg/issue35/icaza.html
Using GNOME/Differences with Windows. If you are used to Microsoft Windows, here are the differences you will find. Taskbar. There is no taskbar. The clock is located at the top of the screen. GNOME does not provide a list of buttons to switch between each window. Start Menu. By default, there is no start menu in GNOME. However, it is easy to enable an Applications menu that works very much like the Windows start menu. Programs in the Applications menu are listed by category (Graphics, Internet, Office, and so on) rather than in alphabetical order. My Documents. Like modern versions of Windows, GNOME provides four different folders for your documents: Documents, Music, Pictures, and Videos. All of these folders are contained within your “home folder.” To see your home folder, launch the Files app. A new copy of the Files app will display the contents of your home folder by default. My Computer. External disks such as thumb drives appear in the list at the left of the Files app automatically when they are plugged in. Linux doesn’t use drive letters like A: or C:. Instead, drives are given more logical names. Viewing the contents of your hard drive is more complicated. Control Panel. GNOME’s Control Panel is called Settings. You can find a few more options in the “Extensions” app as well. Recycle Bin. GNOME calls this the Trash. (In some languages, it is called the Wastebasket instead.) It works just like the Recycle Bin in Windows. You can find it on the left side of a Files app window. Windows Explorer/File Manager. In GNOME, this is called the Files app. You can find it in the Applications menu or the Activities screen.
OpenOffice.org/Installation. Microsoft Windows. First you need to get an OpenOffice.org installer package. Before downloading the package, you'll need to review the system requirements. After you have read them, download an appropriate installation package, use a P2P download, or order a CD-ROM. You might want to use a download manager if you're using direct download. For some functionality to work, OpenOffice.org needs a Java Runtime Environment (JRE) to be installed on the computer. If you're not sure whether you have a JRE installed, you can choose an OpenOffice.org package that includes the JRE. Installation of both the JRE and OpenOffice.org is as simple as following a fairly standard installation routine in Windows. The installation wizard first explains that it will allow you to install OpenOffice.org. After you click on "Next", it prompts you to read and accept the terms of service. Clicking next again allows you to choose a directory in which to install the software. You are then offered a choice between a full installation of all features and a screen which allows you to choose which components to install. After this point, the installation wizard runs for a time, with several progress bars and a running list of which file is being copied. You can now open OpenOffice.org from the task bar (if you set up the quickstarter during install) or from its program group. From the program group you then select the tool you need. To use the quickstarter, right-click on it and select the tool you need. GNU/Linux. Most GNU/Linux distributions come with OpenOffice.org preinstalled. It is simply necessary to choose the appropriate option for installation. Be aware, however, that some of the packaged files, such as RPMs, do not include some features that conflict with the distro licenses, as they are packaged by the distributors. Java support is one item frequently left out of these versions.
However, some distributions configure it to run with an alternative free software Java Runtime Environment from the Free Software Foundation. It can be installed in RPM or DEB form manually by navigating to the same download page as for installing under Windows, and selecting the operating system as Linux after choosing the language, then choosing whether you want to download an RPM (for Red Hat, Fedora Core, SUSE, Mandriva, etc.) or DEB (for Debian, Ubuntu, etc.) file. This can then be installed with the package manager provided with your distribution. OpenOffice.org can also be downloaded in source code form by choosing 'Source and Solver' from the main download page, but this isn't necessary for most users not interested in development. Debian and Derivatives. Debian and its derivatives (e.g. Ubuntu) use apt-get, aptitude, and synaptic as their package managers. You can install OpenOffice.org by running either of the corresponding install commands in the terminal, or by using the graphical package manager. Mac OS X. In order to run OpenOffice.org on Mac OS X, X11 is required. Mac OS X versions before Tiger (10.4.x) required a separate application to be downloaded from Apple. With Tiger, X11 is available on the OS X install disk. You can find it in the System/Installation/packages/ folder, called X11User.pkg; run this package to install X11. There are community builds of Universal Binary OpenOffice.org for Intel-based Macs; however, the PPC versions will run in Rosetta anyway and are QA'd, which the Intel builds are not at the time of writing. Once X11 is installed on the machine, downloading and installing OpenOffice.org is like any other Mac application. Go to the OpenOffice.org web site and follow the instructions to download the application. Once this has finished, double-click the "OOo_2.0.1_MacOSX_install_en-US.dmg" (or "OOo_2.0.3rc3_MacOSXIntel_en-US.dmg") package and then drag the OpenOffice.org icon to your Applications folder.
When the copying process has finished you can click the eject button in the Finder, and the dmg package can be deleted. Click the OpenOffice.org icon to run the application, optionally register, and you're ready to go. Solaris. For now, see http://wiki.services.openoffice.org/wiki/Documentation/Administration_Guide/Solaris
Using GNOME/Login. When you turn on your computer, you see the operating system load. Once the system has started, the computer will open the GNOME user interface. GNOME runs atop the operating system and provides a Graphical User Interface (GUI) for the user to interact with. The login screen is the first screen a user will see when GNOME starts. Types of login screen. There are two types of login screen, standard and graphical. GNOME allows the users to select a login theme to customize their computer. Different operating systems provide different “preferred” graphical themes. A graphical login screen includes a “face browser”, which lets the user visually select their user name, while a standard login screen simply provides text boxes to enter a user name and password. There is no OK button on the login screen; instead, you press Enter once you have entered your password. If you entered your password correctly, GNOME will display the main desktop. Shutting down the computer. But suppose you want to shut down your computer instead. There is a way to do this from the login screen. In the top-right corner of the login screen you will see two icons. We will focus on the one with the power-button icon. (The person icon, if clicked, reveals a list of accessibility-related features, such as High Contrast or Mouse Keys.) A dropdown will appear. Confusingly, all you will see there is a button labeled “Dark Style” and another power-button icon. To shut down the computer, you must click this "second" icon to display the restart and power-off (shut down) commands. Clicking Suspend will put the computer to sleep.
Using GNOME/Installing. If you do not have GNOME installed on your system, there are many ways to install it. With a Linux distribution. Many Linux distributions offer GNOME as part of their package selections. Please check the documentation of your distro to see if GNOME is an available desktop environment. Garnome. Garnome is a script that lets you compile GNOME. Live CD. If you want to try GNOME without installing, try a live CD that uses it, such as Gnoppix, Gnome Morphix and Ubuntu Linux. If you want to easily compare GNOME and KDE, try a Fedora LiveCD - you can select which desktop to use at login time. Other Operating systems. Sun Microsystems' Solaris offers GNOME. BSDs can install it via ports or pkgsrc: just find the GNOME directory and type make install.
Using GNOME/File manager computer. The Computer icon is the gateway to accessing files outside your home folder. Using Computer, you can access files stored on CD-ROMs, floppy disks, USB keys and on your network. To access it:
Using GNOME/Accessibility. GNOME is designed for all users, including users with disabilities that make using a computer difficult. This section will cover accessibility tools in GNOME.
Using GNOME/Other Jargon. Controls. "Controls" or "Widgets" are elements that make up the desktop and applications. Dialogs. Dialogs are windows that ask you a question or provide information. More Glossary