The Teacher Handbook includes activities for teaching the skills of doing mathematics research. Although students can, and should, study these skills individually, the skills are best understood and appreciated in the broader context of doing research. Thus, any research strand should begin with a small, accessible investigation that highlights the stages of doing research. Once students have an example of research to which they can refer, they will more readily connect each activity to that larger understanding and retain its lessons. Starting with shorter research experiences also helps students gradually build up their persistence with open-ended projects.

Many problems can be used as introductory research explorations. The main criteria for these problems are that they be attention-grabbing (either because of their aesthetic appeal or because they are interestingly strange) and readily explored with minimal technical content, and that a class can generate conjectures and new questions in an hour or two of work. A model introductory lesson is provided below (see Hilgemeier's Likeness Sequence), the format of which can be used with other appropriate questions. Some alternative problems from the Making Mathematics materials include Yucky Chocolate, Connect the Dots, and Trains. The role of problem-posing can be emphasized early on by beginning with a research setting rather than a question (see Using Research Settings in Problem Posing and the Research Setting List).

Whichever activity your class undertakes, you can focus students on the ideas they have had by using the simplified diagram of the research process (see Figure 1). Give each student a copy of this handout and ask them to match their comments and decisions with the steps in the diagram. If they took any research steps that are absent from the diagram, have them draw in those additional details.

Figure 1. The Research Process.

This activity introduces many of the overarching ideas of mathematics research in a two- or three-period sequence. It uses a setting rather than a question so that students will be encouraged to pose a range of mathematical questions. Software is available for further explorations and testing of conjectures.

Goals for This Activity

1
11
21
12, 11
11, 12, 21*
31, 22, 11
13, 11, 22, 21
11, 13, 21, 32, 11

* Read in pairs when describing the line above, but read as 111, 22, 1 when described by the line below.

Figure 2. The First Eight Terms of Hilgemeier's Likeness Sequence

This lesson revolves around Hilgemeier's Likeness Sequence (see Figure 2, above), which is generated according to the following rule: the first line is an arbitrary starting point, and each line thereafter is a description of the string of digits in the prior line, read left to right. For example, the third line reads "Two 1s," which is a description of the second line. The fifth line reads "One 1, one 2, two 1s," which describes the line before it, grouping consecutive strings of identical digits.

Explain to the class that you are going to present a sequence, each term of which is a sequence of numbers. Ask them to study it and try to determine how it is generated, what properties it may have, and what patterns it may exhibit. Display the first five lines (see Figure 3 below) and ask students to copy them and then spend three minutes trying to anticipate the next term in the sequence.

1
11
21
1211
111221

Figure 3. The First Five Terms of Hilgemeier's Likeness Sequence
As students study the terms, circulate and listen for questions or observations (e.g., "The lines are getting longer," or "Are there only 1s and 2s?"). Write these on the board. Solicit students' guesses and explanations for the next term. These guesses will reveal for the class the kinds of patterns that might be noticed, for example, that there always seems to be a "1" at the end of a term. Write the next few terms on the board (see Figure 4, below), pausing after each to note any stray comments worth adding to the board (e.g., "Aha, a 3! Will a 4 come next?").

312211
13112221
1113213211
31131211131221
13211311123113112211

Figure 4. Terms 6 through 10

It is quite difficult to unravel the rule behind this sequence. The purpose of this gradual revelation is to bring out how the class thinks about numerical patterns and to list their initial observations and questions about the sequence's behavior. It is fine if they do not figure out the rule themselves (and the mild frustration they experience is also a gentle introduction to the pace of real mathematics investigations). After 10 or so minutes of (most likely) unsuccessful searching for a rule that explains the full sequence, add a few more lines, add commas between the pairs (see Figure 5, below), and read the first few lines out loud, given this punctuation (the fifth line should be read "One one <pause> one two <pause> two one," not "eleven, twelve, twenty-one").

1
11
21
12, 11
11, 12, 21
31, 22, 11
13, 11, 22, 21

Figure 5. The First Seven Terms of the Sequence, Punctuated

Once students have generated a few questions and observations about the sequence, you can add plurals to your reading of the list: "Three ones <pause> two twos <pause> one one." If a student thinks he or she has figured out the pattern, ask the student for the next line without telling the class the rule. (This is mostly for dramatic effect, but the drama serves to add to the surprise and outrage, both of which have value.) If the student generates a correct line, have her or him reveal the rule; otherwise, give someone else a chance. If no one figures it out, tell the class how each line is generated.

Where's the Math?

Often a student will object, either vociferously or in a mutter, "This is supposed to be math class, not English!" This protestation is your cue to point to the list of questions and observations on the board. Is "Will a 4 appear?" an English question or a mathematical one? Acknowledge that the sequence is a rather odd mathematical object, but it is one, and it led the class to raise interesting mathematical observations and questions. The absence of an arithmetic rule for the sequence did not exhaust the range of possible mathematical explanations. Point out that mathematics is about patterns, structures, and reasoning, and that we are not limited to numbers and shapes as the subjects of our studies. (In this case, the digits are playing dual roles: as numbers, if they are the first part of a pair, and as symbols, if they are the second part.) In addition to noting the terminal "1" of each term and wondering about the appearance of a "4," students have asked a variety of other questions during the early stage of this activity.

Have everyone write down the next three or more lines of the sequence. Make sure they understand the rule. Give the students seven minutes (see Assessment and the Use of Class Time for an explanation of this specific interval of time) to write additional lines, test any of the current questions, or pose some new questions about the sequence.
Encourage them to check their new terms with a partner. When the class reconvenes, solicit any new questions and add them to the list. Provide the following definitions informally: The individual terms each serve as an example of what terms in the sequence might be like. They also have a relationship to one another in establishing broader patterns. The class saw the sequence and noted, "Hmm, the digits do not seem to be going beyond 3." Such a comment is an observation, based on several examples, that reflects a general pattern or property of the objects under investigation. If, after further exploration and consideration, we begin to believe an observation will always hold true, then we state it clearly and call it a conjecture. The question "Does a 4 appear?" might be turned into the conjecture "A 4 will not appear in this sequence." A conjecture is a claim that we believe is likely to be true but have not yet proven true. Once a conjecture is proven, it is called a theorem.

Ask students if they have thoughts about how to answer any of the questions posed. If none are proffered, prompt them with questions such as, "What would you do to find out if a 4 appears?" or "How could you determine if the final two digits continue their 11, 21, 11, 21, . . . pattern?" The goal of these questions is not to push students into rigorous proofs, but to distinguish the roles that different types of evidence play in mathematical thinking. If a student suggests that the class extend the sequence in order to verify a conjecture, have the class put additional terms on the board and ask whether these new terms answer any of their questions. Should a continuation of the pattern convince them that a 4 will never appear? (Many may say yes.) What would they require to be sure that the conjecture was true or false? Students may suggest that its falsity merely requires a single example, a term with a 4 in it. (The falsity of the conjecture might also be shown through an existence proof, which does not give a particular term but is able to show that, for some reason, a 4 must ultimately appear. This approach can be left for a later discussion.) Could examples ever prove the conjecture true? Many more terms might appear persuasive and make us ever more supportive of our conjecture, but a claim about an infinite set of elements cannot be proven with a finite sample (note that the claim that a 4 does appear may be a claim about only a single term; see the activity Conjectures Are Not Theorems in the Examples, Patterns, and Conjectures section).

The first homework assignment depends on how far the discussion progressed during class. Students might be asked to extend the sequence. They should be asked to generate at least three new questions about the sequence. They can also be given the option of trying to answer any questions that have not yet been addressed. The sophistication of the questions students ask about the sequence tends to grow as they immerse themselves in the setting. One question that students have posed during later stages of this investigation asks what happens if the sequence is started from a different first line, such as 2. This question is wonderful. It is not a question about the specific sequence, but about the process involved. It is the start of a generalization of the original setting. All of the other questions can be applied to this new sequence (2, 12, 1112, 3112, 132112, etc.) and their answers in each case compared.
When students take the initiative to generalize or modify a research problem, their contribution should be highlighted as an example of the broader context in which research questions can and should be viewed (see Problem Posing). Once the question list has grown, it is worth reminding the class that any initial skepticism about the mathematical nature of the problem should be gone. In fact, by this point, students are more likely to be surprised at the amount of time it would take to even try to resolve the questions.

At this juncture, students can individually or in pairs choose one of the questions to investigate further. Students can study the growth of L(n), the number of digits in the nth term, or the appearance of patterns in the terms themselves with the program SequenceGenerator. Students can turn some of their questions into conjectures and try to come up with reasons (other than a preponderance of examples) for believing them (see the Sample Student Proofs, below). The emphasis at this point should not be on rigorous proof but on clear communication, on group understanding of the conjectures and reasons that students produce, and on understanding the relationship between data, conjectures, and theorems.

Whether further time is devoted to exploring this sequence or extensions of the setting should be based on class enthusiasm and the likely productivity of the questions raised. Finding a function for L(n) by transforming familiar functions (polynomial and exponential) or with curve-fitting algorithms (such as are built into most graphing calculators) might be of interest, but an attempt to find a proof of this behavior is not a constructive use of time. If the class is enjoying the questions, spending further time exploring them will send the message that their mathematical enthusiasm is appropriate and encouraged (see Student and Teacher Affect).

Sample Student Proofs

These proofs are approaches that inspired students developed to support their conjectures about the sequence. They are provided for your interest, rather than as examples of what a typical class produces. Note that while barely any arithmetic is necessary, the reasoning did require carefully keeping track of possible cases and encouraged the introduction of symbols. See Reading Technical Literature for suggestions on wading through dense mathematical reasoning. Proofs of some of the claims about this sequence require both elaborate case-by-case schemes (see Conway's Constant and its links to Conway's "periodic table") and more complex mathematics. This introductory activity need not lead to proofs and should not be used as the point to insist on perfect form and rigor. If a student does present a proof, encourage the class to study it, check its claims, and offer constructive feedback. Are the explanations clear? Convincing? Do they cover all possibilities? Use this opportunity to discuss the value of proof (see Proof and Proof Techniques) and note whether the arguments your students present offer insight into why their theorem is true (as the two below do).

First Student Proof

Can the original sequence produce a 4? No. The digits in a term either refer to a symbol (S) in the previous term or are a count (C) of how many times such a symbol appeared consecutively. Since there is no 4 to begin with, the first appearance of a 4 would have to come from four consecutive appearances of some other symbol. The terms are made of pairs of digits: C1S1C2S2 . . . CnSn. Any string of four consecutive identical digits must include two adjacent C and S positions holding the same digit (either . . . CiSiCi+1Si+1 . . . or . . . SiCi+1Si+1Ci+2 . . .).
A term cannot have the same value for S twice in a row, because you are counting strings of symbols until a new string begins (if 2111 were describing three identical symbols in a preceding line, it should just be 31). So four identical symbols can never appear consecutively, and therefore a 4 will not appear.

Second Student Proof

Can L(n) be less than L(n - 1)? Not for n >= 3. There are three types of CS pairs in the terms of the sequence: Ci can equal 1, 2, or 3. CS pairs beginning with a 1 are one digit longer than the substring they describe. CS pairs beginning with a 2 are the same length as the substring they describe. CS pairs beginning with a 3 are one digit shorter than the substring they describe. Replacing a string of three identical digits with a pair, such as 32, is the only way a term can shrink. So L(n) can be less than L(n - 1) only if there are more CS pairs beginning with a 3 than there are pairs beginning with a 1.

Any string of three consecutive identical digits must begin at a C position (or you will have two consecutive, identical S values). We are trying to see if we can fit in more three-digit substrings than one-digit ones, so let's begin a term with a three-digit string: aaa. The fourth digit of this term must be different (aaab) and cannot be part of a three-digit string (for the two-S reason), so we have aaabc . . . or aaabb . . . In the aaab case, the one b will produce a two-digit descriptive CS pair, and so aaab will be replaced by the equally long 3a1b substring. In the latter case, we have one fewer digit, but we began with a five-digit term, and odd-length terms are only possible as our starting term (so L(2) can be shorter than L(1), but not thereafter). If n > 2, then the nth term is describing an even-length term, and the aaabb substring must be followed either by a terminating single digit (which yields a descriptive pair and cancels out the "benefit" of the initial triple) or by another pair, and so on until some final single digit. Not until a single digit has been added, giving a substring that can begin in a C position, can a new triple be appended. So each triple must be followed by a complementary single (with or without an intervening sequence of pairs), and it is impossible to have more triples than singles. Therefore, L(n) must be at least as large as L(n - 1).

Another question students may consider is, "Can three 3s appear in a sequence?"

An extension students may explore is Robinsonizing, which poses this question: What if each line tells, in order, how many total 1s, 2s, 3s, etc. appear in the prior line (with no attention paid to the order of the digits)? For example:

0 1s, 0 2s, 0 3s, 0 4s, 0 5s, 0 6s, 0 7s, 0 8s, 0 9s, 0 0s
1 1s, 1 2s, 1 3s, 1 4s, 1 5s, 1 6s, 1 7s, 1 8s, 1 9s, 11 0s
12 1s, 1 2s, 1 3s, 1 4s, 1 5s, 1 6s, 1 7s, 1 8s, 1 9s, 1 0s
11 1s, 2 2s, 1 3s, 1 4s, 1 5s, 1 6s, 1 7s, 1 8s, 1 9s, 1 0s

How does this sequence continue? What happens if we choose a different starting string? (See Douglas Hofstadter's Metamagical Themas, pages 27 and 390-92, for more information.)

Berzsenyi, George (1993, Sept./Oct.). Endless self description: Finding the limits in Hilgemeier's "likeness sequence." Quantum, 17.
Conway's Constant (2000). Available online at http://www.mathsoft.com/asolve/constant/cnwy/cnwy.html.
Conway, John (1990, Nov./Dec.). Play it again. Quantum, 31, 63.
Hofstadter, Douglas (1985). Metamagical themas. New York: Basic Books.
Look and say sequence in Weisstein, Eric (1999). Concise encyclopedia of mathematics CD-ROM. USA: CRC Press.
Also available online at http://mathworld.wolfram.com/LookandSaySequence.html.

Encourage students with a programming background to create an application that generates the terms of the Likeness Sequence. Alternatively, you can provide your class with the following Logo programs to use in their explorations. A program makes it possible to study terms of the sequence that are much longer than those students can reasonably generate by hand. See http://el.www.media.mit.edu/groups/logo-foundation/products/software.html for more information about Logo programs, including some that can be downloaded for free.

The SeeAndSay procedure accepts a list as input and produces the list that describes it. For example, SeeAndSay [3 8 8 1] outputs [1 3 2 8 1 1]. The SequenceGenerator procedure takes a starting list and the number of lines to be displayed and generates the first n lines beginning with that initial list. For example, SequenceGenerator [3 2] 5 produces:

3 2
1 3 1 2
1 1 1 3 1 1 1 2
3 1 1 3 3 1 1 2
1 3 2 1 2 3 2 1 1 2

to SeeAndSay :list
; an empty list describes itself as an empty list
if :list = [] [output []]
; a single remaining digit is described as "one" of that digit
if (butfirst :list) = [] [output se "1 first :list]
; a digit followed by a different digit: "one" of it, then describe the rest
if not ((first :list) = first butfirst :list) [output (se "1 first :list SeeAndSay butfirst :list)]
; exactly two identical digits remain: "two" of them
if and ((first :list) = first butfirst :list) ((butfirst butfirst :list) = []) [output (se "2 first :list SeeAndSay butfirst butfirst :list)]
; three identical digits in a row: "three" of them, then describe the rest
if and ((first :list) = first butfirst :list) ((first :list) = first butfirst butfirst :list) [output (se "3 first :list SeeAndSay butfirst butfirst butfirst :list)]
; otherwise, two identical digits followed by a different one: "two" of them
output (se "2 first :list SeeAndSay butfirst butfirst :list)
end

to SequenceGenerator :list :n
; display the current line, then describe it to get the next one
print :list
if :n > 1 [SequenceGenerator SeeAndSay :list :n - 1]
end
http://www2.edc.org/makingmath/handbook/Teacher/IntroductoryExplorations/IntroductoryExplorations.asp
For forty-three years, although no war between the superpowers of the United States and the Soviet Union was ever officially declared, the leaders of the democratic West and the Communist East faced off against each other in what is known as the Cold War. The war was not considered “hot” because neither superpower directly attacked the other. Nevertheless, despite attempts to negotiate during periods of peaceful coexistence and détente, these two nations fought overt and covert battles to expand their influence across the globe. Cold War scholars have devised two conflicting theories to explain what motivated the superpowers to act as they did during the Cold War. One group of scholars argues that the United States and the Soviet Union, along with China, were primarily interested in protecting and advancing their political systems—that is, democracy and communism, respectively. In other words, these scholars postulate that the Cold War was a battle over ideology. Another camp of scholars contends that the superpowers were mainly acting to protect their homelands from aggressors and to defend their interests abroad. These theorists maintain that the Cold War was fought over national self-interest. These opposing theorists have in large measure determined how people understand the Cold War, a conflict that had been a long time in the making.

A History of Conflict

The conflict between East and West had deep roots. Well before the Cold War, the relationship between the United States and the Soviet Union had been hostile. Although in the early 1920s, shortly after the Communist revolution in Russia, the United States had provided famine relief to the Soviets and American businesses had established commercial ties in the Soviet Union, by the 1930s the relationship had soured. By the time the United States established an official relationship with the new Communist nation in 1933, the oppressive, totalitarian nature of Joseph Stalin’s regime presented an obstacle to friendly relations with the West. Americans saw themselves as champions of the free world, and tyrants such as Stalin represented everything the United States opposed. At the same time, the Soviets, who believed that capitalism exploited the masses, saw the United States as the oppressor. Despite deep-seated mistrust and hostility between the Soviet Union and Western democracies such as the United States, an alliance was forged among them in the 1940s to fight a common enemy, Nazi Germany, which had invaded Russia in June 1941. Although the Allies—as that alliance is called—eventually defeated Germany, the Soviet Union had not been completely satisfied with how its Western Allies had conducted themselves. For example, the Soviets complained that the Allies had taken too long to establish an offensive front on Germany’s west flank, leaving the Soviets to handle alone the offensive front on Germany’s east flank. Tension between the Soviet Union and the Western Allies continued after the war. During postwar settlements, the Allies agreed to give control of Eastern Europe—which had been occupied by Germany—to the Soviet Union for its part in helping to defeat Germany. At settlement conferences among the Allies in Tehran (1943), Yalta (February 1945), and Potsdam (July/August 1945), the Soviets agreed to allow the nations of Eastern Europe to choose their own governments in free elections.
Stalin agreed to the condition only because he believed that these newly liberated nations would see the Soviet Union as their savior and create their own Communist governments. When they failed to do so, Stalin violated the agreement by wiping out all opposition to communism in these nations and setting up his own governments in Eastern Europe. The Cold War had begun. During the first years of the Cold War, Soviet and American leaders divided the world into opposing camps, and each side accused the other of having designs to take over the world. Stalin described a world split into imperialist and capitalist regimes on the one hand and Communist governments on the other. The Soviet Union and the Communist People’s Republic of China saw the United States as an imperialist nation, using the resources of emerging nations to increase its own profits. The Soviet Union and China envisioned themselves as crusaders for the working class and the peasants, saving the world from oppression by wealthy capitalists. U.S. president Harry Truman also spoke of two diametrically opposed systems: one free and the other bent on subjugating struggling nations. The United States and other democratic nations accused the Soviet Union and China of imposing their ideology on emerging nations to increase their power and sphere of influence. Western nations envisioned themselves as the champions of freedom and justice, saving the world for democracy. Whereas many scholars see Cold War conflicts in these same ideological terms, others view these kinds of ideological pronouncements as ultimately deceptive. They argue that despite the superpowers’ claims that they were working for the good of the world, what they were really doing was working for their own security and economic advancement.

Two Schools of Thought

Ideological theorists claim that the Soviets and the Americans so believed in the superiority of their respective values and beliefs that they were willing to fight a cold war to protect and advance them. Each nation perceived itself to be in a “do-or-die” struggle between alternative ways of life. According to foreign policy scholar Glenn Chafetz, a leading proponent of the ideology theory:

Ideology served as the lens through which both sides viewed the world, defined their identities and interests, and justified their actions. U.S. leaders perceived the Soviet Union as threatening not simply because the USSR was powerful but because the entire Soviet enterprise was predicated on implacable hostility to capitalism and dedicated to its ultimate destruction. From the earliest days of the Russian Revolution until the end of the cold war, Moscow viewed the United States as unalterably hostile.

Even when both nations were fighting a common enemy, Nazi Germany, the Soviets were certain that the Americans were determined to destroy the Soviet Union. Other scholars argue that the United States and the Soviet Union chose actions that would promote national self-interest, not ideology. That is, the nations were not primarily motivated by a desire to defend capitalism or communism but by the wish to strengthen their position in the world. These scholars reason that the highest priority of every nation is not to promote its ideology but to protect and promote its own self-interest. Thus, these theorists claim, the superpowers advanced their sphere of influence throughout the world in order to gain advantages, such as a valuable trading partner or a strategic military ally.
Moreover, these scholars argue, the superpowers aligned themselves with allies who could protect their interests against those who threatened them. Historian Mary Hampton, a champion of the national interest theory, explains:

Had ideology been the sustaining force of the cold war, the stability and predictability of the relationship between the two states would not have emerged. Their mutual respect for spheres of influence, the prudent management of their nuclear relationships, and their consistent policy of checking global expansions without resort to direct confrontation are best explained by an analysis based on interest-motivated behavior. . . . From 1946 to 1990, the relationship between the United States and Soviet Union included both diverging and shared interests, and it was a combination of these interests that governed their conduct during the cold war.

Although the differences between these two interpretations of Cold War motivations are fairly clear, applying the theories to explain actual events during the period is more complicated. For example, even though a nation might claim that it deposed a leader in a Latin American nation because the ruler was despotic, the real reason might be that the Latin American country had some resource such as oil that the invading nation coveted. Conversely, invading nations are always vulnerable to charges that they are acting in self-interest when in reality nations often do become involved in other countries’ affairs out of a genuine concern about human rights or other humanitarian issues. Both theories have been used to explain many U.S. and Soviet actions during the Cold War, leading to radically different interpretations of events.

The Battle over Europe

Both theories have been used to explain Soviet and U.S. behavior in Europe. Those who believe the Cold War was primarily an ideological battle claim that aggressive Soviet action to quell democratic movements in the nations of Eastern Europe was motivated by the Soviet belief that capitalism harms the masses whereas communism protects them. Capitalism, the Soviets believed, exploits workers, who take home only a small percentage of companies’ profits in the form of wages whereas the owners reap huge financial benefits at the workers’ expense. Under socialism, in contrast, workers own the methods of production and therefore take their fair share of the profits. Thus, ideologically, the Soviet Union believed it was protecting the oppressed workers in the nations of Eastern Europe by opposing democratic movements. Indeed, the Soviet Union’s belief in socialism as the superior economic system informed all of its foreign policy decisions. According to Chafetz, the Soviets believed that “international relations are a reflection of the class struggle in which socialist countries represent the working class and capitalist countries represent the exploiting class. Socialist internationalism referred to the common class interest of all socialist states; these concerns trumped other interests, at least in the minds of Soviet leaders.” According to those who believe ideology motivated actions taken during the Cold War, the United States reacted negatively to Soviet actions in Eastern Europe because it disapproved of the Soviet Union’s undemocratic treatment of Eastern Europeans, who had the right to choose their own systems of governance.
“Moscow’s repression of democratic movements in Eastern Europe,” Chafetz claims, “conflicted with the promises to permit elections that Stalin made at Yalta and Potsdam.” In response to Soviet aggression in Eastern Europe, U.S. leaders publicly denounced Soviet actions and increased U.S. military forces in Western Europe. In June 1961, for example, President John F. Kennedy took a stand against Soviet premier Nikita Khrushchev’s attempt to occupy the city of Berlin. Although Berlin was located within the borders of East Germany, a Soviet satellite, after World War II the Allies had agreed that both East and West would occupy the city (dividing it into East and West Berlin) because Berlin had strong ties with the West. Capitalism and democracy, however, appealed to many East Germans, who fled to West Berlin by the thousands. This embarrassed the Soviets and threatened their hold on Eastern Europe. In June 1961 Khrushchev threatened to forcibly take West Berlin under Communist rule. Kennedy responded to this challenge by increasing America’s combat forces in West Berlin and using billions of dollars approved by Congress to increase U.S. nuclear and conventional weapons throughout Western Europe. Khrushchev’s counterresponse was to divide the city of Berlin with a cement wall, barbed wire, and a column of army tanks that remained until November 1989. Theorists who subscribe to the position that the superpowers were motivated more by national self-interest disagree with the ideological argument used to interpret such events. Hampton maintains:

Arguments that seek to explain the cold-war competition in terms of ideology . . . should anticipate that the United States would have supported democratic reform movements and uprisings throughout Eastern Europe in this period, such as those that occurred in East Germany in 1953 and in Poland and Hungary in 1956. In fact, the Soviet Union resolved these crises [repressed the movements] without intervention from the United States or its Western allies.

Indeed, the United States did not intervene with overt military action in Eastern Europe, taking a more cautious approach to maintain the balance of power between the two superpowers. National interest theorists claim that this stance suggests that the United States was more interested in maintaining its interests than promoting its ideology. Whereas ideological motivation causes nations to break rules and take risks in the name of some higher principles, these theorists say, nations protecting their self-interest do not want to “rock the boat”; thus, countries motivated by self-interest play by the rules and take fewer risks. In consequence, while the Soviet Union marched into the nations of Eastern Europe to crush democratic movements, the United States, fearing international disapproval and hoping to avoid war with the Soviets, declined to intervene.

The Third World

According to theorists who believe ideology drove Cold War strategy, the United States and the Soviet Union both became involved in the third world to expand their spheres of influence, but for different reasons. The Soviets, unable to control Europe, sought to spread their ideology and expand their sphere of influence elsewhere. According to Chafetz:

Stalin and his successors were convinced that the legitimacy of their rule depended on validating Marxist-Leninist predictions of world revolution.
The beginning of the nuclear standoff in Europe [between the United States and the Soviet Union] made it apparent that fomenting revolution in the industrialized, democratic states of the West was either impossible or too dangerous. As a result the Soviets turned their efforts to exporting revolution to less developed countries. They tended to view all anti-Western movement throughout Latin America, Asia, Africa, and the Middle East through the single lens of [Communist leader Vladimir] Lenin’s theory of imperialism. Thus, despite the diverse motives behind revolutions, coups, and civil wars in China, Laos, Cuba, Vietnam, Congo, Ethiopia, Somalia, Afghanistan, Libya, and elsewhere, [Soviet leaders] Stalin, Nikita S. Khrushchev, and Leonid I. Brezhnev characterized them all in anti-imperialist terms.

U.S. involvement in the third world was more complex. Chafetz writes, “Soviet exploitation of decolonization created a painful dilemma for the United States.” Although the United States, which regarded itself as a freed colony, was empathetic toward third world nations seeking self-determination and independence from colonial powers, it also viewed many of the regimes as anti-American. Indeed, the leaders of these third world coups and revolutions were often rebelling against increasing U.S. dominance in world affairs. Moreover, revolutionary leaders, inspired by Communist philosophy and weary of years of oppression at the hands of capitalist, democratic powers, were often attracted to the Soviet economic model. In consequence, the United States found itself in the uncomfortable position of opposing nationalist revolutions in order to contain the spread of communism. National self-interest theorists disagree with this analysis. The fact that the United States did not support these revolutions, they say, proves that the nation was motivated more by self-interest than ideology. If the ideology theory were true, they contend, the United States would have supported revolutions against colonial oppression. The United States had once been a colony and after independence had become a champion of the principle that nations have the right to choose their own systems of governance. Despite its past, the United States did not support these revolutions. Instead, the United States opposed them in order to gain or maintain political and economic allies. Thus, in the eyes of many, U.S. behavior toward the third world was immoral and hypocritical. These theorists believe that the use of less-than-honorable strategies, such as assassinations and secret agreements with repressive regimes, to prevent the success of these national revolutions stained America’s reputation across the globe. Of particular embarrassment were some of the actions taken by the Central Intelligence Agency (CIA).

The Central Intelligence Agency

National self-interest theorists find support for their views when examining CIA actions during the Cold War. Since its creation in 1947, the CIA was used as an instrument to carry out U.S. Cold War strategy, particularly during the 1950s and 1960s. The CIA was initially mandated to gather, evaluate, and disseminate intelligence. However, the vaguely mandated “other functions and duties” beyond its core mission led to the expansion of the CIA’s function to include counterespionage and covert action. Some of these activities were invaluable to America’s security. Foreign policy scholar Loch K. Johnson explains:
“Intelligence-collection activities provided warnings about Soviet missiles in Cuba in 1962. Counterespionage uncovered Soviet agents inside U.S. secret agencies.” Johnson adds, however, that the CIA sometimes used tactics that conflicted with traditional American values. The CIA resorted to assassination plots against foreign leaders and spied on its own citizens. The agency engaged in paramilitary operations in Southeast Asia and abandoned the native people who had helped them to imprisonment, torture, and death when the United States pulled out of the region. Even covert acts that were deemed CIA successes, in historian Benjamin Frankel’s view, were moral failures: “Its role in toppling the ostensibly democratic, though Marxist, government of Guatemala in 1954 seemed to fly in the face of America’s commitment to democracy.” The fact that the administrations of several Cold War presidents approved these tactics suggests that national self-interest, not ideology, motivated CIA action during the Cold War.

The Development of Alliances

National self-interest theorists also find support for their point of view in the formation of alliances among the Communist nations of the East and the democratic nations of the West over the course of the Cold War. These alliances were designed to protect common interests. “Each state began mobilizing other states,” Hampton explains, “trying to form alliances and balance against the other.” To maintain a balance of power, these theorists claim, Western nations created the North Atlantic Treaty Organization (NATO) in 1949. The alliance was created largely to discourage an attack by the Soviet Union on the non-Communist nations of Western Europe. In 1955 the Soviet Union and the Communist nations of Eastern Europe formed their own military alliance to oppose NATO, the Warsaw Pact. Whether or not these alliances were responsible for keeping the peace, the balance of power was in fact maintained. National interest theorists maintain that an unlikely alliance between the United States and China further supports their position. A rift between the Soviet Union and China, the world’s most powerful Communist powers, would make this alliance possible.

A Rift in the East

Most of the Western world viewed China and the Soviet Union as two versions of the same Communist evil, but in reality, Sino-Soviet relations, not unlike those between the Soviet Union and the United States, had been historically uneasy. The two nations shared the longest land border in the world, the source of border disputes since the seventeenth century. Moreover, during the Communist revolution in China, the Soviet Union had initially supported Chiang Kai-shek rather than Mao Tse-tung, who ultimately defeated Chiang Kai-shek and became the leader of Communist China. However, to offer the newly Communist China some security against the United States, in 1950 the Soviet Union signed the Treaty of Friendship, Alliance, and Mutual Assistance with Mao. Despite this alliance, the Soviet Union and China had different ideas about the purpose of communism and the direction it should take. The Soviet Union began to rethink its Cold War strategy, choosing less overtly aggressive means of expanding its sphere of influence to avoid directly antagonizing the United States. China, on the other hand, vigorously opposed this stance, favoring continued aggression toward “imperialist” nations. China even accused the Soviet Union of going soft on capitalism.
China’s vigorous opposition to Western imperialism drove a wedge between the Soviet Union and China. The conflicts between China and the Soviet Union escalated as both vied for control of satellite states. During the late 1960s the Soviet invasion of Czechoslovakia and the buildup of forces in the Soviet Far East led China to suspect that the Soviet Union would one day try to invade it. Border clashes along the Ussuri River that separates Manchuria from the Soviet Union peaked in 1969, and for several months China and the Soviet Union teetered on the brink of a nuclear conflict. Fortunately, negotiations between Soviet premier Aleksey Kosygin and Chinese premier Zhou En-lai defused the crisis. Nevertheless, Zhou and Mao began to rethink China’s geopolitical strategy. The goal had always been to drive imperialist nations from Asia, but such a strategy had led to a hostile relationship with America, the Soviet Union’s enemy. In fact, this strategy had brought China into conflict with the United States in two of the bloodiest clashes of the Cold War, the Korean and Vietnam Wars. However, when President Richard Nixon showed signs of reducing if not eliminating the American presence in Vietnam, China began to see normalization of relations with the United States as a way of safeguarding its security against the Soviet Union. Since this relationship was forged to enhance China’s national security and was created despite ideological differences between the two nations, the alliance between China and the United States supports the claims of self-interest theorists.

The Fall of the Soviet Union

Whereas national self-interest theorists find support for their theory in the development of alliances during the Cold War, ideological theorists find support for their position in the circumstances surrounding the fall of the Soviet Union. When Communist ideology eventually gave way to more democratic ideals in the Soviet Union, the union dissolved and the Cold War came to an end. This change, many argue, can be traced to the efforts of one man, Mikhail Gorbachev. When Gorbachev became leader of the Soviet Union in 1985, he began a political, economic, and social program that radically altered the Soviet government, creating a limited democracy. The nation’s political restructuring began with a newly created Congress of People’s Deputies, which elected Gorbachev executive president. The new government was not without opposition, and remaining hard-line Communists tried to unseat it. The coup failed, however, and shortly thereafter Gorbachev dissolved the Communist Party. Gorbachev tried to create a new Union—the Commonwealth of Independent States—but, explains Chafetz, “this experiment with limited democracy . . . developed a momentum of its own and became too strong for Gorbachev, or his more hardline opponents within the Communist party, to control.” When the commonwealth itself collapsed, the new union dissolved into independent nations. Ideological theorists point to this chain of events as proof that Cold War events were largely driven by ideology. Once the Soviet political system changed, there was no longer an ideological rift between the two nations, and the Cold War ended. For over four decades the United States and the Soviet Union had tried to expand their influence worldwide and in the process came into countless conflicts with one another.
Whereas the Soviet Union pressured the nations of Eastern Europe to become Communist satellites and supported Communist revolutions in Southeast Asia, the United States forged alliances with democratic nations around the world and defended many emerging nations against communism. While trying to interpret these events, Cold War scholars have become divided into two camps: those who think the Cold War powers were acting to further their own belief systems and those who believe the major powers were simply aiming to protect their interests at home and abroad. Which of these theories best explains each superpower’s behavior during the Cold War remains controversial. In Opposing Viewpoints in World History: The Cold War, scholars debate other controversies surrounding the Cold War in the following chapters: From Allies to Enemies: The Origins of the Cold War, Coexistence and Conflict, From Détente to the Cold War’s End, and Reflections: The Impact of the Cold War. The authors express diverse views about the nature of the Cold War and the efficacy and justness of U.S. and Soviet policies. As ideology and national-interest theorists make clear, evaluating the Cold War is an exceedingly complex enterprise.
http://www.enotes.com/cold-war-article
Variables and Data Types

In the previous lesson, we used some values such as 242 or 'James Knight'. These types of values are referred to as constants because we know them with certainty before their use and we don't change them in our statements. If you intend to use a certain category of value over and over again, you can reserve a section of memory for that value. This allows you to put the value in an area of the computer's memory and easily exchange it for another, over and over. To use the same area of memory to store and remove values as needed, the SQL interpreter needs two primary pieces of information: a name and the desired amount of space in memory capable of storing the value.

A variable is an area of memory used to store values that can be used in a program. Before using a variable, you must inform the interpreter of it. This is also referred to as declaring a variable. To declare a variable, use the DECLARE keyword with the following formula:

DECLARE @VariableName

The DECLARE keyword lets the interpreter know that you are making a declaration. The DECLARE keyword is followed by a name for the variable. In Transact-SQL, the name of a variable starts with the @ sign. The name of a variable allows you to identify the area of memory where the value of the variable is stored. While other languages like C/C++, Pascal, Java, or C# impose strict rules on names, Transact-SQL is extremely flexible. A name can even be made of digits only, such as @264, although such a name can create confusion with a normal number. A name can also be made of one or more words. To avoid confusion, we will follow consistent naming rules in our lessons.

As we will see in the next sections, after giving a name to a variable, you must also specify the amount of memory that the variable would need. The amount of memory is also called a data type. Therefore, the declaration of a variable uses the following formula:

DECLARE @VariableName DataType;

You can also declare more than one variable. To do that, separate them with a comma. The formula would be:

DECLARE @Variable1 DataType1, @Variable2 DataType2, @Variable_n DataType_n;

Unlike many other languages such as C/C++, C#, Java, or Pascal, if you declare many variables that use the same data type, the name of each variable must still be followed by its own data type.

After declaring a variable, the interpreter reserves a space in the computer memory for it, but the space doesn't necessarily hold a recognizable value. This means that, at this time, the variable is null. One way you can change this is to give a value to the variable. This is referred to as initializing the variable. Remember that a variable's name starts with @, and whenever you need to refer to the variable, you must make sure you include the @ sign. To initialize a variable, in the necessary section, type the SELECT or the SET keyword followed by the name of the variable, followed by the assignment operator "=", followed by an appropriate value. The formula used is:

SELECT @VariableName = DesiredValue
SET @VariableName = DesiredValue

Once a variable has been initialized, you can make its value available or display it. This time, you can type the name of the variable to the right side of PRINT or SELECT.

After setting the name of a variable, you must specify the amount of memory that the variable will need to store its value. Since there are various kinds of information a database can deal with, SQL provides a set of data types.
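As a quick recap of declaration, initialization, and display, here is a minimal sketch; the @FullName variable, its length, and its value are illustrative choices rather than part of the lesson's own examples:

-- Declare a variable, initialize it, then display its value two ways
DECLARE @FullName varchar(40);
SET @FullName = 'James Knight';
PRINT @FullName;                  -- writes the value as a plain message
SELECT @FullName AS [Full Name];  -- returns the value as a one-row result
GO

Either SET or SELECT can perform the assignment; PRINT is convenient for simple messages, while SELECT lets you give the displayed column a caption.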
A Boolean value is a piece of information stated as being true or false, On or Off, Yes or No, 1 or 0. To declare a variable that holds a Boolean value, you can use the BIT or bit keyword. Here is an example:

DECLARE @IsOrganDonor bit;

After declaring a Boolean variable, you can initialize it with 0 or another value. If the variable is initialized with 0, it receives the Boolean value of False. If it is initialized with any other number, it receives a True value. Here is an example of using a Boolean variable:

1> DECLARE @IsOrganDonor bit;
2> SET @IsOrganDonor = 1;
3> SELECT @IsOrganDonor AS [Organ Donor?];
4> GO
Organ Donor?
------------
1
(1 rows affected)

An integer, also called a whole number, is a number that can start with a + or a - sign and is made of digits. Between the digits, no character other than a digit is allowed. In the real world, when a number is (very) long and becomes difficult to read, such as 79435794, you are allowed to type a symbol called the thousand separator in each thousand increment. An example is 79,435,794. In your SQL expressions, never include the thousand separator: you would receive an error. When the number starts with +, such as +44 or +8025, such a number is referred to as positive and you should omit the starting + sign. This means that the number should be written as 44 or 8025. Any number that starts with + or simply a digit is considered greater than or equal to 0, that is, positive. A positive integer is also referred to as unsigned. On the other hand, a number that starts with a - symbol is referred to as negative.

If a variable would hold whole numbers in the range of -2,147,483,648 to 2,147,483,647, you can declare it with the int keyword as its data type. Here is an example:

DECLARE @Category int
SET @Category = 1450
PRINT @Category
GO

In a Command Prompt (SQLCMD) session, this would produce:

1> DECLARE @Category INT;
2> SET @Category = 1450;
3> PRINT @Category;
4> GO
1450
(1 rows affected)

The length of an integer is the number of bytes its field can hold. For an int type, that would be 4 bytes.

If you want to use very small numbers, such as students' ages or the number of pages of a brochure or newspaper, use the tinyint data type. A variable with the tinyint data type can hold non-negative numbers that range from 0 to 255. Here is an example:

1> DECLARE @StudentAge tinyint;
2> SET @StudentAge = 14;
3> SELECT @StudentAge AS [Student's Age];
4> GO
Student's Age
-------------
14
(1 rows affected)

The smallint data type follows the same rules and principles as the int data type except that it is used to store smaller numbers that range between -32,768 and 32,767. Here is an example:

1> DECLARE @NumberOfPages SMALLINT;
2> SET @NumberOfPages = 16;
3> SELECT @NumberOfPages AS [Number of Pages];
4> GO
Number of Pages
---------------
16
(1 rows affected)

The bigint data type follows the same rules and principles as the int data type except that it can hold very large numbers from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Here is an example:

1> DECLARE @CountryPopulation BigInt;
2> SET @CountryPopulation = 16500000;
3> SELECT @CountryPopulation AS 'Country Population';
4> GO
Country Population
--------------------
16500000
(1 rows affected)

The binary data type is used for a variable that would hold hexadecimal numbers. Examples of hexadecimal numbers are 0x7238, 0xFA36, or 0xAA48D. Use the binary data type if all values of the variable would have the exact same length (or quantity). If you anticipate that some entries would be different from others, then use the alternative varbinary data type.
The varbinary type also is used for hexadecimal numbers but allows dissimilar entries, as long as all entries are hexadecimals.

A decimal number is a number that can have a period (or the character used as the decimal separator, as set in the Control Panel) between the digits. An example would be 12.625 or 44.80. Like an integer, a decimal number can start with a + or just a digit, which would make it a positive number. A decimal number can also start with a - symbol, which would make it a negative number. If the number represents a fraction, a period between the digits specifies what portion of 1 was cut. If you anticipate such a number for a field, specify its data type as numeric or decimal (either decimal or numeric would produce the same effect in SQL Server). Here is an example:

1> DECLARE @Distance DECIMAL;
2> SET @Distance = 648.16;
3> PRINT @Distance;
4> GO
648

The value displays as 648 because, when no precision and scale are specified, Microsoft SQL Server uses a default scale of 0 and the fractional part is rounded away.

A floating-point number is a fractional number, like the decimal type. Floating-point numbers can be used if you would allow the database engine to apply an approximation to the actual number. To declare such a variable, use the float or the real keyword. Here is an example:

1> DECLARE @Radius FLOAT;
2> SET @Radius = 48.16;
3> SELECT @Radius AS Radius;
4> GO
Radius
------------------------
48.159999999999997
(1 rows affected)

The precision is the number of digits used to display a numeric value. For example, the number 42005 has a precision of 5, while 226 has a precision of 3. If the data type is specified as an integer (the int type and its variants) or a floating-point number (float and real), the precision is fixed by the database and you can just accept the value set by the Microsoft SQL Server interpreter. For a decimal number (the decimal or numeric data types), Microsoft SQL Server allows you to specify the amount of precision you want. The value must be an integer between 1 and 38 (28 if you are using SQL Server 7).

A decimal number is a number that has a fractional section. Examples are 12.05 or 1450.4227. The scale of a number is the number of digits on the right side of the period (or the character set as the separator for decimal numbers for your language, as specified in the Control Panel). The scale is used only for numbers that have a decimal part, which includes currency (money and smallmoney) and decimals (numeric and decimal). If a variable is declared with the decimal or numeric data type, you can specify the amount of scale you want. The value must be an integer between 0 and the precision you specified. Here is an example:

1> DECLARE @Distance DECIMAL(6, 2);
2> SET @Distance = 648.16;
3> PRINT @Distance;
4> GO
648.16

This time, because the variable was declared with a precision of 6 and a scale of 2, the fractional part is preserved.

If a variable would hold monetary values, you can declare it with the money keyword. A variable with a money data type can hold positive or negative values from -922,337,203,685,477.5808 to +922,337,203,685,477.5807. Here is an example:

1> DECLARE @YearlyIncome Money;
2> SET @YearlyIncome = 48500.15;
3> SELECT @YearlyIncome AS [Yearly Income];
4> GO
Yearly Income
---------------------
48500.1500
(1 rows affected)

While the money data type can be used for a variable that would hold large quantities of currency values, the smallmoney data type can be applied to a variable whose value cannot be lower than -214,748.3648 nor higher than 214,748.3647. The precision and scale of a money or smallmoney variable are fixed by Microsoft SQL Server. The scale is fixed to 4.

A DATETIME data type is used for a column whose data would consist of date and/or time values. The entries must be valid date or time values, but Microsoft SQL Server allows a lot of flexibility, even to display a date in a non-traditional format.
The date value of a datetime field must fall between January 1, 1753 and December 31, 9999. To initialize a DATETIME variable, include its value between single quotes. If the value is a date, separate the components of the value with the symbol recognized in the Control Panel as the date separator. Here is an example:

1> DECLARE @IndependenceDay DATETIME;
2> SET @IndependenceDay = '01/01/1960';
3> SELECT @IndependenceDay AS [Independence Day];
4> GO
Independence Day
-----------------------
1960-01-01 00:00:00.000
(1 rows affected)

If the value is a time period, still include it in single quotes. Inside the quotes, follow the rules and formats specified in the Control Panel. Here is an example:

1> DECLARE @ArrivalTime datetime;
2> SET @ArrivalTime = '18:22';
3> SELECT @ArrivalTime AS [Arrival Time];
4> GO
Arrival Time
-----------------------
1900-01-01 18:22:00.000
(1 rows affected)

The smalldatetime data type is an alternative to datetime. It follows the same rules and principles as the datetime data type except that a date value must fall between January 1, 1900 and June 6, 2079.

A field of characters can consist of any kinds of alphabetical symbols in any combination, readable or not. If you want a variable to hold a fixed number of characters, such as the book shelf numbers of a library, declare it with the char data type. Here is an example:

DECLARE @Gender char;

By default, the char data type can be applied to a variable that would hold one character at a time. After declaring the variable, when initializing it, include its value in single quotes. Here is an example:

1> DECLARE @Gender char;
2> SET @Gender = 'M';
3> SELECT @Gender AS Gender;
4> GO
Gender
------
M
(1 rows affected)

If you include more than one character in the single quotes, only the first (leftmost) character would be stored in the variable. Here is an example:

1> DECLARE @Gender char;
2> SET @Gender = 'Male';
3> SELECT @Gender AS Gender;
4> GO
Gender
------
M
(1 rows affected)

A string is a character or a combination of characters. If a variable will hold strings of different lengths, declare it with the varchar data type. The maximum length of text that a field of varchar type can hold is equivalent to 8 kilobytes.

In some circumstances, you will need to change or specify the number of characters used in a string variable. Although a First Name and a Book Title variable would both use the varchar type, they would not hold entries of the same length. As it happens, people hardly have a first name that goes beyond 20 characters, while many book titles go beyond 32 characters. In this case, both variables would use the same data type but different lengths. To specify the maximum number of characters that can be stored in a string variable, on the right side of char or varchar, type opening and closing parentheses. Inside the parentheses, type the desired number.

To initialize the variable, if you are using the Command Prompt (SQLCMD.EXE), you can include its value between double quotes. Here is an example:

1> DECLARE @ShelfNumber char(6);
2> SET @ShelfNumber = "CI-422";
3> SELECT @ShelfNumber AS [Shelf #];
4> GO
Shelf #
-------
CI-422
(1 rows affected)

If you are using a query window, don't include the string value in double quotes; otherwise, you would receive an error. Instead, include the string in single quotes:

SET @ShelfNumber = 'CI-422';
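To make the difference between char and varchar concrete, here is a small sketch for a query window; the variable names are illustrative, and DATALENGTH is the built-in function that returns the number of bytes a value occupies:

-- char pads to its declared length; varchar stores only what you supply
DECLARE @FixedCode char(10), @VariableCode varchar(10);
SET @FixedCode = 'CI-422';
SET @VariableCode = 'CI-422';
SELECT DATALENGTH(@FixedCode) AS [char Bytes],
       DATALENGTH(@VariableCode) AS [varchar Bytes];
GO

The char variable reports 10 bytes because fixed-length values are padded with trailing spaces, while the varchar variable reports only the 6 characters actually supplied.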
As opposed to a varchar field, a text field can hold text that is longer than 8 kilobytes. The nchar, nvarchar, and ntext types follow the same rules as char, varchar, and text respectively, except that they apply to variables that hold international characters, that is, characters of languages other than US English. This is done following the rules of Unicode formats.
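As a recap of the character types, here is a small script in the same SQLCMD style as the examples above. The variable names and lengths are illustrative choices, not values required by SQL Server; the N prefix on the literal marks it as Unicode, which is what the n* types store:

1> DECLARE @FirstName varchar(20);
2> DECLARE @BookTitle nvarchar(50);
3> SET @FirstName = 'Gertrude';
4> SET @BookTitle = N'Les Misérables';
5> SELECT @FirstName AS [First Name], @BookTitle AS [Book Title];
6> GO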
http://functionx.com/sqlserver2005/Lesson04.htm
A light switch is a simple object: it's either on or off. But this simple binary system is enough to illustrate Boolean algebra, on which all modern computers are based. Yutaka Nishiyama illuminates the connection between light bulbs, logic and binary arithmetic.

Maths in everyday life

Maths is hidden within the simplest things in life, and sometimes you don't even notice it. When you come home at night, for example, and it is dark, you switch on the light in the hall. Later on, when you go upstairs to bed, you switch it off again by another switch on the upstairs landing. But how does the upstairs switch know that the light is on and that its job is to switch it off? Does it communicate with the switch downstairs somehow? Is it remote-controlled? The answers to questions like these serve as a good example of the binary system that underlies Boolean logic and the workings of modern computers.

Figure 1: a simple circuit.

Figure 2: the two circuits.

Mathematical logic and truth tables

The tables describing these two circuits are nothing more than the "truth tables" for the "logical operators" AND and OR from mathematical logic. Let's replace the statement "gate 1 is pointing upward" by any other statement P, for example "it is raining", and the statement "gate 2 is pointing upward" by any other statement Q, for example "I am wet". Now the statement P AND Q - "it is raining AND I am wet" - is true precisely when both P and Q are true. If we write "1" for "the statement is true" and "0" for "the statement is false", then the table below tells us exactly when the statement P AND Q is true for the four different combinations of truth values for P and Q. This table is exactly the same as that for the first circuit, which makes sense, as saying "the light is on" is the same as saying "gate A AND gate B are pointing upward".

P        1  1  0  0
Q        1  0  1  0
P AND Q  1  0  0  0

The second table corresponds to the OR operator. In mathematical logic, the statement P OR Q is true when one of P or Q is true or when both are true, and false when both are false. This reflects what happens in our second circuit: the light is on if gate 1 is pointing upward or gate 2 is pointing upward or if both are pointing upward. The truth table of the OR operator is exactly the same as the table for the second circuit.

P       1  1  0  0
Q       1  0  1  0
P OR Q  1  1  1  0

The simple on/off switch in figure 1 also has its logical meaning: it corresponds to the negating operator NOT. If statement P - "it is raining" - is true then the statement NOT P - "it is not raining" - is false, and vice versa.

The three little words AND, OR and NOT are immensely powerful. They are the basis of mathematical logic, which in turn gives rise to Boolean algebra. Mathematical logic provides a rigorous way to decide whether complicated statements, which often occur in mathematics, are true or false. It is based on the following idea: we have a number of statements, such as "it is raining", which can be either true or false. The operator NOT is used to negate a statement. Using AND and OR we can connect two statements. In this fashion, we can build up long "sentences", for example P AND ((Q AND S) OR NOT (T OR W)), where P, Q, S, T and W are statements. We can figure out whether such a sentence is true by looking at the truth values of the individual statements and consulting the truth tables of AND and OR.

Another important operator is IMPLIES, where P IMPLIES Q stands for "if P then Q"; its truth table is given below. To see why this truth table makes sense, let us first assume that the statement P IMPLIES Q - "if it rains then I get wet" - is true, the situation indicated by a "1" in the last line of the truth table.
This could happen, for example, if I am standing in the middle of a field without any umbrella or other protection. Now if it is indeed raining, then you know immediately that I am wet. Both the statements "it is raining" and "I am wet" are true, and we get the entry "1" for both P and Q in the truth table. If it is not raining, you don't know if I'm wet or not - I might have fallen into a pond and got soaked, or I might be dry. The statement "I am wet" can be either true or false, and this accounts for the last two columns in the truth table. The only remaining possibility is that it is raining but I am not wet. In this case the statement "if it rains then I get wet" is clearly false - I must have brought an umbrella after all. This is the second column of the truth table.

P            1  1  0  0
Q            1  0  1  0
P IMPLIES Q  1  0  1  1

But this truth table is exactly the same as that for the expression NOT P OR Q - "it is not raining or I am wet". This makes sense: if P IMPLIES Q is true, then I can replace the statement "it is raining" by "I am wet", as one follows directly from the other. The true statement "it is not raining or it is raining" then becomes "it is not raining or I am wet" - NOT P OR Q is true. If P IMPLIES Q is false, then I may well be dry even though it is raining - the statement "it is not raining or I am wet" is false. We see that P IMPLIES Q is true or false precisely when NOT P OR Q is true or false. The two are equivalent, and we can express the relationship P IMPLIES Q using only the operators NOT and OR.

Boolean Algebra

In the mid-nineteenth century, the English mathematician George Boole invented what is known today as "Boolean algebra". His ingenious idea was to treat the individual statements, like P and Q above, as variables and to define the logical operators as mathematical operations, similar to addition and multiplication. If, for example, we write PQ for P AND Q and P+Q for P OR Q, then our two circuits above can be expressed by the two equations Light = AB and Light = A+B. Dealing with logical statements, simplifying them and finding their truth value, then just turns into manipulation of algebraic expressions, like simplifying or solving an equation. These algebraic manipulations are of course governed by strict rules, and performing them becomes a "mindless" activity - you just follow a set of rules, a machine can do it. Indeed, today, all computers are based on Boolean algebra, another example that mathematics is often a hundred years ahead of its time. Incidentally, it was Boole's work that inspired the English mathematician John Venn to establish his comprehensive theory of Venn diagrams. Venn diagrams deal with sets, their union and intersection.

A Venn diagram showing two intersecting sets A and B.

Binary arithmetic

So how do we get logic into a computer? How can we use the ideas set out above to get a computer to perform high-speed, high-precision calculations? A computer operates on six basic electronic circuits: the AND, OR and NOT circuits described above, the negations of AND and OR, known as NAND and NOR, and a circuit called XOR, or exclusive OR. XOR corresponds to the logical expression (P AND NOT Q) OR (NOT P AND Q); its truth table is given below.

P        1  1  0  0
Q        1  0  1  0
P XOR Q  0  1  1  0

When performing binary addition, just like for addition in base ten, we need to reserve two spaces for our result. In one space, the right-hand one, we write down the result of adding up the units. The other, left-hand space is an overflow area: here we enter a 1 if our result reaches two. The table below gives the rules for binary addition.

P      1   1   0   0
Q      1   0   1   0
P + Q  10  01  01  00
If we ignore the overflow area, then what we get is exactly the truth table for XOR. Ignoring the area for the units gives the truth table for AND. Superimposing the two logical operations, we can define binary addition. And once we have that, the other three arithmetical operations come for free: subtraction is just addition "backwards", multiplication is repeated addition and division is repeated subtraction. The arithmetic operations can be replaced by logical operations, which are performed using Boolean algebra.

And this brings us neatly back to our light switch theme: the XOR circuit - or, to be precise, its negation - is exactly what you need to wire up a light to two switches. For this we have to abandon our simple gates for so-called "three-way switches", which connect to part of the circuit in either position (see figures 3 and 4).

Figure 3: a three-way switch.

Figure 4: wiring two switches to one bulb.

An infinite story house

And what if you live in a three-story house and want to have three switches for the light in the hall? For this you need two three-way switches and one four-way switch, which is shown in figure 5. You arrange the three switches in a row, with the four-way switch in the middle, as in figure 6. As before, we write "0" for "the three-way switch is pointing downward" and "1" for "the three-way switch is pointing upward". There are now eight different possibilities for the arrangement of the switches.

Figure 5: the left-hand position of the switch is 0 and the right-hand position is 1.

Figure 6: wiring three switches to a bulb. Switches 1 and 2 are in position 1 and switch 3 is in position 0. The switches' values add up to an even number and the light is on.

As you can see in figure 6, the light is on when two or none of the three switches are in position 1. To put it in the language of logic, the statement "the light is on" is true exactly when the statement "the switch is in position 1" is true for two of the switches, or for none of the switches. It is true precisely when the values for the three switches add up to an even number. If we write "A" for the statement "switch 1 is in position 1", "B" for "switch 2 is in position 1" and "C" for "switch 3 is in position 1", then the light is on precisely when the statement (A AND B AND NOT C) OR (A AND NOT B AND C) OR (NOT A AND B AND C) OR (NOT A AND NOT B AND NOT C) is true.

And the good thing about all this is that it will work for any number of switches. No matter how tall your house is, the same principles will apply: the light will come on whenever the values of the different switches add up to an even number, and you can wire up the switches using a circuit with three-way switches at each end and four-way switches in the middle. The process works all the way up to infinity.

About the author

Yutaka Nishiyama is a professor at Osaka University of Economics, Japan. After studying mathematics at the University of Kyoto he went on to work for IBM Japan for 14 years. He is interested in the mathematics that occurs in daily life, and has written seven books about the subject. The most recent one, called "The mystery of five in nature", investigates, amongst other things, why many flowers have five petals. Professor Nishiyama is currently visiting the University of Cambridge.
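As a quick computational check of the article's two key claims (binary addition decomposes into XOR and AND, and the multi-switch light follows an even-parity rule), here is a short, self-contained Python sketch; it is an added illustration, not part of Nishiyama's article.

from itertools import product

# 1. One-bit binary addition: the units digit is XOR, the carry digit is AND.
for p, q in product([0, 1], repeat=2):
    carry, units = divmod(p + q, 2)
    assert units == (p ^ q) and carry == (p & q)
    print(p, q, "->", carry, units)

# 2. The three-switch light: on exactly when the switch values sum to an even number.
for a, b, c in product([0, 1], repeat=3):
    # The article's explicit Boolean expression for three switches:
    expr = ((a and b and not c) or (a and not b and c)
            or (not a and b and c) or (not a and not b and not c))
    assert bool(expr) == (sum((a, b, c)) % 2 == 0)
    print((a, b, c), "light on" if expr else "light off")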
http://plus.maths.org/content/os/issue36/features/nishiyama/index
Weightlessness, or an absence of 'weight', is in fact an absence of stress and strain resulting from externally applied forces, typically contact forces from floors, seats, beds, scales, and the like. Counterintuitively, a uniform gravitational field does not by itself cause stress or strain, and a body in free fall in such an environment experiences no g-force acceleration and feels weightless. This is also termed zero-g.

When bodies are acted upon by non-gravitational forces, as in a centrifuge, a rotating space station, or within a space ship with rockets firing, a sensation of weight is produced, as the forces overcome the body's inertia. In such cases, a sensation of weight, in the sense of a state of stress, can occur even if the gravitational field is zero. In such cases, g-forces are felt, and bodies are not weightless.

When the gravitational field is non-uniform, a body in free fall suffers tidal effects and is not stress-free. Near a black hole, such tidal effects can be very strong. In the case of the Earth, the effects are minor, especially on objects of relatively small dimension (such as the human body or a spacecraft), and the overall sensation of weightlessness in these cases is preserved. This condition is known as microgravity, and it prevails in orbiting spacecraft.

Weightlessness in Newtonian mechanics

In Newtonian mechanics the term "weight" is given two distinct interpretations by engineers.

- Weight1: Under this interpretation, the "weight" of a body is the gravitational force exerted on the body, and this is the notion of weight that prevails in engineering. Near the surface of the earth, a body whose mass is 1 kg has a weight of approximately 10 N, independent of its state of motion, free fall or not. Weightlessness in this sense can be achieved by removing the body far away from the source of gravity. It can also be attained by placing the body at a neutral point between two gravitating masses.
- Weight2: Weight can also be interpreted as that quantity which is measured when one uses scales. What is being measured there is the force exerted by the body on the scales. In a standard weighing operation, the body being weighed is in a state of equilibrium as a result of a force exerted on it by the weighing machine balancing the gravitational force. By Newton's 3rd law, there is an equal and opposite force exerted by the body on the machine. This force is called weight2. The force is not gravitational. Typically, it is a contact force and not uniform across the mass of the body. If the body is placed on the scales in a lift (an elevator) in free fall in pure uniform gravity, the scale would read zero, and the body is said to be weightless, i.e. its weight2 = 0. This is the dominant notion of weightlessness in engineering discourse. It describes the condition in which the body is stress free and undeformed. This is the weightlessness of free fall in a uniform gravitational field. (The situation is more complicated when the gravitational field is not uniform, or when a body is subject to multiple forces which may, for instance, cancel each other and produce a state of stress albeit weight2 being zero. See below.)

To sum up, we have two notions of weight, of which weight1 is dominant. Yet 'weightlessness' is typically exemplified not by absence of weight1 but by the absence of stress associated with weight2. This is the intended sense of weightlessness in what follows below.
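The weight2 notion is easy to make quantitative. The Python sketch below (an added illustration, not part of the original article) computes the scale reading N = m(g + a) for a body of mass m standing on scales in a lift accelerating upward at a; setting a = -g (free fall) makes the reading zero, which is exactly weightlessness in the weight2 sense. The mass is an arbitrary example value.

g = 9.81  # gravitational field strength near the Earth's surface, m/s^2

def scale_reading(mass, lift_acceleration):
    # Newton's second law for the body: N - m*g = m*a,
    # so the contact force (weight2) is N = m*(g + a).
    return mass * (g + lift_acceleration)

mass = 70.0  # kg
print(scale_reading(mass, 0.0))   # lift at rest: about 687 N, the usual weight
print(scale_reading(mass, 2.0))   # accelerating upward: the scale reads more
print(scale_reading(mass, -g))    # free fall: the scale reads 0, weightlessness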
A body is stress free, and exerts zero weight2, when the only force acting on it is weight1, as when in free fall in a uniform gravitational field. Without subscripts, one ends up with the odd-sounding conclusion that a body is weightless when the only force acting on it is its weight.

The apocryphal apple that fell on Newton's head can be used to illustrate the issues involved. An apple weighs approximately 1 newton. This is the weight1 of the apple and is considered to be a constant even while it is falling. During that fall, its weight2 however is zero: ignoring air resistance, the apple is stress free. When it hits Newton, the sensation felt by Newton would depend upon the height from which the apple falls, and the weight2 of the apple at the moment of impact may be many times greater than 1 N. In the story, it was great enough to make the great man invent the theory of gravity. It is this weight2 which distorts the apple. On its way down, the apple in its free fall does not suffer any distortion, as the gravitational field is uniform.

Stress during free fall

- 1. In a uniform gravitational field: Consider any cross-section dividing the body into two parts. Both parts have the same acceleration, and the force exerted on each is supplied by the external source of the field. There is no force exerted by one part on the other. Stress at the cross-section is zero. Weight2 is zero.
- 2. In a non-uniform gravitational field: Under gravity alone, one part of the body may have a different acceleration from another part. This would tend to deform the body and generate internal stresses if the body resists deformation. Weight2 is not 0.

Throughout this discussion on using stress as an indicator of weight, any pre-stress which may exist within a body, caused by a force exerted on one part by another, is not relevant. The only relevant stresses are those generated by external forces applied to the body.

The definition and use of 'weightlessness' is difficult unless it is understood that the sensation of "weight" in everyday terrestrial experience results not from gravitation acting alone (which is not felt), but instead from the mechanical forces that resist gravity. An object in straight free fall, or in a more complex inertial trajectory of free fall (such as within a reduced gravity aircraft or inside a space station), experiences weightlessness, since it does not experience the mechanical forces that cause the sensation of weight.

Force fields other than gravity

As noted above, weightlessness occurs when

1. no force acts on the object, or
2. uniform gravity acts solely by itself.

For the sake of completeness, a third minor possibility has to be added: a body may be subject to a field which is not gravitational but such that the force on the object is uniformly distributed across the object's mass. A uniformly charged, electrically charged body in a uniform electric field is a possible example. Electric charge here replaces the usual gravitational charge. Such a body would then be stress free and be classed as weightless. Various types of levitation may fall into this category, at least approximately.

Weightlessness and proper acceleration

A body in free fall (which by definition entails no aerodynamic forces) near the surface of the earth has an acceleration approximately equal to 10 m/s² with respect to a coordinate frame tied to the earth.
If the body is in a freely falling lift and subject to no pushes or pulls from the lift or its contents, the acceleration with respect to the lift would be zero. If, on the other hand, the body is subject to forces exerted by other bodies within the lift, it will have an acceleration with respect to the freely falling lift. This acceleration, which is not due to gravity, is called "proper acceleration". On this approach, weightlessness holds when proper acceleration is zero.

How to avoid weightlessness

Weightlessness is in contrast with typical human experiences in which a non-uniform force is acting, such as:

- standing on the ground, sitting in a chair on the ground, etc., where gravity is countered by the support force of the ground,
- flying in a plane, where a support force is transmitted from the lift the wings provide (special trajectories which form an exception are described below),
- during atmospheric reentry, or during the use of a parachute, when atmospheric drag decelerates a vehicle,
- during an orbital maneuver in a spacecraft, or during the launch phase, when rocket engines provide thrust.

In cases where an object is not weightless, as in the above examples, a force acts non-uniformly on the object in question. Aerodynamic lift, drag, and thrust are all non-uniform forces (they are applied at a point or surface, rather than acting on the entire mass of an object), and thus create the phenomenon of weight. This non-uniform force may also be transmitted to an object at the point of contact with a second object, such as the contact between the surface of the Earth and one's feet, or between a parachute harness and one's body.

Tidal forces

Tidal forces arise when the gravitational field is not uniform and gravitational gradients exist. Such non-uniformity is in fact the norm, and strictly speaking any object of finite size, even in free fall, is subject to tidal effects. These are impossible to remove by inertial motion, except at one single nominated point of the body. The Earth is in free fall, but the presence of tides indicates that it is in a non-uniform gravitational field. This non-uniformity is due more to the moon than the sun. The total gravitational field due to the sun is much stronger than that of the moon, but it has a minor tidal effect compared with that of the moon because of the relative distances involved. Weight1 of the earth is essentially due to the sun's gravity. But its state of stress and deformation, represented by the tides, is due more to non-uniformity in the gravitational field of the nearby moon.

When the size of a region being considered is small relative to its distance from the gravitating mass, the assumption of a uniform gravitational field holds to a good approximation. Thus a person is small relative to the radius of the Earth, and the field for a person at the surface of the earth is approximately uniform. The field is strictly not uniform, however, and is responsible for the phenomenon of microgravity. Objects near a black hole are subject to a highly non-uniform gravitational field.

Frames of reference

In all inertial reference frames, while weightlessness is experienced, Newton's first law of motion is obeyed locally within the frame. Inside the frame (for example, inside an orbiting ship or free-falling elevator), unforced objects keep their velocity relative to the frame. Objects not in contact with other objects "float" freely.
If the inertial trajectory is influenced by gravity, the reference frame will be an accelerated frame as seen from a position outside the gravitational attraction, and (seen from far away) the objects in the frame (elevator, etc.) will appear to be under the influence of a force (the so-called force of gravity). As noted, objects subject solely to gravity do not feel its effects. Weightlessness can thus be realised for short periods of time in an airplane following a specific parabolic flight path. It is simulated poorly, with many differences, in neutral buoyancy conditions, such as immersion in a tank of water.

Zero-g, "zero gravity", accelerometers

Zero-g is an alternative term for weightlessness and holds, for instance, in a freely falling lift. Zero-g is subtly different from the complete absence of gravity, something which is impossible due to the presence of gravity everywhere in the universe. "Zero-gravity" may also be used to mean effective weightlessness, neglecting tidal effects. Microgravity (or µg) is used to refer to situations that are substantially weightless but where g-force stresses within objects due to tidal effects, as discussed above, are around a millionth of those at the Earth's surface. Accelerometers can only detect g-force, i.e. weight2 (= mass × proper acceleration); they cannot detect free fall.[b]

Sensation of weight

Humans experience their own body weight as a result of a supporting force: the normal force applied to a person by the surface of a supporting object on which the person is standing or sitting. In the absence of this force, a person would be in free fall and would experience weightlessness. It is the transmission of this reaction force through the human body, and the resultant compression and tension of the body's tissues, that results in the sensation of weight. Because of the distribution of mass throughout a person's body, the magnitude of the reaction force varies between a person's feet and head. At any horizontal cross-section of a person's body (as with any column), the size of the compressive force being resisted by the tissues below the cross-section is equal to the weight of the portion of the body above the cross-section. In the pose adopted in the accompanying illustration, the shoulders carry the weight of the outstretched arms and are subject to a considerable torque.

A common misconception

A common conception about spacecraft orbiting the earth is that they are operating in a gravity-free environment. Although there is a way of making sense of this within the physics of Einstein's general relativity, within Newtonian physics this is technically inaccurate. Spacecraft are held in orbit by the gravity of the planet which they are orbiting. In Newtonian physics, the sensation of weightlessness experienced by astronauts is not the result of there being zero gravitational acceleration (as seen from the Earth), but of there being no g-force that an astronaut can feel because of the free-fall condition, and also there being zero difference between the acceleration of the spacecraft and the acceleration of the astronaut. Space journalist James Oberg explains the phenomenon this way:

The myth that satellites remain in orbit because they have "escaped Earth's gravity" is perpetuated further (and falsely) by almost universal misuse of the word "zero gravity" to describe the free-falling conditions aboard orbiting space vehicles. Of course, this isn't true; gravity still exists in space.
It keeps satellites from flying straight off into interstellar emptiness. What's missing is "weight", the resistance of gravitational attraction by an anchored structure or a counterforce. Satellites stay in space because of their tremendous horizontal speed, which allows them — while being unavoidably pulled toward Earth by gravity — to fall "over the horizon." The ground's curved withdrawal along the Earth's round surface offsets the satellites' fall toward the ground. Speed, not position or lack of gravity, keeps satellites in orbit around the earth.

A geostationary satellite is of special interest in this context. Unlike other objects in the sky, which rise and set every day, such a satellite permanently hovers up there, apparently defying gravity. In actual fact, it is simply in an orbit whose period is one day.

To a modern physicist working with Einstein's general theory of relativity, the situation is even more complicated than is suggested above. Einstein's theory suggests that it actually is valid to consider that objects in inertial motion (such as falling in an elevator, or in a parabola in an airplane, or orbiting a planet) can indeed be considered to experience a local loss of the gravitational field in their rest frame. Thus, in the point of view (or frame) of the astronaut or orbiting ship, there actually is nearly zero proper acceleration (the acceleration felt locally), just as would be the case far out in space, away from any mass. It is thus valid to consider that most of the gravitational field in such situations is actually absent from the point of view of the falling observer, just as the colloquial view suggests (see equivalence principle for a fuller explanation of this point). This loss of gravity for the falling or orbiting observer, in Einstein's theory, is due to the falling motion itself, and (again as in Newton's theory) not due to increased distance from the Earth; the gravity is nevertheless considered to be absent in that frame. In fact, Einstein's realization that a pure gravitational interaction cannot be felt, if all other forces are removed, was the key insight leading him to the view that the gravitational "force" can in some ways be viewed as non-existent. Rather, objects tend to follow geodesic paths in curved space-time, and this is "explained" as a force by "Newtonian" observers, who assume that space-time is "flat" and thus do not have a reason for curved paths (i.e., the "falling motion" of an object near a gravitational source).

In the theory of general relativity, the only gravity which remains for the observer following a falling path or "inertial" path near a gravitating body is that which is due to non-uniformities which remain in the gravitational field, even for the falling observer. This non-uniformity, which is a simple tidal effect in Newtonian dynamics, constitutes the "microgravity" which is felt by all spatially extended objects falling in any natural gravitational field that originates from a compact mass. The reason for these tidal effects is that such a field will have its origin in a centralized place (the compact mass), and thus will diverge and vary slightly in strength according to distance from the mass. It will thus vary across the width of the falling or orbiting object. Thus, the term "microgravity", an overly technical term from the Newtonian view, is a valid and descriptive term in the general relativistic (Einsteinian) view.
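The geostationary satellite mentioned earlier invites a small calculation. The Python sketch below (added for illustration; the constants are standard textbook values, not figures from the article) solves Kepler's third law for the circular-orbit radius whose period is one sidereal day.

import math

GM = 3.986004418e14   # standard gravitational parameter of Earth, m^3/s^2
T = 86164.1           # one sidereal day in seconds (Earth's rotation period)

# For a circular orbit, GM/r^2 = (2*pi/T)^2 * r, so r = (GM*T^2 / (4*pi^2))^(1/3).
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)

print(round(r / 1000), "km from Earth's centre")            # about 42,164 km
print(round((r - 6.378e6) / 1000), "km above the equator")  # about 35,786 km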
The term micro-g environment (also µg, often referred to by the term microgravity) is more or less a synonym of weightlessness and zero-g, but indicates that g-forces are not quite zero, just very small.

Weightless and reduced weight environments

Reduced weight in aircraft

Airplanes have been used since 1959 to provide a nearly weightless environment in which to train astronauts, conduct research, and film motion pictures. Such aircraft are commonly referred to by the nickname "Vomit Comet". To create a weightless environment, the airplane flies in a six-mile-long parabolic arc, first climbing, then entering a powered dive. During the arc, the propulsion and steering of the aircraft are controlled such that the drag (air resistance) on the plane is cancelled out, leaving the plane to behave as it would if it were free-falling in a vacuum. During this period, the plane's occupants experience about 25 seconds of weightlessness, before experiencing about 25 seconds of 2 g acceleration (twice their normal weight) during the pull-out from the parabola. A typical flight lasts around two hours, during which 50 parabolas are flown.

NASA's Reduced Gravity Aircraft

Versions of such airplanes have been operated by NASA's Reduced Gravity Research Program since 1973, where the unofficial nickname originated. NASA later adopted the official nickname 'Weightless Wonder' for publication. NASA's current Reduced Gravity Aircraft, "Weightless Wonder VI", a McDonnell Douglas C-9, is based at Ellington Field (KEFD), near Lyndon B. Johnson Space Center.

NASA's Microgravity University - Reduced Gravity Flight Opportunities Plan, also known as the Reduced Gravity Student Flight Opportunities Program, allows teams of undergraduates to submit a microgravity experiment proposal. If selected, the teams design and implement their experiment, and students are invited to fly on NASA's Vomit Comet.

European Space Agency A300 Zero-G

The European Space Agency flies parabolic flights on a specially modified Airbus A300 B2 aircraft, in order to perform research in microgravity. ESA flies campaigns of three flights on consecutive days, each flying about 30 parabolas, for a total of about 10 minutes of weightlessness per flight. The ESA campaigns are currently operated from Bordeaux - Mérignac Airport in France by the company Novespace, while the aircraft is operated by DGA Essais en Vol. The first ESA Zero-G flights were in 1984, using a NASA KC-135 aircraft in Houston, Texas. As of May 2010, the ESA had flown 52 campaigns and also 9 student parabolic flight campaigns. Other aircraft it has used include the Russian Ilyushin Il-76 MDK and the French Caravelle.

Ecuadorian T-39 Condor

The Ecuadorian Space Agency jointly operates, with the Ecuadorian Air Force, the Ecuadorian Micro Gravity Flight Program, using a T-39 Sabreliner modified in-house to fly "cybernetically assisted" parabolas. It has been in operation since May 2008 and is the first Latin American microgravity aircraft. On June 19, 2008, the plane carried seven-year-old Jules Nader as he set the first Guinness World Record for the youngest human being to fly in microgravity. Nader worked on a fluid dynamics experiment designed by his brother, Gerard Nader.

The Zero Gravity Corporation, founded in 1993 by Peter Diamandis, Byron Lichtenberg, and Ray Cronise, operates a modified Boeing 727 which flies parabolic arcs to create 25–30 seconds of weightlessness.
Flights may be purchased for both tourism and research purposes.

Ground-based drop facilities

Ground-based facilities that produce weightless conditions for research purposes are typically referred to as drop tubes or drop towers.

NASA's Zero Gravity Research Facility, located at the Glenn Research Center in Cleveland, Ohio, is a 145-meter vertical shaft, largely below the ground, with an integral vacuum drop chamber, in which an experiment vehicle can be in free fall for a duration of 5.18 seconds, falling a distance of 132 meters. The experiment vehicle is stopped in approximately 4.5 meters of pellets of expanded polystyrene and experiences a peak deceleration of 65g.

Also at NASA Glenn is the 2.2 Second Drop Tower, which has a drop distance of 24.1 meters. Experiments are dropped in a drag shield in order to reduce the effects of air drag. The entire package is stopped in a 3.3-meter-tall air bag, at a peak deceleration rate of approximately 20g. While the Zero Gravity Facility conducts one or two drops per day, the 2.2 Second Drop Tower can conduct up to twelve drops per day.

Humans cannot use these gravity shafts, as the deceleration experienced by the drop chamber would likely kill or seriously injure anyone inside; 20g is about the highest deceleration that a fit and healthy human can withstand momentarily without sustaining injury.

Other drop facilities worldwide include:

- Micro-Gravity Laboratory of Japan (MGLAB) – 4.5 s free fall
- Experimental drop tube of the metallurgy department of Grenoble – 3.1 s free fall
- Fallturm Bremen at the University of Bremen – 4.74 s free fall
- Queensland University of Technology Drop Tower – 2.0 s free fall

Neutral buoyancy

Weightlessness can also be simulated by creating the condition of neutral buoyancy, in which human subjects and equipment are placed in a water environment and weighted or buoyed until they hover in place. NASA uses neutral buoyancy to prepare for extra-vehicular activity (EVA) at its Neutral Buoyancy Laboratory. Neutral buoyancy is also used for EVA research at the University of Maryland's Space Systems Laboratory, which operates the only neutral buoyancy tank at a college or university.

Neutral buoyancy is not identical to weightlessness. Gravity still acts on all objects in a neutral buoyancy tank; thus, astronauts in neutral buoyancy training still feel their full body weight within their spacesuits, although the weight is well-distributed, similar to the force on a human body in a water bed, or when simply floating in water. The suit and astronaut together are under no net force, as for any object that is floating or supported in water, such as a scuba diver at neutral buoyancy. Water also produces drag, which is not present in vacuum.

Weightlessness in a spacecraft

Long periods of weightlessness occur on spacecraft outside a planet's atmosphere, provided no propulsion is applied and the vehicle is not rotating. Weightlessness does not occur when a spacecraft is firing its engines or when re-entering the atmosphere, even if the resultant acceleration is constant. The thrust provided by the engines acts at the surface of the rocket nozzle rather than acting uniformly on the spacecraft, and is transmitted through the structure of the spacecraft via compressive and tensile forces to the objects or people inside.
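Returning to the drop facilities described above, the quoted figures can be checked with constant-acceleration kinematics. This Python sketch is my addition, using g = 9.81 m/s²; it reproduces the roughly 5.2-second free fall over NASA's 132-meter drop distance and estimates the average deceleration in the 4.5 m catch bed (the quoted 65g is the peak value, which is higher than the average).

import math

g = 9.81        # m/s^2
drop = 132.0    # free-fall distance in the Zero Gravity Research Facility, m
catch = 4.5     # stopping distance in the polystyrene bed, m

t = math.sqrt(2 * drop / g)    # time of free fall, from d = g*t^2/2
v = g * t                      # speed at the end of the fall
a_stop = v**2 / (2 * catch)    # average deceleration, from v^2 = 2*a*d

print(round(t, 2), "s of free fall")                   # about 5.19 s
print(round(v, 1), "m/s at the bottom")                # about 50.9 m/s
print(round(a_stop / g, 1), "g average deceleration")  # about 29 g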
Weightlessness in an orbiting spacecraft is physically identical to free-fall, with the difference that gravitational acceleration causes a net change in the direction, rather than the magnitude, of the spacecraft's velocity. This is because the acceleration vector is perpendicular to the velocity vector. In typical free-fall, the acceleration of gravity acts along the direction of an object's velocity, linearly increasing its speed as it falls toward the Earth, or slowing it down if it is moving away from the Earth. In the case of an orbiting spacecraft, which has a velocity vector largely perpendicular to the force of gravity, gravitational acceleration does not produce a net change in the object's speed, but instead acts centripetally, to constantly "turn" the spacecraft's velocity as it moves around the Earth. Because the acceleration vector turns along with the velocity vector, they remain perpendicular to each other. Without this change in the direction of its velocity vector, the spacecraft would move in a straight line, leaving the Earth altogether. Weightlessness at the center of a planet The net gravitational force due to a spherically symmetrical planet is zero at the center. This is clear because of symmetry, and also from Newton's shell theorem which states that the net gravitational force due to a spherically symmetric shell, e.g., a hollow ball, is zero anywhere inside the hollow space. Thus the material at the center is weightless. Human health effects Following the advent of space stations that can be inhabited for long periods of time, exposure to weightlessness has been demonstrated to have some deleterious effects on human health. Humans are well-adapted to the physical conditions at the surface of the Earth. In response to an extended period of weightlessness, various physiological systems begin to change and atrophy. Though these changes are usually temporary, long term health issues can result. The most common problem experienced by humans in the initial hours of weightlessness is known as space adaptation syndrome or SAS, commonly referred to as space sickness. Symptoms of SAS include nausea and vomiting, vertigo, headaches, lethargy, and overall malaise. The first case of SAS was reported by cosmonaut Gherman Titov in 1961. Since then, roughly 45% of all people who have flown in space have suffered from this condition. The duration of space sickness varies, but in no case has it lasted for more than 72 hours, after which the body adjusts to the new environment. NASA jokingly measures SAS using the "Garn scale", named for United States Senator Jake Garn, whose SAS during STS-51-D was the worst on record. Accordingly, one "Garn" is equivalent to the most severe possible case of SAS. The most significant adverse effects of long-term weightlessness are muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. These effects can be minimized through a regimen of exercise. Astronauts subject to long periods of weightlessness wear pants with elastic bands attached between waistband and cuffs to compress the leg bones and reduce osteopenia. Other significant effects include fluid redistribution (causing the "moon-face" appearance typical of pictures of astronauts in weightlessness), a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, excess flatulence, and puffiness of the face. 
These effects begin to reverse quickly upon return to the Earth. In addition, after long space flight missions, male astronauts may experience severe eyesight problems. Such eyesight problems may be a major concern for future deep space flight missions, including a manned mission to the planet Mars.

Effects on non-human organisms

Russian scientists have observed differences between cockroaches conceived in space and their terrestrial counterparts. The space-conceived cockroaches grew more quickly, and also grew up to be faster and tougher. Fowl eggs which are fertilized in microgravity may not develop properly.

Technical adaptation in zero-gravity

Weightlessness can cause serious problems in technical instruments, especially those consisting of many mobile parts. Physical processes that depend on the weight of a body (like convection, boiling water, or burning candles) behave differently without a certain amount of gravity. Cohesion and advection play a bigger role in space. Everyday tasks like washing or going to the bathroom are not possible without adaptation. To use toilets in space, like the one on the International Space Station, astronauts have to fasten themselves to the seat. A fan creates suction that carries the waste away. Drinking is aided with a straw or from tubes.

See also

- Artificial gravity
- Effect of spaceflight on the human body
- Microgravity University
- Vomit Comet

Notes

- In General Relativity (GR), a body is inertial if it is in free fall, i.e. it has no forces acting on it. Gravity is not a force in GR. Inertial bodies can be in a state of acceleration with respect to each other, unlike in Newtonian physics, where all inertial frames move at a constant velocity with respect to each other.
- Note: Accelerometers can detect a sudden change to free fall (as when a device is dropped), but they do this by measuring the change of acceleration from some value to zero. An accelerometer using a single weight or vibrating element and not measuring gradients across distances inside the accelerometer (which could be used to detect microgravity or tidal forces) cannot tell the difference between free fall in a gravity field and weightlessness due to being far from masses and sources of gravitation. This is due to Einstein's strong equivalence principle.

References

- Oberg, James (May 1993). "Space myths and misconceptions". Omni 15 (7). Retrieved 2007-05-02.
- Reduced Gravity Research Program
- NASA "Weightless Wonder"
- Novespace: microgravity, airborne missions
- European Space Agency. "Parabolic Flight Campaigns". ESA Human Spaceflight web site. Retrieved 2011-10-28.
- European Space Agency. "A300 Zero-G". ESA Human Spaceflight web site. Retrieved 2006-11-12.
- European Space Agency. "Next campaign". ESA Human Spaceflight web site. Retrieved 2006-11-12.
- European Space Agency. "Campaign Organisation". ESA Human Spaceflight web site. Retrieved 2006-11-12.
- EXA and FAE Develop First Zero-G Plane in Latin America
- Youngest person to experience microgravity
- Marshall Space Flight Center Drop Tube Facility
- Kanas, Nick; Manzey, Dietrich (2008). "Basic Issues of Human Adaptation to Space Flight". Space Psychology and Psychiatry, Space Technology Library 22: 15–48. doi:10.1007/978-1-4020-6770-9_2.
- http://www.jsc.nasa.gov/history/oral_histories/StevensonRE/RES_5-13-99.pdf, pg 35, Johnson Space Center Oral History Project, interview with Dr. Robert Stevenson: "Jake Garn was sick, was pretty sick. I don't know whether we should tell stories like that.
But anyway, Jake Garn, he has made a mark in the Astronaut Corps because he represents the maximum level of space sickness that anyone can ever attain, and so the mark of being totally sick and totally incompetent is one Garn. Most guys will get maybe to a tenth Garn, if that high. And within the Astronaut Corps, he forever will be remembered by that."
- "Health Fitness", Space Future
- "The Pleasure of Spaceflight", Toyohiro Akiyama, Journal of Space Technology and Science, Vol. 9 No. 1, spring 1993, pp. 21-23
- Mader, T. H. et al. (2011). "Optic Disc Edema, Globe Flattening, Choroidal Folds, and Hyperopic Shifts Observed in Astronauts after Long-duration Space Flight". Ophthalmology 118 (10): 2058–2069. doi:10.1016/j.ophtha.2011.06.021. PMID 21849212.
- Puiu, Tibi (November 9, 2011). "Astronauts' vision severely affected during long space missions". zmescience.com. Retrieved February 9, 2012.
- News (CNN-TV, 02/09/2012) - Video (02:14) - Male Astronauts Return With Eye Problems
- Space Staff (13 March 2012). "Spaceflight Bad for Astronauts' Vision, Study Suggests". Space.com. Retrieved 14 March 2012.
- Kramer, Larry A. et al. (13 March 2012). "Orbital and Intracranial Effects of Microgravity: Findings at 3-T MR Imaging". Radiology. doi:10.1148/radiol.12111986. Retrieved 14 March 2012.
- Cherry, Jonathan D.; Frost, Jeffrey L.; Lemere, Cynthia A.; Williams, Jacqueline P.; Olschowka, John A.; O'Banion, M. Kerry. "Galactic Cosmic Radiation Leads to Cognitive Impairment and Increased Aβ Plaque Accumulation in a Mouse Model of Alzheimer's Disease". PLOS ONE 7 (12): e53275. doi:10.1371/journal.pone.0053275. Retrieved January 7, 2013.
- Staff (January 1, 2013). "Study Shows that Space Travel is Harmful to the Brain and Could Accelerate Onset of Alzheimer's". SpaceRef. Retrieved January 7, 2013.
- Cowing, Keith (January 3, 2013). "Important Research Results NASA Is Not Talking About (Update)". NASA Watch. Retrieved January 7, 2013.
- "Mutant super-cockroaches from space". New Scientist. January 21, 2008.
- "Egg Experiment in Space Prompts Questions". New York Times. 1989-03-31.
- "Space flight shown to alter ability of bacteria to cause disease". The Biodesign Institute at Arizona State University. 2007-09-31.

External links

- Microgravity Centre
- Criticism of the terms "Zero Gravity" and "Microgravity"
- Microgravity Flight with Zero-G aircraft
- maniacworld.com "NASA Reduced Gravity Aircraft", videos of the NASA Reduced Gravity Aircraft and of participants in a flight on that aircraft.
- How Weightlessness Works at HowStuffWorks
- NASA - SpaceResearch - Human Physiology Research and the ISS: Staying Fit Along the Journey
- "Why are astronauts weightless?" Video explanation of the fallacy of "zero gravity".
http://en.wikipedia.org/wiki/Weightlessness
How many of you use geometry terms and definitions in your daily conversations? For that matter, how many of you remember geometry class, much less the words associated with the subject? You might want to brush up on your geometry skills, because if you have a child, they might need help. Who are they going to ask for help? That is right: you! So get prepared for the wonderful world that is geometry.

For all of you who would like a little assistance or a refresher course on geometry terms, you have come to the right place. Helping with the concepts of geometry we will leave to the experts. The following is a list of common geometry terms and their definitions, which you may have all forgotten around the time you left geometry class for the last time in sophomore or junior year. So let the fun begin!

Common Geometry Terms and Their Definitions

- Point – This refers to a location in space, or a dot on a piece of paper.
- Line – A line connects two points along the shortest possible path, then continues on forever in both directions.
- Line Segment – This refers to the part of a line between two points, hence the word segment.
- Perpendicular Line Segment – This is a line segment that crosses another line segment at an angle of 90 degrees.
- Parallel Line Segments – Line segments that never touch one another; they always run the same distance apart from one another.
- Right Angle – An angle that measures 90 degrees.
- Acute Angle – This is an angle that measures less than 90 degrees.
- Obtuse Angle – This type of angle measures more than 90 degrees.
- Vertex Point – This is where two line segments intersect or meet, forming an angle.
- Scalene Triangle – A triangle with this name is none other than a triangle whose three sides all have different lengths.
- Isosceles Triangle – This triangle has two sides of equal length, and it also has two equal internal angles.
- Equilateral Triangle – This type of triangle has three sides that are all equal. Its internal angles are also all equal, each measuring 60 degrees.
- Radius – This refers to a line segment running from the center of a circle to any point on the circle's circumference.
- Diameter – This refers to a line segment (or its length) that joins two points on a circle's circumference and passes through the circle's center. The diameter is twice the length of the radius.
- Circumference – This refers to the distance around a circle, also known as the circle's perimeter.
- Chord – A line segment that joins two points on a curve.
- Arc – This refers to a part of a curve.

Now that you have come in contact with some common geometry terms and their definitions, do you remember any of them? Do any of them ring a bell, or was a bell what you were waiting on when geometry class started so you could run away from all the shapes? Triangles, circles, and angles, oh my! If you do remember, then this may have been the refresher that you needed to be able to help your son or daughter out in their geometry class.

If you still seem a little rusty, then you can always pick up geometry as a hobby and relearn it with your high school child. Try playing a few games from the Internet that reinforce the geometry lessons. Playing together could give you and your child a chance to bond over something neither of you is a big fan of. You would not only be helping with geometry but also with a life lesson: from time to time, we all must do things that we do not particularly like.
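The circle terms above are tied together by simple formulas: the diameter is twice the radius, and the circumference is π times the diameter (equivalently 2πr; the formula itself is standard, though not stated in the list). Here is a short Python sketch, added as an illustration, that puts them side by side.

import math

radius = 5.0                        # any radius will do for the illustration
diameter = 2 * radius               # the diameter is twice the radius
circumference = math.pi * diameter  # the distance around the circle

print(diameter)                  # 10.0
print(round(circumference, 2))   # 31.42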
http://grammar.yourdictionary.com/word-lists/geometry-terms-and-their-definitions.html
Investigation of a triangle formed by the medians of a given original triangle

Hee Jung Kim

A median of a triangle is a line segment connecting a vertex of the triangle with the midpoint of the side opposite that vertex. There are a few properties related to medians.

1. The three medians intersect at a single point regardless of the kind of triangle. The point where they meet is called the centroid of the triangle. The centroid divides each median in the ratio 2:1, with the longer part on the vertex side. See the proof.

2. Each median divides the triangle into two smaller triangles which have the same area. See GSP.

3. The medians divide the triangle into six smaller triangles inside the given triangle. These triangles have the same area. That is why the intersection point, as the center of gravity, is called the (area) centroid. See GSP.

Let's construct a second triangle with three sides having the lengths of the three medians from the first triangle.

Step 1. Construct the three medians of the original triangle.

Step 2. Construct a line passing through the midpoint P parallel to the median RC.

Step 3. Construct a line segment R'P which has the same length as the median RC, where R' is an intersection point of this parallel line with the circle centered at P whose radius is the length of RC.

Step 4. In the same manner, construct AR'.

Step 5. We have constructed a second triangle AR'P whose sides are the three medians of the original triangle ABC.

Let's find some relationships between the original triangle ABC and the constructed triangle AR'P.

1. The ratio of the area of the original triangle to the area of the constructed triangle is constant (1.3333, that is, 4/3).

2. The perimeter of the original triangle is greater than that of the constructed triangle. Therefore the ratio of the perimeter of the original triangle to the perimeter of the constructed triangle is greater than 1.

3. The two triangles are neither congruent nor similar.

I show a simple visual proof of the theorem that the ratio of the area of the original triangle to the constructed triangle is constant. (For the proof we should define a mid-segment of a triangle as a segment joining the midpoints of two sides of the triangle, and prove that the mid-segment of a triangle is parallel to the third side and is congruent to one half of the third side. Moreover, the four triangles formed by the three mid-segments of a triangle are all congruent.)
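The constant area ratio is easy to check numerically. The Python sketch below is my addition (the vertex coordinates are arbitrary): it computes the three median lengths of a triangle, uses Heron's formula for the area of the triangle whose sides have those lengths, and prints the ratio, which comes out as 4/3 ≈ 1.3333 regardless of the starting triangle.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def heron_area(a, b, c):
    # Area of a triangle from its three side lengths.
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# An arbitrary original triangle ABC.
A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)

# Lengths of the three medians (vertex to midpoint of the opposite side).
medians = [dist(A, midpoint(B, C)),
           dist(B, midpoint(C, A)),
           dist(C, midpoint(A, B))]

original = heron_area(dist(A, B), dist(B, C), dist(C, A))
constructed = heron_area(*medians)

print(round(original / constructed, 4))  # 1.3333, i.e. the ratio 4:3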
http://jwilson.coe.uga.edu/EMAT6680Fa08/KimH/assignment6hjk/assignment6.html
A-level Physics/Glossary of Terms

Definitions of keywords and terms that you will need to know.

- Absolute zero - Zero on the thermodynamic temperature scale, or 0 K (kelvin), where a substance has minimum internal energy; it is the coldest possible temperature and is equal to -273.15 degrees Celsius.
- Absorption spectrum - A spectrum of dark lines across the pattern of spectral colours, produced when light passes through a gas and the gas absorbs certain frequencies depending on the elements in the gas.
- Acceleration - The (instantaneous) rate of change of velocity with respect to time.
- Acceleration of free fall (g) - The acceleration of a body falling under gravity (9.81 ms-2 on Earth).
- Ammeter - A device used to measure the electric current in a circuit. It is connected in series with the components.
- Amount of substance - An SI quantity, measured in moles (mol).
- Ampere - The SI unit for electric current.
- Amplitude - The maximum displacement of a wave from its rest/mean position (measured in metres).
- Antinode - A point of maximum amplitude along a stationary wave, caused by constructive interference.
- Coulomb's Law - The force between two charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them.
- Couple - Two equal, opposite and parallel forces which create a rotational force.
- Displacement - A vector quantity: the distance of an object from its initial position, in a given direction.
- Density - The mass of a body per unit volume.
- Decay Constant - The probability of decay of a nucleus per unit time.
- Electric field strength (E) - The force that a unit charge would experience at a specified point. Measured in volts per metre or newtons per coulomb.
- Electric potential (V) - The energy that a unit charge would have at a specified point. Measured in volts.
- Energy - The stored ability to do work.
- Extension (x) - The change in length of an object when a force is applied to it.
- Faraday's Law - The emf induced in a conductor is directly proportional to the rate at which the magnetic flux changes.
- Force - That which causes a mass to change its state of motion.
- Frequency - The number of waves that pass a fixed point in a unit of time.
- Gravitational Field Strength (g) - The force that a unit mass would experience at a specified point, or force per unit mass at a set point in a gravitational field. Measured in metres per second per second or newtons per kilogramme.
- Gravitational Potential - The energy that a unit mass would have at a specified point. Measured in joules per kilogramme.
- Gravitational Potential Energy - The energy an object has due to its relative position above the ground. Found by mass x gravity (or gravitational field strength) x height.
- Heat - A form of energy transfer, also known as 'Thermal Energy'.
- Hooke's Law - An approximation that states that the extension of a spring is in direct proportion with the load added to it as long as this load does not exceed the elastic limit.
- Instantaneous acceleration - Acceleration at a specific time; the slope of the tangent to a velocity-time graph.
- Instantaneous position - The position of an object at a specific time.
- Instantaneous velocity - The slope of the tangent to a position-time graph.
- Joule - The SI unit of work done, or energy. One joule is the work done when a force of one newton moves an object one metre.
- Kinetic Energy - The energy an object possesses due to its motion, given by KE = 0.5 x mass x velocity².
- Lenz's Law - An induced electromotive force (emf) always gives rise to a current whose magnetic field opposes the original change in magnetic flux.
- Newton (N) - The unit in which force is measured. One newton is the force required to give a mass of 1 kg an acceleration of 1 ms-2.
- Period (T) - The time taken for one complete oscillation. Denoted by 'T'. T = 1/f.
- Power - The rate at which work is done.
- Pressure - The load applied to an object per unit surface area.
- Potential difference - The work done in moving a unit positive charge from one point to the other. The unit is the volt.
- Q or q - Often used as the symbol for charge in equations.
- Resistivity - The property of a material that measures its resistance to electric current. It is defined as the resistance a wire of the material would have if it had a cross-sectional area of one square metre and a length of one metre.
- Radian - The angle subtended at the centre of a circle when the arc length is equal in length to the radius.
- Scalar - A quantity with magnitude but no direction.
- Speed - A scalar quantity: speed = distance / time. (NB: s can also mean displacement.)
- Stopping Distance - Stopping distance = thinking distance + braking distance, where thinking distance (the distance travelled while reacting) = time taken to react x velocity, and braking distance is the distance travelled while braking.
- Temperature - An SI quantity, measured in kelvin (K).
- Tensile force - The forces applied to a material (usually a wire) on two opposite sides in order to stretch it. Both forces' values are the same as the tensile force value.
- Tensile stress - The tensile force per unit cross-sectional area.
- Terminal Velocity - The maximum velocity a body can reach: when the resistive forces equal the driving force, acceleration = 0, so the body cannot travel any faster.
- Thermistor - An electrical component that changes its resistance depending on its temperature.
- Thinking distance - The distance travelled from seeing the need to stop to applying the brakes.
- Threshold frequency - The lowest frequency of electromagnetic radiation that will result in the emission of photoelectrons from a specified metal surface.
- Thrust - A type of force due to an engine (usually a forward force).
- Time interval (t) - An SI quantity, measured in seconds (s).
- Torque / moment - Moment = force x perpendicular distance from the pivot to the line of action of the force. Torque = one of the forces x the distance between them.
- Transverse Wave - A progressive wave that transfers energy as a result of oscillations/vibrations perpendicular to the direction of travel.
- Triangle of forces - If three forces acting at a point can be represented by the sides of a triangle, the forces are in equilibrium.
- Turning forces - More than one force that, if unbalanced, will cause a rotation.
- Ultimate tensile strength - The maximum tensile force that can be applied to an object before it breaks.
- Ultimate tensile stress - The maximum stress that can be applied to an object before it breaks.
- Ultraviolet - A form of electromagnetic wave (wavelengths 10⁻⁹ m to 3.7×10⁻⁷ m). It may cause sun tanning. Usually classified into three categories: UV-A, UV-B and UV-C.
- Upthrust - A force experienced due to the pressure difference of the fluid at the top and bottom of the immersed portion of a body.
- Vector - A quantity with magnitude and direction.
- Velocity - The (instantaneous) rate of change of displacement with respect to time. Velocity is a vector.
- Velocity-time graph - A motion graph which shows velocity against time for a given body.
- Volt (V) - The unit of potential difference (p.d.) or electromotive force (e.m.f.).
- Voltmeter - A device used to measure the potential difference across a component. It is connected in parallel across the component.
- Volume - A physical quantity representing how much 3D space an object occupies, measured in cubic metres (m³).
- Watt (W) - The unit of power; power = energy / time.
- Wave - A series of vibrations that transfer energy from one place to another.
- Wavelength - The smallest distance between one point of a wave and the identical point of the next wave, measured in metres (m).
- Wave-particle duality - The theory which states that all objects can exhibit both wave and particle properties.
- Weight - The gravitational force acting on a body, measured in newtons (N): weight = mass × gravitational field strength.
- Work done - The energy transferred when an object is moved through a distance by a force. Calculated by multiplying the force by the distance moved in the direction of the force. Alternatively, work done = transfer of energy; i.e., work is done when energy is transferred from one form to another. [OCR do not accept this definition if asked "Define work done by a force".]
- Work function energy (Φ) - The minimum energy required for a material to release an electron, measured in joules (J).
- X-rays - A form of electromagnetic wave (wavelengths 10⁻¹² m to 10⁻⁷ m), used in X-ray photography.
- Young's double slit experiment - An experiment to demonstrate the wave nature of light via superposition and interference.
- Young modulus - Stress per unit strain; units: pascals (N/m²).
http://en.wikibooks.org/wiki/A-level_Physics/Glossary_of_Terms
Geometry for Nursing School Entrance Exam Study Guide (page 4)
Geometry questions cover points, lines, planes, angles, triangles, rectangles, squares, and circles. You may be asked to determine the area or perimeter of a particular shape, the size of an angle, the length of a line, and so forth. Some word problems may also involve geometry.
Points, Lines, and Planes
What Is a Point? A point has position but no size or dimension. It is usually represented by a dot named with an uppercase letter.
What Is a Line? A line consists of an infinite number of points that extend endlessly in both directions. A line can be named in two ways:
- By a letter at one end (typically in lowercase): l
- By two points on the line
The following terminology is frequently used on math tests:
- Points are collinear if they lie on the same line. For example, points J, U, D, and I on a single line are collinear.
- A line segment is a section of a line with two endpoints.
- The midpoint is a point on a line segment that divides it into two line segments of equal length. M is the midpoint of line segment AB. Two line segments of the same length are said to be congruent. Congruent line segments are indicated by the same mark on each segment.
- A line segment (or line) that divides another line segment into two congruent line segments is said to bisect it.
- A ray is a section of a line that has one endpoint.
What Is a Plane? A plane is like a flat surface with no thickness. Although a plane extends endlessly in all directions, it is usually represented by a four-sided figure and named by an uppercase letter in a corner of the plane: K. Points are coplanar if they lie on the same plane. Points A and B are coplanar.
An angle is formed when two lines, segments, or rays meet at a point. The lines are called the sides of the angle, and the point where they meet is called the vertex of the angle. The symbol used to indicate an angle is ∠. There are three ways to name an angle:
- By the letter that labels the vertex: ∠B
- By the three letters that label the angle: ∠ABC or ∠CBA, with the vertex letter in the middle
- By the number inside the vertex: ∠1
An angle's size is based on the opening between its sides. Size is measured in degrees (°). The smaller the angle, the fewer degrees it has. Angles are classified by size; an arc drawn between the sides shows which of two possible angles is indicated.
Special Angle Pairs
- Congruent angles: Two angles that have the same degree measure. Congruent angles are indicated by identical markings. The symbol ≅ is used to indicate that two angles are congruent: ∠A ≅ ∠B.
- Complementary angles: Two angles whose sum is 90°. If ∠ABD and ∠DBC are complementary angles, ∠ABD is the complement of ∠DBC, and vice versa.
- Supplementary angles: Two angles whose sum is 180°. If ∠ABD and ∠DBC are supplementary angles, ∠ABD is the supplement of ∠DBC, and vice versa.
- Vertical angles: Two angles that are opposite each other when two lines cross. Two sets of vertical angles are formed: ∠1 and ∠4, and ∠2 and ∠3.
Hint: To avoid confusion between complementary and supplementary: C comes before S in the alphabet, and 90 comes before 180.
- Complementary: 90°
- Supplementary: 180°
Vertical angles are congruent. When two lines cross, the adjacent angles are supplementary and the sum of all four angles is 360°. Angle-pair problems tend to ask for an angle's complement or supplement.
Example: If the measure of ∠2 = 70°, what are the measures of the other three angles?
- ∠2 ≅ ∠3 because they're vertical angles. Therefore, ∠3 = 70°.
- ∠1 and ∠2 are adjacent angles and therefore supplementary. Thus, ∠1 = 110° (180° − 70° = 110°).
- ∠1 ≅ ∠4 because they're also vertical angles. Therefore, ∠4 = 110°.
Check: Add the angles to be sure their sum is 360°.
To solve geometry problems more easily, draw a picture if one is not provided. Try to draw the picture to scale. As the problem presents information about the size of an angle or line segment, label the corresponding part of your picture to reflect the given information. As you begin to find the missing information, label your picture accordingly.
Special Line Pairs
Parallel lines lie in the same plane and never cross at any point. Arrowheads drawn on the lines indicate that they are parallel. The symbol || is used to indicate that two lines are parallel: l || m. When two parallel lines are crossed by another line, two groups of four angles each are formed. One group consists of ∠1, ∠2, ∠3, and ∠4; the other group contains ∠5, ∠6, ∠7, and ∠8. These angles have special relationships:
- The four obtuse angles are always congruent: ∠1 ≅ ∠4 ≅ ∠5 ≅ ∠8.
- The four acute angles are always congruent: ∠2 ≅ ∠3 ≅ ∠6 ≅ ∠7.
- The sum of any one acute angle and any one obtuse angle is always 180°, because the acute angles lie on the same line as the obtuse angles.
Don't be fooled into thinking two lines are parallel just because they look parallel. Either the lines must be marked with similar arrowheads or there must be an angle pair as just described.
Perpendicular lines lie in the same plane and cross to form four right angles. A little box where the lines cross indicates a right angle. Because vertical angles are equal and the sum of all four angles is 360°, each of the four angles is a right angle of 90°; however, only one little box is needed to indicate this. The symbol ⊥ is used to indicate that two lines are perpendicular: l ⊥ m.
Don't be fooled into thinking two lines are perpendicular just because they look perpendicular. The problem must indicate the presence of a right angle (by stating that an angle measures 90° or by the little right-angle box in a corresponding diagram), or you must be able to prove the presence of a 90° angle.
A polygon is a closed, plane (flat) figure formed by three or more connected line segments that don't cross each other. Familiarize yourself with the most common polygons appearing on standardized tests—and in life.
Perimeter is the distance around a polygon. The word perimeter is derived from peri, which means around (as in periscope and peripheral vision), and meter, which means measure. Thus perimeter is the measure around something. There are many everyday applications of perimeter. For instance, a carpenter measures the perimeter of a room to determine how many feet of ceiling molding she needs. A farmer measures the perimeter of a field to determine how many feet of fencing he needs to surround it. Perimeter is measured in length units, like feet, yards, inches, meters, etc.
Example: To find the perimeter of a polygon, write down the length of each side and add.
Note: The notion of perimeter also applies to a circle; however, the perimeter of a circle is referred to as its circumference. We will take a closer look at circles and circumference later in this chapter.
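Before moving on to area, two computations from this section are easy to verify in code: the crossing-lines example and perimeter as a plain sum of side lengths. An illustrative Python sketch (the function names are ours, not the guide's):

```python
def angles_at_crossing(angle2):
    """Given one angle formed where two lines cross, return all four.

    Numbering follows the example: 1-4 and 2-3 are vertical pairs."""
    angle3 = angle2          # vertical angles are congruent
    angle1 = 180 - angle2    # adjacent angles are supplementary
    angle4 = angle1          # vertical to angle 1
    assert angle1 + angle2 + angle3 + angle4 == 360
    return angle1, angle2, angle3, angle4

def perimeter(sides):
    """Perimeter is just the sum of the side lengths."""
    return sum(sides)

print(angles_at_crossing(70))   # (110, 70, 70, 110)
print(perimeter([3, 4, 5]))     # 12
```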
Area is the total amount of space taken up by a figure's surface. Area is measured in square units. For instance, a square that is 1 unit on each side covers 1 square unit. If the unit of measurement for each side is feet, for example, then the area is measured in square feet; other possibilities are units like square inches, square miles, square meters, and so on.
You could measure the area of any figure by counting the number of square units the figure occupies. Some figures are easy to measure because the square units fit into them evenly, while others are more difficult because the square units don't fit into them evenly. Because it's not always practical to measure a particular figure's area by counting the number of square units it occupies, an area formula is used. As each figure is discussed, you'll learn its area formula. Although there are perimeter formulas as well, you don't really need them (except for circles) if you understand the perimeter concept: It is merely the sum of the lengths of the sides.
A triangle is a polygon with three sides. The symbol used to indicate a triangle is Δ. Each vertex—the point at which two lines meet—is named by a capital letter. The triangle is named by the three letters at the vertices, usually in alphabetical order: ΔABC.
There are two ways to refer to a side of a triangle:
- By the letters at each end of the side: AB
- By the letter—typically a lowercase letter—next to the side: c
Notice that the name of the side is the same as the name of the angle opposite it, except the angle's name is a capital letter.
There are two ways to refer to an angle of a triangle:
- By the letter at the vertex: ∠A
- By the triangle's three letters, with that angle's vertex letter in the middle: ∠BAC or ∠CAB
Types of Triangles
A triangle can be classified by the size of its angles and sides.
Equilateral triangle:
- three congruent angles, each 60°
- three congruent sides
Hint to help you remember: The word equilateral comes from equi, meaning equal, and lat, meaning side. Thus, all equal sides.
Isosceles triangle:
- two congruent angles, called base angles; the third angle is the vertex angle.
- Sides opposite the base angles are also congruent.
- An equilateral triangle is also isosceles, since it always has two congruent angles.
Right triangle:
- one right (90°) angle, the largest angle in the triangle
- The side opposite the right angle is the hypotenuse, the longest side of the triangle. (Hint: The word hypotenuse reminds us of hippopotamus, a very large animal.)
- The other two sides are called legs.
Area of a Triangle
To find the area of a triangle, use this formula: area = ½ × base × height.
Although any side of a triangle may be called its base, it's often easiest to use the side on the bottom. To use another side, rotate the page and view the triangle from another perspective. A triangle's height is represented by a perpendicular line drawn from the angle opposite the base to the base. Depending on the triangle, the height may be inside, outside, or on the triangle; sometimes the base must be extended so the height can be drawn perpendicular to it. In a right triangle, one leg may be its base and the other its height.
Hint: Think of a triangle as being half a rectangle. The area of that triangle is half the area of the rectangle.
Example: Find the area of a triangle with a 2-inch base and a 3-inch height.
- Draw the triangle as close to scale as you can.
- Label the size of the base and height.
- Write the area formula; then substitute the base and height numbers into it: area = ½ × 2 × 3 = 3.
- The area of the triangle is 3 square inches.
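The area formula translates directly into code. An illustrative Python sketch (the function name is ours):

```python
def triangle_area(base, height):
    # area = 1/2 * base * height; works for any side chosen as the base
    return 0.5 * base * height

print(triangle_area(2, 3))  # 3.0 -> 3 square inches, as in the example
```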
The following rules tend to appear more frequently on standardized tests than other rules. A typical test question follows each rule.
Example: One base angle of an isosceles triangle is 30°. Find the vertex angle.
- Draw a picture of an isosceles triangle. Drawing it to scale helps: Since it is an isosceles triangle, draw both base angles the same size (as close to 30° as you can) and make sure the sides opposite them are the same length. Label one base angle as 30°.
- Since the base angles are congruent, label the other base angle as 30°.
- There are two steps needed to find the vertex angle:
- Add the two base angles together: 30° + 30° = 60°.
- The sum of all three angles is 180°. To find the vertex angle, subtract the sum of the two base angles (60°) from 180°: 180° − 60° = 120°. Thus, the vertex angle is 120°.
Check: Add all three angles together to make sure their sum is 180°: 30° + 30° + 120° = 180°.
Example: In a triangle with two known angles, which side is the shortest?
- Determine the size of ∠A, the missing angle, by adding the two known angles and then subtracting their sum from 180°. In the pictured triangle, this gives ∠A = 44°.
- Since ∠A is the smallest angle, side BC, which is opposite ∠A, is the shortest side.
Example: What is the perimeter of a right triangle with one leg of length 3 and a hypotenuse of length 5?
- Since the perimeter is the sum of the lengths of the sides, we must first find the missing side. Use the Pythagorean theorem, a² + b² = c², since this is a right triangle.
- Substitute the given sides for two of the letters. Remember: Side c is always the hypotenuse: 3² + b² = 5², that is, 9 + b² = 25.
- To solve this equation, subtract 9 from both sides: b² = 16.
- Then take the square root of both sides. Thus, the missing side has a length of 4 units.
- Adding the three sides (3 + 4 + 5) yields a perimeter of 12.
A radical is simplified if there is no perfect square factor of the radicand. For example, √10 is simplified because 10 has no perfect square factors. But √20 is not simplified because 20 has a perfect square factor of 4. In order to simplify a radical, rewrite the radical as the product of two radicals, one of which is the largest perfect square factor of the radicand. The square root of a perfect square always simplifies to a rational number. Simplify the perfect-square radical to get your final answer.
Example: Simplify √20. √20 = √4 × √5 = 2√5.
A quadrilateral is a four-sided polygon. The quadrilaterals most likely to appear on standardized tests (and in everyday life) have something in common besides having four sides:
- Opposite sides are the same size and parallel.
- Opposite angles are the same size.
However, each quadrilateral has its own distinguishing characteristics. The naming conventions for quadrilaterals are similar to those for triangles:
- The figure is named by the letters at its four consecutive corners, usually in alphabetical order: rectangle ABCD.
- A side is named by the letters at its ends: side AB.
- An angle is named by its vertex letter: ∠A.
The sum of the angles of a quadrilateral is 360°: ∠A + ∠B + ∠C + ∠D = 360°.
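The missing-side and radical-simplification steps above can likewise be checked in code. An illustrative Python sketch (helper names are ours):

```python
from math import isqrt, sqrt

def missing_leg(hypotenuse, leg):
    # Pythagorean theorem: a^2 + b^2 = c^2, solved for the unknown leg
    return sqrt(hypotenuse ** 2 - leg ** 2)

def simplify_radical(n):
    """Return (k, m) with sqrt(n) = k * sqrt(m) and m square-free."""
    for f in range(isqrt(n), 1, -1):       # largest perfect-square factor first
        if n % (f * f) == 0:
            return f, n // (f * f)
    return 1, n

print(missing_leg(5, 3))      # 4.0 -> perimeter 3 + 4 + 5 = 12
print(simplify_radical(20))   # (2, 5) -> 2*sqrt(5)
print(simplify_radical(10))   # (1, 10) -> already simplified
```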
http://www.education.com/reference/article/geometry4/?page=4
The task is to divide a given triangle into three regions of equal area, using line segments and points. Several different problems can be posed. Here are four:
Problem 1. Given a triangle ABC and a random point P on the triangle, construct two lines through P that divide the triangle into three regions with equal areas. Because P can be anywhere on the perimeter of the triangle, there are several cases to consider; we will enumerate 7 of them below.
Problem 2. Given a triangle ABC, find a point D such that line segments AD, BD, and CD trisect the area of the triangle into three regions with equal areas. Define D and prove that the triangle is divided into three regions of equal area. Show a construction for finding D.
Problem 3. Given a triangle ABC, find a point E such that line segments AE, BE, and CE trisect the area of the triangle into three regions with equal areas. Show a construction and prove that it divides the triangle into three regions of equal area.
Problem 4. Given a triangle ABC, construct two line segments parallel to the base BC that divide the triangle into three regions with equal areas. Prove that the construction divides the triangle into three regions of equal area.
If a point P is given on the side of the triangle, then there are seven ways to construct three equal areas with lines drawn from P. The seven patterns can be pictured with P moving from right to left along the base. In the first case, P is the base vertex on the right-hand side of the triangle, and in the seventh case, P is the base vertex on the left-hand side. The construction in each of these two cases involves trisecting the segment on the opposite side of the triangle and connecting the vertex by segments to the two trisection points. Since we know at least five ways to construct the trisection of a segment, the construction task for these two cases can be assumed. Much needs to be discovered in the other cases.
For the third and fifth cases, P is located such that a segment drawn to the opposite vertex determines a triangle that is one-third the area of the original triangle. This means P is located at a trisection point on the base of the triangle. The remaining two-thirds of the original triangle must then be divided into two equal areas by a line from P to the opposite side, making two triangles. These triangles have the same height, so the endpoint of that line must be the midpoint of the side. That is, if P is a trisection point of the base, then the original triangle is divided into three equal areas by lines from P to the vertex and from P to the midpoint of the side along the two-thirds section.
This leaves three constructions to be determined: the second, fourth, and sixth in the sequence above. These correspond to the cases where P is neither a vertex nor a trisection point on the base. In the second case, P is in the right third of the base; in the fourth case, P is in the center third; and in the sixth case, P is in the left third. Clearly, the constructions for the second and sixth cases will be virtually the same. Each case can be completed using the result of a slightly different construction: given a triangle ABC with a point P on the base that is neither a vertex nor a trisection point, construct a line through P that cuts off a triangle one-third the area of ABC.
Note that if we know this construction, we can use it to construct the shaded triangles in cases two, four, and six. Case four would then be finished; cases two and six can be completed by determining another triangle with the same length base on the side opposite P.
Construction of a triangle with one-third of the area and base AP
For triangle ABC, construct the altitude and find its trisection points. (Again, we know at least five different constructions for trisecting a line segment.) The point P is given on base AC and is neither a vertex nor a trisection point of AC. Determine which end of the base is nearest to point P and construct a perpendicular to AC at that end. Construct a line parallel to AC through the nearest trisection point on the altitude, and construct a perpendicular segment from P to this line, meeting it at Q. The area of triangle AQC is one-third the area of triangle ABC.
Extend line AQ to intersect the perpendicular at C at point D. Triangle APD has the same area as triangle AQC, because AP/AC = PQ/CD and therefore (AP)(CD) = (PQ)(AC); each side of this equation is twice the area of the respective triangle.
Now any point on a line parallel to AC through D can serve as the vertex of a triangle with base AP whose area is one-third the area of triangle ABC. The intersection of this parallel with side AB determines a vertex E of the desired triangle. Draw line PE.
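As a numerical sanity check on the trisection-point case (lines from P to the opposite vertex and to the midpoint of the far side), here is an illustrative Python sketch using the shoelace formula and an arbitrary example triangle of our own choosing:

```python
def area(p, q, r):
    # shoelace formula for the area of a triangle given three vertices
    return abs((q[0]-p[0]) * (r[1]-p[1]) - (r[0]-p[0]) * (q[1]-p[1])) / 2

# example triangle and the trisection point P of base AC nearer to A
A, B, C = (0.0, 0.0), (2.0, 3.0), (6.0, 0.0)
P = (2.0, 0.0)                            # trisection point of AC
M = ((B[0]+C[0]) / 2, (B[1]+C[1]) / 2)    # midpoint of BC

total = area(A, B, C)                              # 9.0
print(area(A, B, P), area(P, B, M), area(P, M, C)) # 3.0 3.0 3.0, each total/3
```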
http://jwilson.coe.uga.edu/Texts.Folder/Halftri/trisect.html
A simple example is a chart whose vertical or horizontal axis has equally spaced increments that are labeled 1, 10, 100, 1000, instead of 1, 2, 3, 4. Each unit increase on the logarithmic scale thus represents an exponential increase in the underlying quantity for the given base (10, in this case).
Presentation of data on a logarithmic scale can be helpful when the data covers a large range of values. The use of the logarithms of the values rather than the actual values reduces a wide range to a more manageable size. Some of our senses operate in a logarithmic fashion (Weber–Fechner law), which makes logarithmic scales for these input quantities especially appropriate. In particular, our sense of hearing perceives equal ratios of frequencies as equal differences in pitch. In addition, studies of young children in an isolated tribe have shown logarithmic scales to be the most natural display of numbers by humans.
Definition and base
Logarithmic scales are either defined for ratios of the underlying quantity, or one has to agree to measure the quantity in fixed units. Deviating from these units means that the logarithmic measure will change by an additive constant. The base of the logarithm also has to be specified, unless the scale's value is considered to be a dimensional quantity expressed in generic (indefinite-base) logarithmic units.
On most logarithmic scales, small values (or ratios) of the underlying quantity correspond to negative values of the logarithmic measure. Well-known examples of such scales are:
- Richter magnitude scale and moment magnitude scale (MMS) for strength of earthquakes and movement in the earth;
- ban and deciban, for information or weight of evidence;
- bel and decibel and neper for acoustic power (loudness) and electric power;
- cent, minor second, major second, and octave for the relative pitch of notes in music;
- logit for odds in statistics;
- Palermo Technical Impact Hazard Scale;
- logarithmic timeline;
- counting f-stops for ratios of photographic exposure;
- rating low probabilities by the number of 'nines' in the decimal expansion of the probability of their not happening: for example, a system which will fail with a probability of 10⁻⁵ is 99.999% reliable, "five nines";
- entropy in thermodynamics;
- information in information theory;
- particle size distribution curves of soil.
Some logarithmic scales were designed such that large values (or ratios) of the underlying quantity correspond to small values of the logarithmic measure. Examples of such scales are:
- pH for acidity and alkalinity;
- stellar magnitude scale for brightness of stars;
- Krumbein scale for particle size in geology;
- absorbance of light by transparent samples.
Logarithmic units
Logarithmic units are abstract mathematical units that can be used to express any quantities (physical or mathematical) that are defined on a logarithmic scale, that is, as being proportional to the value of a logarithm function. In this article, a given logarithmic unit will be denoted using the notation [log n], where n is a positive real number, and [log ] here denotes the indefinite logarithm function Log().
Examples of logarithmic units include common units of information and entropy, such as the bit [log 2] and the byte 8[log 2] = [log 256], also the nat [log e] and the ban [log 10]; units of relative signal strength magnitude such as the decibel 0.1[log 10] and bel [log 10], neper [log e]; and other logarithmic-scale units such as the Richter scale point [log 10] or (more generally) the corresponding order-of-magnitude unit sometimes referred to as a factor of ten or decade (here meaning [log 10], not 10 years).
The motivation behind the concept of logarithmic units is that defining a quantity on a logarithmic scale in terms of a logarithm to a specific base amounts to making a (totally arbitrary) choice of a unit of measurement for that quantity, one that corresponds to the specific (and equally arbitrary) logarithm base that was selected. Due to the identity log_c(a) = log_c(b) · log_b(a), the logarithms of any given number a to two different bases (here b and c) differ only by the constant factor log_c(b). This constant factor can be considered to represent the conversion factor for converting a numerical representation of the pure (indefinite) logarithmic quantity Log(a) from one arbitrary unit of measurement (the [log c] unit) to another (the [log b] unit).
For example, Boltzmann's standard definition of entropy S = k ln W (where W is the number of ways of arranging a system and k is Boltzmann's constant) can also be written more simply as just S = Log(W), where "Log" here denotes the indefinite logarithm, and we let k = [log e]; that is, we identify the physical entropy unit k with the mathematical unit [log e]. This identity works because ln W · [log e] = Log(e^(ln W)) = Log(W). Thus, we can interpret Boltzmann's constant as being simply the expression (in terms of more standard physical units) of the abstract logarithmic unit [log e] that is needed to convert the dimensionless pure-number quantity ln W (which uses an arbitrary choice of base, namely e) to the more fundamental pure logarithmic quantity Log(W), which implies no particular choice of base, and thus no particular choice of physical unit for measuring entropy.
A logarithmic scale is also a graphical scale on one or both sides of a graph where a number x is printed at a distance c·log(x) from the point marked with the number 1. A slide rule has logarithmic scales, and nomograms often employ logarithmic scales. On a logarithmic scale an equal difference in order of magnitude is represented by an equal distance. The geometric mean of two numbers is midway between the numbers.
Logarithmic graph paper, before the advent of computer graphics, was a basic scientific tool. Plots on paper with one log scale can show up exponential laws, and on log-log paper power laws, as straight lines (see semi-log graph, log-log graph).
Comparing the scales
In a plot of x versus log10(x), two things stand out: first, log(x) increases quickly at first; by x = 3, log(x) is almost at 0.5 (it is useful to remember that √10 ≈ 3.16, so log10 reaches 0.5 just past 3). Second, log(x) grows ever more slowly as x approaches 10; this shows how logarithms can be used to 'tame' large numbers.
Logarithmic and semi-logarithmic plots and equations of lines
Log and semilog scales are best used to view two types of equations (for ease, the natural base e is used): Y = e^(−aX) and Y = X^b. In the first case, plotting the equation on a semilog scale (log Y versus X) gives log Y = −aX, which is linear. In the second case, plotting the equation on a log-log scale (log Y versus log X) gives log Y = b log X, which is linear.
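A quick numerical illustration of the two cases (illustrative Python, not part of the article): the slope of log Y against X is constant for the exponential law, and the slope of log Y against log X is constant for the power law.

```python
from math import exp, log

a, b = 0.7, 2.0
f = lambda x: exp(-a * x)   # exponential law Y = e^(-aX)
g = lambda x: x ** b        # power law Y = X^b

# semilog: log f(x) is linear in x, so its slope is the constant -a
print((log(f(2)) - log(f(1))) / (2 - 1))            # -0.7
# log-log: log g(x) is linear in log x, so its slope is the constant b
print((log(g(4)) - log(g(2))) / (log(4) - log(2)))  # 2.0
```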
When values that span large ranges need to be plotted, a logarithmic scale can provide a means of viewing the data that allows the values to be determined from the graph. The logarithmic scale is marked off in distances proportional to the logarithms of the values being represented. For example, suppose y takes the values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100. One option is to plot log10 of the values of y on a linear scale: the first value is log10(1) = 0; the second value is log10(2) = 0.301; the third value is log10(3) = 0.477; the fourth value is log10(4) = 0.602, and so on. The other option is to use logarithmic (or log, as it is also referred to) scaling on the vertical axis, so each value y is placed at the height log10(y) but labeled with y itself. Note that for y = 2 and 20, the heights correspond to 10^0.301 and 10^1.301; for y = 4 and 40, to 10^0.602 and 10^1.602. This is due to the law that log10(10y) = 1 + log10(y); so, knowing log10(2) = 0.301, the rest can be derived.
Values of y are easily picked off a logarithmically scaled axis, whereas values of y less than 10 are difficult to determine on a linear axis spanning 1 to 100; this confirms the earlier assertion that values spanning large ranges are more easily read from a logarithmically scaled graph. If both the vertical and horizontal axes of a plot are scaled logarithmically, the plot is referred to as a log-log plot.
Estimating values in a diagram with logarithmic scale
One method for accurate determination of values on a logarithmic axis is as follows:
- Measure the distance from the point on the scale to the closest decade line with lower value with a ruler.
- Divide this distance by the length of a decade (the length between two decade lines); call the result a.
- The value of your chosen point is now the value of the nearest decade line with lower value times 10^a.
Example: What is the value that lies halfway between the 10 and 100 decades on a logarithmic axis? Since it is the halfway point that is of interest, the quotient of steps 1 and 2 is 0.5. The halfway point's value is therefore 10^0.5 × 10 = 10^1.5 ≈ 31.62.
To estimate where a value lies within a decade on a logarithmic axis, use the following method:
- Measure the distance between consecutive decades with a ruler. You can use any units provided that you are consistent.
- Take log(value of interest / nearest lower-value decade) multiplied by the number determined in step one.
- Using the same units as in step 1, count as many units as resulted from step 2, starting at the lower decade.
Example: To determine where 17 is located on a logarithmic axis, first use a ruler to measure the distance between 10 and 100. Suppose the measurement is 30 mm (it can vary; ensure that the same scale is used throughout the rest of the process). Then [log(17/10)] × 30 ≈ 6.9, so x = 17 lies 6.9 mm after x = 10 along the axis.
Interpolating logarithmic values is very similar to interpolating linear values. In linear interpolation, values are determined through equal ratios. For example, a line that increases one ordinate (y-value) for every two abscissae (x-values) has a ratio (also known as slope or rise-over-run) of 1/2.
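Before moving on to interpolation, the decade-reading recipe above reduces to a one-line formula. An illustrative Python sketch (names are ours):

```python
from math import log10

def position_along_axis(value, lower_decade, decade_length):
    """Distance from the lower decade line to `value` on a base-10 log axis,
    in the same units as decade_length (e.g., mm)."""
    return log10(value / lower_decade) * decade_length

print(position_along_axis(17, 10, 30))  # ~6.9 mm past the 10 line
print(10 ** 0.5 * 10)                   # value halfway between 10 and 100: ~31.62
```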
To determine the ordinate or abscissa of a particular point, you must know the other value. For a line of slope 1/2 through the origin, the calculation of the ordinate corresponding to an abscissa of 12 is as follows: 1/2 = Y/12, where Y is the unknown ordinate. Using cross-multiplication, Y can be calculated and is equal to 6.
In logarithmic interpolation, a ratio of logarithmic values is set equal to a ratio of linear values. For example, consider a log base 10 scale graph of paper reams sold per day measuring 19/32 inch from 1 to 10. How many reams were sold in a day if the mark on the graph lies 11/32 inch above the 1 line? To solve this problem, it is necessary to use a basic logarithmic definition: log(A) − log(B) = log(A/B).
Decade lines, those values that denote powers of the log base, are also important in logarithmic interpolation. Locate the lower decade line. It is the closest decade line to the number you are evaluating that is lower than that number. Decade lines begin at 1. The next decade line is the first power of your log base. For log base 10, the first decade line is 1, the second is 10, the third is 100, and so on.
The ratio of linear values is the number of units from the lower decade line to the value of interest (11 units of 1/32 inch here, since the lower decade line in this example is 1) divided by the total number of units between the lower decade line and the upper decade line (the upper decade line is 10 in this example). Therefore, the linear ratio is 11/19. Notice that the units (1/32 inch) drop out of the ratio because both measurements are in the same units; conversion to a single unit before calculating the ratio is required if the measurements were made in different units.
The logarithmic ratio uses the same graphical measurements as the linear ratio. The difference between the log of the upper decade line (10) and the log of the lower decade line (1) represents the same graphical distance as the total number of units between the two decade lines in the linear ratio (19 thirty-seconds of an inch). Therefore, the lower part of the logarithmic ratio (the bottom of the fraction) is log(10) − log(1). The upper part of the logarithmic ratio (the top of the fraction) represents the same graphical distance as the number of units between the value of interest (number of reams of paper sold) and the lower decade line in the linear ratio (11 thirty-seconds of an inch). The unknown in this ratio is the value of interest, which we will call X. Therefore, the top of the fraction is log(X) − log(1), and the logarithmic ratio is [log(X) − log(1)] / [log(10) − log(1)].
The linear ratio is equal to the logarithmic ratio. Therefore, the equation required to determine the number of paper reams sold in a particular day is 11/19 = [log(X) − log(1)] / [log(10) − log(1)]. This equation can be rewritten using the logarithmic definition mentioned above: 11/19 = log(X/1) / log(10). Since log(10) = 1, this gives 11/19 = log(X/1). To remove the "log" from the right side of the equation, both sides must be used as exponents for the number 10, meaning 10 to the power of 11/19 and 10 to the power of log(X/1). The "10 to the power of" function and the "log" function are inverses and cancel each other out, leaving 10^(11/19) = X/1. Now both sides must be multiplied by 1; while the 1 drops out of this equation, it is important to note that X is divided by the value of the lower decade line. If this example involved values between 10 and 100, the equation would include X/10 instead of X/1.
Evaluating gives 10^(11/19) = X, so X ≈ 3.793 reams of paper.
Reference
- Dehaene, S.; Izard, V.; Spelke, E.; Pica, P. (2008). "Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures". Science 320 (5880): 1217–20. doi:10.1126/science.1156540. PMC 2610411. PMID 18511690. (Reported in "Slide Rule Sense: Amazonian Indigenous Culture Demonstrates Universal Mapping Of Number Onto Space", ScienceDaily, 2008-05-30.)
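The whole ream calculation collapses to: value = lower decade × base^(measured distance / decade length). An illustrative Python sketch (the function name is ours):

```python
def log_interpolate(distance, decade_length, lower_decade, base=10):
    """Value at `distance` graph units past `lower_decade`, where one full
    decade spans `decade_length` graph units."""
    return lower_decade * base ** (distance / decade_length)

print(log_interpolate(11, 19, 1))   # 10**(11/19) ~ 3.793 reams
print(log_interpolate(11, 19, 10))  # same mark one decade up: ~37.93
```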
http://en.wikipedia.org/wiki/Logarithmic_scale
The division of a line segment whose total length is a + b into two parts a and b, where the ratio of a + b to a is equal to the ratio of a to b, is known as the golden ratio. The two ratios are both approximately equal to 1.618..., which is called the golden ratio constant and is usually denoted by φ: (a + b)/a = a/b = φ ≈ 1.618.
The concept of golden ratio division appeared more than 2400 years ago, as evidenced in art and architecture. It is possible that the magical golden ratio divisions of parts are rather closely associated with the notion of beauty in pleasing, harmonious proportions, expressed in different areas of knowledge by biologists, artists, musicians, historians, architects, psychologists, scientists, and even mystics. For example, the Greek sculptor Phidias (490–430 BC) made the Parthenon statues in a way that seems to embody the golden ratio; Plato (427–347 BC), in his Timaeus, describes the five possible regular solids, known as the Platonic solids (the tetrahedron, cube, octahedron, dodecahedron, and icosahedron), some of which are related to the golden ratio.
The properties of the golden ratio were mentioned in the works of the ancient Greeks Pythagoras (c. 580–c. 500 BC) and Euclid (c. 325–c. 265 BC), the Italian mathematician Leonardo of Pisa (1170s or 1180s–1250), and the Renaissance astronomer J. Kepler (1571–1630). Specifically, in book VI of the Elements, Euclid gave the following definition of the golden ratio: "A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the less". Therein Euclid showed that the "mean and extreme ratio", the name used for the golden ratio until about the 18th century, is an irrational number.
In 1509 L. Pacioli published the book De Divina Proportione, which gave new impetus to the theory of the golden ratio; in particular, he illustrated the golden ratio as applied to human faces by artists, architects, scientists, and mystics. G. Cardano (1545) mentioned the golden ratio in his famous book Ars Magna, where he solved quadratic and cubic equations and was the first to explicitly make calculations with complex numbers. Later M. Mästlin (1597) evaluated the inverse of the golden ratio approximately as 0.6180340. J. Kepler (1608) showed that the ratios of Fibonacci numbers approximate the value of the golden ratio and described the golden ratio as a "precious jewel". R. Simson (1753) gave a simple limit representation of the golden ratio based on its very simple continued fraction φ = 1 + 1/(1 + 1/(1 + 1/(1 + ...))). M. Ohm (1835) gave the first known use of the term "golden section", believed to have originated earlier in the century from an unknown source. J. Sulley (1875) first used the term "golden ratio" in English, and G. Chrystal (1898) first used this term in a mathematical context.
The symbol φ (phi) for the notation of the golden ratio was suggested by the American mathematician M. Barr in 1909. Phi is the first Greek letter in the name of the Greek sculptor Phidias.
Throughout history many people have tried to attribute some kind of magic or cult meaning to the golden ratio as a valid description of nature, and attempted to prove that the golden ratio was incorporated into different architecture and art objects (like the Great Pyramid, the Parthenon, old buildings, sculptures and pictures). But modern investigations (for example, G. Markowsky (1992), C. Falbo (2005), and A. Olariu (2007)) showed that these are mostly misconceptions: the differences between the golden ratio and the real ratios of these objects in many cases reach 20–30% or more.
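Whatever its aesthetic status, φ itself is simple to compute. A short, illustrative Python check of the closed form (1 + √5)/2 and of Kepler's observation about Fibonacci ratios:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2      # the positive solution of (a+b)/a = a/b
print(phi)                    # 1.618033988749895

# ratios of consecutive Fibonacci numbers approach phi (Kepler, 1608)
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b
print(b / a)                  # ~1.6180339887, already close to phi
```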
The golden ratio has many remarkable properties related to its quasi symmetry. It satisfies the quadratic equation x² − x − 1 = 0, which has solutions x₁ = (1 + √5)/2 = φ and x₂ = (1 − √5)/2. The absolute value of the second solution, (√5 − 1)/2 ≈ 0.618, is called the golden ratio conjugate. These ratios satisfy the relations 1/φ = φ − 1 and φ(φ − 1) = 1. Applications of the golden ratio also include algebraic coding theory, linear sequential circuits, quasicrystals, phyllotaxis, biomathematics, and computer science.
Pi
The constant π is the most frequently encountered classical constant in mathematics and the natural sciences. Initially it was defined as the ratio of the length of a circle's circumference to its diameter. Many further interpretations and applications in practically all fields of qualitative science followed; for instance, π appears in the formulas for the surface areas and volumes of simple geometrical objects, such as the area πr² of a circle and the volume (4/3)πr³ of a sphere.
Different approximations of π have been known since antiquity, or before, when people discovered some basic properties of circles. The design of Egyptian pyramids (c. 3000 BC) incorporated π in numerous places. The Egyptian scribe Ahmes (Middle Kingdom papyrus, c. 2000 BC) wrote the oldest known text to give an approximate value for π, as (16/9)² = 256/81 ≈ 3.1605. Babylonian mathematicians (19th century BC) were using an estimation of π as 25/8 = 3.125, which is within 0.53% of the exact value. Chinese sources (c. 1200 BC) and the Biblical verse I Kings 7:23 (c. 971–852 BC) gave the estimation of π as 3. Archimedes (Greece, c. 240 BC) knew that 223/71 < π < 22/7 and gave the estimation of π as 3.1418.... Aryabhata (India, 5th century) gave the approximation of π as 62832/20000, correct to four decimal places. Zu Chongzhi (China, 5th century) gave two approximations of π, 355/113 and 22/7, and restricted π between 3.1415926 and 3.1415927.
A reinvestigation of π began with the building of corresponding series and other calculus-related formulas for this constant. Simultaneously, scientists continued to evaluate π with greater and greater accuracy and proved different structural properties of π. Madhava of Sangamagrama (India, 1350–1425) found the infinite series expansion π/4 = 1 − 1/3 + 1/5 − 1/7 + ... (currently named the Gregory–Leibniz series or Leibniz formula) and evaluated π with 11 correct digits. Ghyath ad-din Jamshid Kashani (Persia, 1424) evaluated π with 16 correct digits. F. Viète (1593) represented 2/π as an infinite product of nested square roots of 2. Ludolph van Ceulen (Germany, 1610) evaluated 35 decimal places of π. J. Wallis (1655) represented π/2 as the infinite product (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)⋯. J. Machin (England, 1706) developed a quickly converging series for π, based on the formula π/4 = 4 arctan(1/5) − arctan(1/239), and used it to evaluate 100 correct digits. W. Jones (1706) introduced the symbol π for notation of the Pi constant. L. Euler (1737) adopted the symbol π and made it standard. C. Goldbach (1742) also widely used the symbol π. J. H. Lambert (1761) established that π is an irrational number. J. Vega (Slovenia, 1789) improved J. Machin's 1706 formula and calculated 126 correct digits for π. W. Rutherford (1841) calculated 152 correct digits for π. After 20 years of hard work, W. Shanks (1873) presented 707 digits for π, but only 527 digits were correct (as D. F. Ferguson found in 1947). F. Lindemann (1882) proved that π is transcendental. F. C. W. Størmer (1896) derived the arctangent formula π/4 = 44 arctan(1/57) + 7 arctan(1/239) − 12 arctan(1/682) + 24 arctan(1/12943), which was used in 2002 for the evaluation of 1,241,100,000,000 digits of π. D. F. Ferguson (1947) recalculated π to 808 decimal places, using a mechanical desk calculator. K. Mahler (1953) proved that π is not a Liouville number. Modern computer calculation of π was started by D. Shanks (1961), who reported 100,000 digits of π.
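Arctangent formulas like Machin's converge quickly and are easy to try out. A minimal Python check (double precision only, of course, nothing like the record computations described here):

```python
from math import atan, pi

# Machin's 1706 formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
machin = 4 * (4 * atan(1 / 5) - atan(1 / 239))
print(machin)             # 3.141592653589793
print(abs(machin - pi))   # ~0: agrees with math.pi to double precision
```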
This record was improved many times; Yasumasa Kanada (Japan, December 2002), using a 64-node Hitachi supercomputer, evaluated 1,241,100,000,000 digits of π. For this purpose he used the earlier-mentioned formula of F. C. W. Størmer (1896) together with a second arctangent formula of the same type. Future improved results are inevitable.
Degree
Babylonians divided the circle into 360 degrees (360°), probably because 360 approximates the number of days in a year. Ptolemy (Egypt, c. 90–168 AD) in his Mathematical Syntaxis used the degree symbol (°) in astronomical calculations. Mathematically, one degree has the numerical value 1° = π/180. Therefore, all historical and other information about 1° can be derived from information about π.
The constant ⅇ
J. Napier, in his work on logarithms (1618), mentioned the existence of a special convenient constant for the calculation of logarithms (but he did not evaluate this constant). It is possible that the corresponding table of logarithms was written by W. Oughtred, who is credited in 1622 with inventing the slide rule, a tool used for multiplication, division, evaluation of roots, logarithms, and other functions. In 1669 I. Newton published the series 1 + 1/1! + 1/2! + 1/3! + ..., which actually converges to that special constant. At that time J. Bernoulli tried to find the limit of (1 + 1/n)ⁿ as n → ∞. G. W. Leibniz (1690–1691) was the first, in correspondence with C. Huygens, to recognize this limit as a special constant, but he used the notation b to represent it.
L. Euler began using the letter ⅇ for that constant in 1727–1728, and introduced this notation in a letter to C. Goldbach (1731). However, the first use of ⅇ in a published work appeared in Euler's Mechanica (1736). In 1737 L. Euler proved that ⅇ and ⅇ² are irrational numbers and represented them through continued fractions. In 1748 L. Euler represented ⅇ as an infinite sum and found its first 23 digits: ⅇ = 2.71828182845904523536028.... D. Bernoulli (1760) used ⅇ as the base of the natural logarithms. J. Lambert (1768) proved that ⅇ^x is an irrational number if x is a nonzero rational number.
In the 19th century A. Cauchy (1823) determined that ⅇ = lim (1 + 1/n)ⁿ as n → ∞; J. Liouville (1844) proved that ⅇ does not satisfy any quadratic equation with integral coefficients; C. Hermite (1873) proved that ⅇ is a transcendental number; and E. Catalan (1873) represented ⅇ through infinite products. The only constant appearing more frequently than ⅇ in mathematics is π.
Physical applications of ⅇ are very often connected with time-dependent processes. For example, if w(t) is a quantity at time t that decreases at a rate proportional to its value with coefficient λ, this quantity is subject to exponential decay described by the differential equation w′(t) = −λ w(t), with solution w(t) = w₀ ⅇ^(−λt), where w₀ is the initial quantity at time t = 0. Examples of such processes can be found in the following: a radionuclide that undergoes radioactive decay, chemical reactions (like enzyme-catalyzed reactions), electric charge, vibrations, pharmacology and toxicology, and the intensity of electromagnetic radiation.
The Euler–Mascheroni constant
In 1735 the Swiss mathematician L. Euler introduced a special constant that represents the limiting difference between the harmonic series and the natural logarithm: γ = lim_{n→∞} (1 + 1/2 + 1/3 + ... + 1/n − ln n). Euler initially calculated the constant's value to 6 decimal places, which he extended to 16 digits in 1781. L. Mascheroni (1790) first used the symbol γ for the notation of this constant and calculated its value to 19 correct digits. Later J. Soldner (1809) calculated γ to 40 correct digits, which C. Gauss and F. Nicolai (1812) verified.
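Both limits just mentioned, the one defining ⅇ and the one defining γ, converge slowly, which is easy to observe numerically. An illustrative Python sketch:

```python
from math import log

n = 10 ** 6

# e as the limit of (1 + 1/n)**n studied by J. Bernoulli
print((1 + 1 / n) ** n)      # ~2.7182804..., true value 2.7182818...

# gamma as the limit of (1 + 1/2 + ... + 1/n) - ln n
harmonic = sum(1 / k for k in range(1, n + 1))
print(harmonic - log(n))     # ~0.5772162..., true value 0.5772156...
```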
E. Catalan (1875) found an integral representation for the constant. It was named the Euler gamma or Euler–Mascheroni constant in honor of its founders. Applications include discrete mathematics and number theory.
The Catalan constant
The Catalan constant, C = 1 − 1/3² + 1/5² − 1/7² + ... ≈ 0.915966, was named in honor of Eu. Ch. Catalan (1814–1894), who introduced a faster convergent equivalent series and expressions in terms of integrals. Based on methods resulting from collaborations with M. Leclert, E. Catalan (1865) computed the constant up to 9 decimals. M. Bresse (1867) computed 24 decimals of the constant using a technique from E. Kummer's work. J. Glaisher (1877) evaluated 20 digits of the Catalan constant, which he extended to 32 digits in 1913. The Catalan constant is applied in number theory, combinatorics, and different areas of mathematical analysis.
The Glaisher–Kinkelin constant
The works of H. Kinkelin (1860) and J. Glaisher (1877–1878) introduced a special constant, A = lim_{n→∞} (1¹ 2² ⋯ nⁿ) / (n^(n²/2 + n/2 + 1/12) ⅇ^(−n²/4)) ≈ 1.28243, which was later called the Glaisher or Glaisher–Kinkelin constant in honor of its founders. This constant is used in number theory, Bose–Einstein and Fermi–Dirac statistics, analytic approximation and evaluation of integrals and products, regularization techniques in quantum field theory, and the Scharnhorst effect of quantum electrodynamics.
The Khinchin constant
The 1934 work of A. Khinchin considered the limit of the geometric mean of continued fraction terms and found that its value is the same constant, K ≈ 2.68545, for almost all continued fractions. The constant is named the Khinchin constant in honor of its founder. It is known that rational numbers, solutions of quadratic equations with rational coefficients, the golden ratio φ, and the Euler number ⅇ, upon being expanded into continued fractions, do not have this property. Numerical verifications have shown that the continued fraction expansions of π, the Euler–Mascheroni constant γ, and Khinchin's constant itself appear to satisfy the property, but this has not been rigorously proved. Applications of the Khinchin constant include number theory.
The imaginary unit
The imaginary unit constant ⅈ allows the real number system to be extended to the complex number system. This system allows for solutions of polynomial equations such as z² + 1 = 0 and more complicated polynomial equations through complex numbers. Hence ⅈ² = −1 and (−ⅈ)² = −1, and the previous quadratic equation has two solutions, as is expected for a quadratic polynomial: z = ⅈ and z = −ⅈ.
The imaginary unit has a long history, which started with the question of how to understand and interpret the solution of the simple quadratic equation z² = −1. It was clear that the square of any real number is nonnegative. But it was not clear how to get −1 from something squared. In the 16th, 17th, and 18th centuries this problem was intensively discussed, together with the problem of solving the cubic, quartic, and other polynomial equations. S. Ferro (Italy, 1465–1526) first discovered a method to solve cubic equations. N. F. Tartaglia (Italy, 1500–1557) independently solved cubic equations. G. Cardano (Italy, 1545) published the solutions to the cubic and quartic equations in his book Ars Magna, with one case of this solution communicated to him by N. Tartaglia. He noted the existence of so-called imaginary numbers, but did not describe their properties. L. Ferrari (Italy, 1522–1565) solved the quartic equation, which was mentioned in the book Ars Magna by his teacher G. Cardano. R. Descartes (1637) suggested the name "imaginary" for nonreal numbers like √−1. J. Wallis (1685) in De Algebra tractatus published the idea of the graphic representation of complex numbers. J. Bernoulli (1702) used imaginary numbers.
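The arithmetic these mathematicians struggled to interpret is routine today; for instance, in Python the two roots of z² + 1 = 0 can be checked directly (an illustrative aside; Python spells the imaginary unit j):

```python
import cmath

# the two square roots of -1: i and -i
print((1j) ** 2, (-1j) ** 2)   # both equal -1 (up to the sign of zero)
print(cmath.sqrt(-1))          # 1j, the principal square root
print(complex(0, 1) == 1j)     # True
```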
R. Cotes (1714) derived the formula ⅈϕ = ln(cos ϕ + ⅈ sin ϕ), the equivalent of which, ⅇ^(ⅈϕ) = cos ϕ + ⅈ sin ϕ, was found by L. Euler in 1748 and hence named for him. A. de Moivre (1730) derived the well-known formula (cos ϕ + ⅈ sin ϕ)ⁿ = cos nϕ + ⅈ sin nϕ, which bears his name. The investigations of L. Euler (1727, 1728) gave new impetus to the theory of complex numbers and functions of complex arguments (analytic functions). In a letter to C. Goldbach (1731) L. Euler introduced the notation ⅇ for the base of the natural logarithm, ⅇ = 2.71828182..., and he proved that ⅇ is irrational. Later L. Euler (1740–1748) found a series expansion for the exponential function of a complex argument, which led to the famous and very basic formula connecting exponential and trigonometric functions, ⅇ^(ⅈϕ) = cos ϕ + ⅈ sin ϕ (1748). H. Kühn (1753) used imaginary numbers. L. Euler (1777) first used the letter ⅈ to represent √−1. C. Wessel (1799) gave a geometrical interpretation of complex numbers. As a result, mathematicians introduced the use of a special symbol, the imaginary unit, equal to the square root of −1: ⅈ = √−1.
In the 19th century the conception and theory of complex numbers was basically formed. A. Buée (1804) independently came to the idea of J. Wallis about the geometrical representation of complex numbers in the plane. J. Argand (1806) introduced the name modulus for the absolute value, and published the idea of the geometrical interpretation of complex numbers known as the Argand diagram. C. Mourey (1828) laid the foundations for the theory of directional numbers in a little treatise.
The imaginary unit ⅈ was interpreted in a geometrical sense as the point with coordinates (0, 1) in the Cartesian (Euclidean) x, y plane, with the vertical axis upward and the origin at (0, 0). This geometric interpretation establishes the following representations of the complex number z through two real numbers x and y: z = x + ⅈ y and z = r (cos ϕ + ⅈ sin ϕ), where r is the distance between the point (x, y) and the origin, and ϕ is the angle between the line connecting those points and the positive x-axis direction (the so-called polar representation). This leads to the basic relations r = √(x² + y²), x = r cos ϕ, and y = r sin ϕ, which describe the main characteristics of the complex number z: the modulus (absolute value) r = |z|, the real part x = Re z, the imaginary part y = Im z, and the argument ϕ = arg z.
The Euler formula allows the representation of the complex number z, using polar coordinates, in the more compact form z = r ⅇ^(ⅈϕ). It also allows the expression of the logarithm of a complex number through the formula log z = log r + ⅈϕ. Taking into account that the cosine and sine have period 2π, it follows that ⅇ^(ⅈϕ) has period 2π in ϕ: ⅇ^(ⅈ(ϕ + 2π)) = ⅇ^(ⅈϕ). Generically, the logarithm function is therefore the multivalued function log z = log r + ⅈ(ϕ + 2πk), where k is an arbitrary integer. For specifying just one value of the logarithm and one value of the argument ϕ for a given complex number z, the restriction −π < ϕ ≤ π is generally used for the argument ϕ.
C. F. Gauss (1831) introduced the name "imaginary unit" for √−1, suggested the term complex number for x + ⅈ y, and called x² + y² the norm, but mentioned that the theory of complex numbers was quite unknown; in 1832 he published his chief memoir on the subject. A. Cauchy (1789–1857) proved several important basic theorems in complex analysis. N. Abel (1802–1829) was the first to widely use complex numbers, with well-known success. K. Weierstrass (1841) introduced the notation |z| for the modulus of complex numbers, which he called the absolute value. E. Kummer (1844), L. Kronecker (1845), Scheffler (1845, 1851, 1880), A. Bellavitis (1835, 1852), Peacock (1845), A. de Morgan (1849), A. Möbius (1790–1868), J. Dirichlet (1805–1859), and others made large contributions to the development of complex number theory.
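The modulus, argument, polar form, and logarithm described above are all available in Python's cmath module; a brief illustrative check:

```python
import cmath

z = 3 + 4j
r, phi = abs(z), cmath.phase(z)    # modulus and argument
print(r, phi)                      # 5.0 0.9272952180016122

print(r * cmath.exp(1j * phi))     # back to (3+4j): z = r*e^(i*phi)
print(cmath.log(z))                # log z = ln r + i*phi (principal value)
print(cmath.exp(1j * cmath.pi))    # Euler: e^(i*pi) = -1, up to rounding
```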
http://functions.wolfram.com/Constants/I/introductions/ClassicalConstants/01/
This tutorial will try to explain what bases are and why they are useful. This isn't a complete or perfect tutorial on bases. It's just another JK tutorial for newcomers. Here are the definitions of a few terms that are used a lot in this tutorial:
digit: Any numeral used to express a number, or the column (e.g., ones, tens) which the numeral occupies.
number: A number is an idea of an amount. You can't write a number without limiting yourself to some form of expression, but the number is the amount that exists.
numeral: Numerals are characters which aid in expressing numbers. They are not numbers in themselves, but only a form of expression for a number. For example, in decimal 10, the '1' and '0' characters are the numerals. The amount of things that the 1 and 0 represent is the number.
A base is a system of displaying a number with numerals. It is only a way of displaying the number; it does not change the number itself. For example, 100 in base 16, 256 in base 10, and 400 in base 8 are all the same number (numbers are "in" the base that is used to display them). The only thing that changes is the way the number appears.
In each base, there is a maximum value that each digit can hold before the number must use the next highest digit. In base 10, the maximum value that can be displayed in a digit is 9. 100 in base 16 is 256 in base 10 because base 16 (as its name implies) displays more values in each digit than decimal does. Because it displays 6 more values than decimal in each digit, base 16 can display the same value with lesser and often fewer numerals.
Although the common base names are built from Latin prefixes and roots, they are not all standard. Hexadecimal, while being fairly standard, is not the correct name for base 16 according to the Latin system. It should have been something like sexadecimal, but was changed to use "hex" for apparent reasons. Letters are used after the numerical character 9; letters were chosen to represent numbers because they are easily typed and displayed by computers. Each base's name is one greater than its highest digit value. For example, base 2 does not use a '2' because each base starts designating numbers at 0. Base 2 is called base 2 because it can display 2 values per column.
Base 10, known as decimal, is the base normally used for math. Base 10 is used instead of other bases because humans have ten fingers. And because humans learned to count with their fingers, base 10 almost seems natural. Since you have to use a base when displaying or speaking a number, decimal is used almost everywhere. Other bases are named (e.g., base 16) according to the decimal display of the number of values per digit that the base uses. Base 16 wouldn't be called "base 16" if everyone used it. It would be called "base 10", because the base's name is its highest digit value plus one, and in hexadecimal or any other base, that number is written as 10.
In the text above, this tutorial says that base 10 should almost seem natural. So why wouldn't it be completely natural? Because your hands can use base 11. Your hands can display any number from 0 to 10, which is what base 11 does. Base 10 cannot display 10 values with one digit. In mathematics, you'll carry over to the next digit when you can't display anything greater in that digit with the base you're using.
But you won't when counting on your fingers. You can display 10 by holding out all of your fingers, or by holding out one finger if you are counting in tens. In decimal math, there is no in-between '10' character for the first digit: you'll get to 9 and then carry over. But when counting on your fingers, you'll get to 9, and then you can carry over or hold out your tenth finger.
Higher bases can display the same numbers as lower bases, but with fewer digits and lesser numerals. For example, 100 in base 16 is 256 in decimal. This is because digits in base 16 are worth more than digits in base 10. Why? Because base 16 has to count through more digit values before it carries over to the next digit. For example, one less than 100 in base 16 is FF. Decimal does not display so many values per digit, so it carries over when it reaches 9. Hexadecimal goes up to F before it carries over to the next greater digit.
Before going on to base conversions, take a look at the decimal power of each digit. Here's a table with the more common bases:
|Base        |Digit 0  |Digit 1   |Digit 2    |
|Binary      |1 (2^0)  |2 (2^1)   |4 (2^2)    |
|Octal       |1 (8^0)  |8 (8^1)   |64 (8^2)   |
|Decimal     |1 (10^0) |10 (10^1) |100 (10^2) |
|Hexadecimal |1 (16^0) |16 (16^1) |256 (16^2) |
The decimal value of each digit in a base is determined by finding baseNum to the power of digitNum. For example, if baseNum were 16 for hexadecimal, and digitNum were 2 for the third digit, the decimal equivalent of a 1 in digit 2 would be 256. That is, the value of a 1 in hexadecimal's digit 2 would be the same as 256 in decimal. This provides us with a tool for conversion to and from decimal. But we're using decimal to do the math, so the conversion can only be used with decimal numbers. If we were to use other bases to do the arithmetic, we would have to make another table.
The typical approach would be to pop out the Windows calculator to convert your number. But the problem is Real Programmers don't use calculators. So you'll just have to learn how to convert numbers between bases. When converting from one base to another, decimal (being ingrained) will be the base that is used to do the math. So you'll be converting from one base to decimal, and then to another base if you need.
Some simple conversions from octal to decimal using the chart above:
Convert the octal number 25 to decimal:
For digit 1: 2 * 8 = 16. Add 16 to our result.
For digit 0: 5 * 1 = 5. Add 5 to our result.
The result is: 21.
Each digit was multiplied by its decimal value from the chart above. Then the products were added to find the result.
To convert back to octal:
Convert the decimal number 21 to octal:
There are no 64's in our number.
There are two 8's in our number. Add 20 to our result. There is a remainder of 21-16=5.
There are five 1's left. Add 5 to our result.
The result is: 25.
This example went through each octal digit in our chart from greatest to least. Because there were two 8's in the number, a 2 was written in the result's digit 1, which reads as 20, since 8 is octal's value for digit 1.
Now to do a conversion from decimal to binary:
Convert the decimal number 13 to binary:
There are no 16's in our number.
There is one 8 in our number. Add 1000 to our result. There is a remainder of 13-8=5.
There is one 4 in our number. Add 100 to our result. There is a remainder of 5-4=1.
There are no 2's in our number.
There is one 1 in our number. Add 1 to our result.
The result is: 1101.
That example used some binary digits not on our chart, but you can find that the values for binary digits 3 and 4 are 8 and 16 respectively.
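The worked conversions above can be checked with a few lines of Python (an illustrative aside, not part of the original tutorial). int accepts a base for parsing, and the to_base helper below (our name) uses repeated division, collecting digits from least to most significant, which gives the same results as the greatest-to-least chart method:

```python
print(int("25", 8))      # octal 25 -> 21
print(int("1101", 2))    # binary 1101 -> 13
print(oct(21), bin(13))  # '0o25' '0b1101'

def to_base(n, base, digits="0123456789ABCDEF"):
    """Convert a non-negative integer to a string in the given base (2-16)."""
    out = ""
    while True:
        n, r = divmod(n, base)   # r is the next digit, least significant first
        out = digits[r] + out
        if n == 0:
            return out

print(to_base(21, 8), to_base(13, 2))  # 25 1101
```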
That example used some binary digits not on our chart, but you can find that the numbers for binary digits 3 and 4 are 8 and 16 respectively. Now to convert back to decimal:

Convert the binary 1101 to decimal:
For digit 3: 1 * 8 = 8. Add 8 to our result.
For digit 2: 1 * 4 = 4. Add 4 to our result.
No value for digit 1.
For digit 0: 1 * 1 = 1. Add 1 to our result.
The result is: 13.

Now, to do a more complicated hexadecimal to decimal conversion in slightly different form.

Convert the hexadecimal 2EA.F to decimal:
result = (2 * 16^2) + (14 * 16^1) + (10 * 16^0) + (15 * 16^-1)
result = (512) + (224) + (10) + (15/16)
result = 746 + 0.9375
result = 746.9375

A decimal point was used in the hexadecimal number, but floating-point numbers are covered later on. Also, 16 to the power of -1 is the same as 1/16. Now to convert the number back to hexadecimal.

Convert the decimal number 746.9375 to hexadecimal:
First the 746:
There are two 256's in our number. Add 200 to our result. There is a remainder of 746-512=234.
There are fourteen 16's in our number. Add E0 to our result. There is a remainder of 10.
There are ten 1's left in our number. Add A to our result.
The result is now: 2EA.
Now the fractional portion:
result2 = 9375/10000
result2 = 15/16 (reduced)
There are fifteen 1/16's in our number. Add 0.F to our result.
The result is 2EA.F.

There are several important things in this example. Because a hexadecimal digit can hold a higher value than a decimal digit, having ten 1's and fourteen 16's is reasonable. Because fractions in hexadecimal are in sixteenths instead of tenths as in decimal, 15/16 is the same as 0.F. It's just like having 9/10 in decimal - it's the same as 0.9.

For computing purposes, hexadecimal, octal, and binary are often used. Hexadecimal is useful in computer science because it represents numbers in a compact form and it is easily converted to and from binary. With four bits, you can store any hexadecimal character from 0 to F:

0000 = 0    0100 = 4    1000 = 8    1100 = C
0001 = 1    0101 = 5    1001 = 9    1101 = D
0010 = 2    0110 = 6    1010 = A    1110 = E
0011 = 3    0111 = 7    1011 = B    1111 = F

Hexadecimal can represent the same value in much less space than binary and decimal. Each hexadecimal digit is represented by four bits (called a nybble). This makes hexadecimal useful for displaying memory values in an efficient form. Hexadecimal is also useful for storing up to four boolean values in a digit. Any sum of distinct values from 1, 2, 4, and 8 yields a digit with known components. For example, 3=1+2, F=8+4+2+1, E=8+4+2, 9=8+1. Any combination results in a number that can be decomposed into the numbers that add up to it. If you look at the table above, you'll see why (1, 2, 4, and 8 use separate bits). This makes hexadecimal useful for cog flags.

Hexadecimal can be converted to and from binary by a simpler method than in the examples above. With the table above, you can convert a binary number four bits at a time into hexadecimal. For example, the number 11110011 is F3. Just separate the binary number into nybbles and use the table above to find the hexadecimal equivalent.

Octal is similar to hexadecimal in that it is used for some of the same reasons. Its maximum digit value is 7, which is represented in binary as 111. Only three bits are needed to store an octal digit. Like hexadecimal, it is easily converted to and from binary - instead of nybbles, use groups of three bits.

Binary is extremely useful to computers because values of 0 and 1 - true and false - are easy to store and transfer. The downside is that binary is much less efficient in terms of the space it takes to store and display a value, since so many digits are needed.
For example, 9 in decimal is 1001 in binary. As explained above, using a different base does not change the number you're displaying. It only changes the form that it's shown in. When writing numbers, especially in your code, you must specify the base of a number if it's not in decimal. Say you used the number 100 in your code. When the interpreter/compiler/parser reads that, it will assume that the number is in decimal. To tell the parser what base the number is in, there are two standard prefixes to put before a number. These are '0x' for hexadecimal and '0' for octal. If you want to write a hexadecimal 100, you would write 0x100. Or if you wanted to write 100 in octal, you would write 0100. Other bases will have to be converted. Not all parsers support these prefixes, but Cog, being based on C++, does support them. A floating-point number (a float) uses a radix point (a decimal point in base 10) and numerals to its right to display the fractional portion of the number. A hexadecimal or octal number can have a fractional part just as base 10 numbers do, but this is hardly ever done. Since there isn't much of a reason to have a floating-point number in anything but base 10, other bases are usually restricted to being whole numbers. Not even the Windows calculator allows floating-point numbers in non-decimal bases.
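To tie the conversion procedures together: the digit-by-digit arithmetic above is easy to mechanize. Below is a small C sketch (our illustration, not part of the original tutorial; C is used here since Cog is C-like, and the function names to_value and to_numeral are our own). It converts numerals to values and back, and also shows the 0x and 0 literal prefixes in action.

#include <stdio.h>

/* Convert a numeral string in the given base (2-16, digits 0-9 and A-F)
   to a value, working digit by digit as in the examples above. */
unsigned long to_value(const char *numeral, unsigned base) {
    unsigned long value = 0;
    for (; *numeral; numeral++) {
        unsigned digit;
        if (*numeral >= '0' && *numeral <= '9')      digit = *numeral - '0';
        else if (*numeral >= 'A' && *numeral <= 'F') digit = *numeral - 'A' + 10;
        else continue;                   /* skip anything unexpected */
        value = value * base + digit;    /* shift left one column, add digit */
    }
    return value;
}

/* Convert a value to a numeral string in the given base (2-16). The
   remainders give the digits least-significant first, which is the
   "how many of each column's worth fits" procedure run in reverse. */
void to_numeral(unsigned long value, unsigned base, char *out) {
    const char digits[] = "0123456789ABCDEF";
    char tmp[65];
    int i = 0;
    do {
        tmp[i++] = digits[value % base];
        value /= base;
    } while (value > 0);
    while (i > 0) *out++ = tmp[--i];     /* reverse into the output */
    *out = '\0';
}

int main(void) {
    char buf[65];
    to_numeral(to_value("2EA", 16), 8, buf);   /* hex 2EA -> octal */
    printf("hex 2EA = decimal %lu = octal %s\n", to_value("2EA", 16), buf);
    to_numeral(13, 2, buf);
    printf("decimal 13 = binary %s\n", buf);   /* expect 1101 */
    to_numeral(to_value("11110011", 2), 16, buf);
    printf("binary 11110011 = hex %s\n", buf); /* expect F3 */
    printf("literals: 100=%d, 0x100=%d, 0100=%d\n", 100, 0x100, 0100);
    return 0;
}

Note that to_value works most-significant digit first (multiply the running total by the base, then add the digit), which is equivalent to the digit-value chart method but needs no table.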
LESSON PLAN IN MATH

I. OBJECTIVES
A. Discover the formula for finding circumference using pi and diameter
B. Solve problems involving circumference of a circle
C. Work cooperatively in groups

II. SUBJECT MATTER
A. Circumference of a Circle
B. BEC PELC Math V 1; 1.1; 2; 2.1.1-1.4
C. pictures, circular/round objects, string, ruler, activity sheets

III. PROCEDURE
A. Preparatory Activities
Who is the Father of Geometry? Find the perimeter of the following plane figures/polygons to find out. (Draw your figure beside the numbers.)
(Six numbered figures with answer blanks follow in the original worksheet; taken in the order 3 1 5 4 2 6, the answers spell out the name.)
Ask: How familiar are you with the different circles around you? Identify the circular objects shown in the following pictures.
· Show pictures of circular objects and let pupils raise their hand if they know the object.
Ask: Do you know that circles, just like polygons, also have a perimeter? What do we call the perimeter of a circle? How do we solve for the distance around a circle?

B. Developmental Activities
Group Activity: Exploration with Discs
· Divide pupils into 5 groups and let them gather in circles.
· Orient the pupils on the rules and proper decorum during the group activity.
· Distribute the needed materials and activity sheets. Instruct pupils to read the directions carefully.
· Emphasize the importance of cooperation to successfully accomplish the task.
· Guide pupils while the activity is going on. Have them focus on the following questions:
a. What is the distance/length around the circular object?
b. What is the distance/length across the circular object?
c. What is the value if we divide the length around the circular object by the length across it? Express your answer to the nearest hundredth.
· Let pupils write their results in the matrix written on the chalkboard. Have them observe and compare their result with the results of the other groups.
Ask: What have you noticed about your results? Are the results similar? Why do you think they are similar?
· Introduce that the distance/length around the circular object is called the CIRCUMFERENCE. Relate that the CIRCUMFERENCE is actually the "PERIMETER" of a circle. The distance across the circular object is called the DIAMETER. Half the diameter is called the RADIUS.
· Elaborate that a long time ago, people started to notice that the circumference of a circle is approximately 3 times the diameter. Discuss that at present, mathematicians have computed this value much more precisely, as 3.1415926535..., or simply 3.14. This value is called pi (π).
· Present the equation to the class: π = C/d. Explain that if this equation is rearranged, we can have C = π x d. Since the radius is half the diameter, circumference can also be solved through C = π x 2 x r.
· Let pupils memorize the formula for finding the circumference through body movements.
Provide the following example: Liza wants to put a lace around a circular pillow. If the pillow has a diameter of 20 dm, how long should the lace be? Let pupils analyze the problem using the STAR Strategy.
✓ S-Search the Problem
The circular pillow has a 20 dm diameter. I need to use pi, which is equal to 3.14. I need to find the circumference of the pillow to find the length of the lace. Or simply, d = 20; pi = 3.14; C = ?
✓ T-Translate the problem into an equation
C = π x d
C = 3.14 x 20 dm
✓ A-Answer the Problem
C = 3.14 x 20 dm
C = 62.8 dm
The length of the lace needed is 62.8 dm.
✓ R-Review the Solution
Since π = C/d, 62.8 dm / 20 dm should be equal to 3.14.
62.8 dm / 20 dm = 3.14
Provide other examples (worked solutions are given at the end of this plan): a.
A circular fountain has a diameter of 4 m. What is the circumference of the fountain? b. A circular aviary needs to be surrounded with screen. If the aviary measures 15 ft across, how long should the screen be?

2. Fixing Exercises (Pair-Share Activity)
Find the circumference. Use pi = 3.14.
3. What is the circumference of a circle with a diameter of 4.5 cm?
4. A round wooden table has a radius of 2 m. Find its circumference.
5. Give the circumference of a clock with 9 inches as its diameter.

· What is the circumference of a circle?
· What is pi? What is the value of pi?
· How do we solve for the circumference of a circle?

Imagine you're the person in the following problems, then give your solution.
1. You are a gardener. A round flower plot needs to be fenced with wire. If it measures 9 m across, how long should the wire be?
2. The distance around a circular running field is 75 m. If you are a runner and you want to run across the field, how far would your run be?
Read each problem and show your solution.
3. Give the circumference of a circle with 3.5 cm as its radius.
4. What is the diameter of a round mirror if its circumference is 35 dm?
5. A rubber tire measures 3 ft across. Find its circumference.

Try to look for round/circular objects around your house. Then complete the table below.

JAYLORD S. LOSABIA
A. Bonifacio Elementary School
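For the teacher's reference, here are worked solutions to the two extra examples above (our additions, not part of the original plan, assuming π ≈ 3.14):
a. C = π x d = 3.14 x 4 m = 12.56 m.
b. C = π x d = 3.14 x 15 ft = 47.1 ft.
The same pattern answers the fixing exercises, e.g., for the 4.5 cm circle, C = 3.14 x 4.5 cm = 14.13 cm.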
Common Core Math Standards - 4th Grade

MathScore aligns to the Common Core Math Standards for 4th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience.

Operations and Algebraic Thinking

Use the four operations with whole numbers to solve problems.
1. Interpret a multiplication equation as a comparison, e.g., interpret 35 = 5 × 7 as a statement that 35 is 5 times as many as 7 and 7 times as many as 5. Represent verbal statements of multiplicative comparisons as multiplication equations.
2. Multiply or divide to solve word problems involving multiplicative comparison, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison.1 (Basic Word Problems 2)
3. Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding. (Word Problems With Remainders)

Gain familiarity with factors and multiples.
4. Find all factor pairs for a whole number in the range 1–100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1–100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1–100 is prime or composite. (Factoring, Prime Numbers)

Generate and analyze patterns.
5. Generate a number or shape pattern that follows a given rule. Identify apparent features of the pattern that were not explicit in the rule itself. For example, given the rule "Add 3" and the starting number 1, generate terms in the resulting sequence and observe that the terms appear to alternate between odd and even numbers. Explain informally why the numbers will continue to alternate in this way. (A short illustrative sketch for this standard appears at the end of this page.)

1 See Glossary, Table 2.

Number and Operations in Base Ten¹

Generalize place value understanding for multi-digit whole numbers.
1. Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division. (Relative Place Value)
2. Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. (Place Value, Number Comparison)
3. Use place value understanding to round multi-digit whole numbers to any place. (Rounding Numbers, Rounding Large Numbers)

Use place value understanding and properties of operations to perform multi-digit arithmetic.
4. Fluently add and subtract multi-digit whole numbers using the standard algorithm. (Long Addition, Long Subtraction)
5. Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations.
Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. (Multiplication By One Digit, Long Multiplication)
6. Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. (Long Division By One Digit, Division with Remainders)

1 Grade 4 expectations in this domain are limited to whole numbers less than or equal to 1,000,000.

Number and Operations - Fractions¹

Extend understanding of fraction equivalence and ordering.
1. Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions. (Fraction Equivalence 2, Basic Fraction Simplification)
2. Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model. (Fraction Comparison)

Build fractions from unit fractions by applying and extending previous understandings of operations on whole numbers.
3. Understand a fraction a/b with a > 1 as a sum of fractions 1/b. (Fraction Parts)
a. Understand addition and subtraction of fractions as joining and separating parts referring to the same whole. (Fraction Parts)
b. Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions, e.g., by using a visual fraction model. Examples: 3/8 = 1/8 + 1/8 + 1/8 ; 3/8 = 1/8 + 2/8 ; 2 1/8 = 1 + 1 + 1/8 = 8/8 + 8/8 + 1/8. (Fraction Parts)
c. Add and subtract mixed numbers with like denominators, e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction. (Basic Fraction Addition, Basic Fraction Subtraction)
d. Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and equations to represent the problem. (Basic Fraction Word Problems)
4. Apply and extend previous understandings of multiplication to multiply a fraction by a whole number. (Basic Fraction Multiplication)
a. Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4). (Basic Fraction Multiplication)
b. Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.) (Basic Fraction Multiplication)
c. Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem.
For example, if each person at a party will eat 3/8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie? (Basic Fraction Word Problems 2)

Understand decimal notation for fractions, and compare decimal fractions.
5. Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100.² For example, express 3/10 as 30/100, and add 3/10 + 4/100 = 34/100. (Fraction Equivalence 2, Basic Fraction Addition)
6. Use decimal notation for fractions with denominators 10 or 100. For example, rewrite 0.62 as 62/100; describe a length as 0.62 meters; locate 0.62 on a number line diagram. (Basic Fractions As Decimals)
7. Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual model. (Compare Decimals)

1 Grade 4 expectations in this domain are limited to fractions with denominators 2, 3, 4, 5, 6, 8, 10, 12, 100.
2 Students who can generate equivalent fractions can develop strategies for adding fractions with unlike denominators in general. But addition and subtraction with unlike denominators in general is not a requirement at this grade.

Measurement and Data

Solve problems involving measurement and conversion of measurements from a larger unit to a smaller unit.
1. Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ...
2. Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale. (Time Intervals, Making Change, Unit Cost)
3. Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor. (Compare Rectangle Area and Perimeter, Perimeter and Area Word Problems)

Represent and interpret data.
4. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.

Geometric measurement: understand concepts of angle and measure angles.
5. Recognize angles as geometric shapes that are formed wherever two rays share a common endpoint, and understand concepts of angle measurement:
a.
An angle is measured with reference to a circle with its center at the common endpoint of the rays, by considering the fraction of the circular arc between the points where the two rays intersect the circle. An angle that turns through 1/360 of a circle is called a "one-degree angle," and can be used to measure angles.
b. An angle that turns through n one-degree angles is said to have an angle measure of n degrees.
6. Measure angles in whole-number degrees using a protractor. Sketch angles of specified measure.
7. Recognize angle measure as additive. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems, e.g., by using an equation with a symbol for the unknown angle measure. (Angle Measurements)

Geometry

Draw and identify lines and angles, and classify shapes by properties of their lines and angles.
1. Draw points, lines, line segments, rays, angles (right, acute, obtuse), and perpendicular and parallel lines. Identify these in two-dimensional figures. (Parallel and Perpendicular Lines)
2. Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles. (Quadrilateral Types, Triangle Types)
3. Recognize a line of symmetry for a two-dimensional figure as a line across the figure such that the figure can be folded along the line into matching parts. Identify line-symmetric figures and draw lines of symmetry.

Learn more about our online math practice software.
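As a concrete illustration of standard 5 under "Generate and analyze patterns" above (our sketch, not a MathScore topic), the following short C program generates the "Add 3" pattern starting at 1 and labels each term's parity, making the odd/even alternation visible:

#include <stdio.h>

/* Generate the "Add 3" pattern starting at 1 and label each term. */
int main(void) {
    int term = 1;
    for (int i = 0; i < 10; i++) {
        printf("%d (%s)\n", term, term % 2 ? "odd" : "even");
        term += 3;   /* adding an odd number flips the parity each step */
    }
    return 0;
}

The alternation happens because adding 3, an odd number, flips the parity of the running total at every step.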
The intercept theorem, also known as Thales' theorem (not to be confused with the theorem of the same name about inscribed right angles), is an important theorem in elementary geometry about the ratios of various line segments that are created if two intersecting lines are intercepted by a pair of parallels. It is equivalent to the theorem about ratios in similar triangles. Traditionally it is attributed to the Greek mathematician Thales of Miletus.

Suppose S is the intersection point of two lines and A, B are the intersections of the first line with the two parallels, such that B is further away from S than A, and similarly C, D are the intersections of the second line with the two parallels such that D is further away from S than C.

- The ratios of any two segments on the first line equal the ratios of the according segments on the second line: |SA| / |AB| = |SC| / |CD|, |SB| / |AB| = |SD| / |CD|, |SA| / |SB| = |SC| / |SD|.
- The ratio of the two segments on the same line starting at S equals the ratio of the segments on the parallels: |SA| / |SB| = |SC| / |SD| = |AC| / |BD|.
- The converse of the first statement is true as well, i.e. if the two intersecting lines are intercepted by two arbitrary lines and |SA| / |AB| = |SC| / |CD| holds, then the two intercepting lines are parallel. However, the converse of the second statement is not true.
- If you have more than two lines intersecting in S, then the ratio of the two segments on a parallel equals the ratio of the according segments on the other parallel. (An example for the case of three lines is given in the second graphic of the original article.)

Similarity and Similar Triangles

The intercept theorem is closely related to similarity. In fact, it is equivalent to the concept of similar triangles, i.e. it can be used to prove the properties of similar triangles, and similar triangles can be used to prove the intercept theorem. By matching identical angles you can always place two similar triangles in one another so that you get the configuration in which the intercept theorem applies; and conversely,
the intercept theorem configuration always contains two similar triangles.

Scalar Multiplication in Vector Spaces

In a normed vector space, the axioms concerning scalar multiplication (in particular λ · (a + b) = λ · a + λ · b and ‖λ · a‖ = |λ| · ‖a‖) ensure that the intercept theorem holds: for vectors a, b and a scalar λ, one has ‖λ · a − λ · b‖ = |λ| · ‖a − b‖.
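For concreteness, here is a small numerical check (our illustration, not part of the original article). Let |SA| = 2 and |AB| = 1, so |SB| = 3, and suppose |SC| = 4. The first statement forces |CD| = |AB| · |SC| / |SA| = 1 · 4 / 2 = 2, hence |SD| = 6; the second statement then gives the ratio of the parallels as |AC| / |BD| = |SA| / |SB| = 2/3.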
1A1. Historical note. The periscope is the eye of the submarine. It was invented and developed solely for the purpose of providing a means to view the surface without fear of detection by surface craft. While simple in principle, it is actually a complicated piece of apparatus. It is probable that all the navies of the world have similar instruments with only minor differences in design.

The earliest submarines were built without provision for periscopes and therefore, when submerged, were forced to grope their way blindly. In 1854 Marie Davey, a Frenchman, designed a sight tube for a submarine. This tube contained two mirrors, one above the other, held at a 45 degree angle and facing in opposite directions. These, while providing some degree of sight to the submerged vessel, were faulty at best and, in 1872, prisms were substituted for mirrors.

Before the War Between the States, the submarine had not had a place among the ships of naval warfare. An American, Thomas H. Doughty, USN, was the inventor of the original periscope. Doughty's invention was not the result of study and research but rather the result of necessity. During the campaign of the Red River, while he was serving aboard the monitor Osage, Confederate cavalry, from the banks of the river, kept up a steady series of surprise attacks upon the Union vessels, which had no way of seeing over the banks. This led Doughty to seek some new method of watching the shores. He took a piece of lead pipe, fitted it with mirrors at either end, and ran it up through the turret. This makeshift periscope provided sight for the crew of the Osage, enabled them to annihilate approaching Confederates, and practically freed her from further attack.

The earliest periscope, other than a collapsible one designed late in the nineteenth century by Simon Lake and known as an omniscope or skalomniscope, was a fixed tube. Soon, however, provision was made to allow the tube to be raised and turned by hand. This was fairly satisfactory when the boat was traveling at a low rate of speed but, with increased speed, the pressure was apt to bend the tube and throw the image out of line. Improved design resulted in a double tube, the outer to resist pressure and the inner to house the lens systems. One of the biggest difficulties with the periscope in its infancy was that the rotation of the upper prism caused the image to be seen upside down. This has been corrected in the design of modern instruments.

The Germans were responsible in large measure for the improvement of the modern periscope but, in spite of the advances made in the development of the instrument, the basic principle is still the same: the reflection of objects through mirrors or prisms arranged in a tube.

1A2. Periscope function. The essential function of a periscope is to give an officer conning a submarine a view of the surrounding horizon while his vessel remains submerged. To accomplish this, it is necessary that the periscope be long enough to extend beyond the surface, and that means be provided to deflect the horizontal rays of light first in a downward direction, and then horizontally to the eye of the observer. In addition, the part of the periscope which is to be above water must be as inconspicuous and streamlined as possible; for this reason the periscope is made in the form of a long narrow tube.

1A3. Periscope nomenclature. To insure a uniform method of designating periscopes on submarines, a standard system of nomenclature is used in all correspondence, specifications, and plans relating to such instruments.
The periscope nearest the bow is called No. 1 Periscope, regardless of whether it is of the altiscope type or whether it is installed in the conning tower. The next periscope aft of No. 1 Periscope is called No. 2 Periscope, and the next periscope aft of No. 2 is called No. 3 Periscope. The terms forward, middle, and after periscopes, or 1st, 2nd, and 3rd periscopes, are not used.

1A4. Useful definitions. The term periscope is used generally to designate all types of instruments. However, it is used specifically to designate instruments that are designed for horizontal vision.

The term altiscope is applied to a periscope from which the upper prism has been omitted and the view is directly upward toward the zenith.

The term altiperiscope is applied to instruments having the combined qualities of altiscopes and periscopes, sometimes called altiscope-periscopes and sometimes alti-azimuth instruments.

The terms unifocal and bifocal are used to refer to instruments of single and double power, respectively.

The term night periscope is used to designate a periscope having both high light transmission and an exit pupil of large diameter.

The term attack periscope is applied to a periscope with a minimum diameter of head at the sacrifice of light transmission and diameter of exit pupil.

The term metrescope is used to designate a periscope designed primarily for determining ranges of objects.

The term azimuth circle refers to the graduated circle used for taking bearings with the periscope.

The term stabilized azimuth device refers to a device in which a vertical wire in the field of the periscope is held gyroscopically in a fixed position in azimuth. The device is used in estimating the speed of an enemy ship.

1A5. Design designations of periscopes. Each separate or modified design of periscope is assigned a design designation, which is used in all correspondence relating to the periscope, in addition to the registry number of the periscope. The design designation is assigned by the Bureau of Ships and consists of the following parts in the order given:

1. A serial number for each design, assigned by the Bureau.

2. A letter indicating the manufacturer:

E    Keuffel & Esser
B    Bausch & Lomb
K    Kollmorgen
S    Barr & Stroud
Z    Nederlandsche Instrumentim Compagnie (Nedinsco)

3. A letter indicating the type of periscope, for example:

N    Night or low visibility periscope

4. A number indicating the optical length of the instrument in feet to the nearest foot.

5. For a period, the letter T was added to indicate that the optics of the instrument had been treated to increase light transmission and improve definition. Since all periscopes in service have been so treated and new periscopes are furnished treated, this letter is not being included in recent design designations.

6. If the outer diameter of the upper portion of the reduced head section is less than 2 inches, a number representing the outer diameter of the upper part of the reduced head section in inches is added, separated from the preceding character by a diagonal mark.

7. If the instrument is an altiperiscope designed to permit view at any angle from the zenith to a point below the horizon, the letters HA are added.

8. As an example the following is quoted:

89       (serial number of the design)
K        (manufacturer: Kollmorgen)
A        (type letter)
40       (optical length in feet to nearest foot)
T        (optics treated)
1.414    (outside diameter of upper part of reduced head section in inches)
HA       (high-angle altiperiscope)

Combined, this design designation reads as follows: 89KA40T/1.414HA.

1A6. Marking of periscopes. The registry number of the periscope is conspicuously cut, or impress stamped, on the eyepiece end of each periscope.
It is also stamped on detachable external fittings, such as the training handles. An etched or engraved name plate of suitable corrosion-resistant material is secured by screws to the eyepiece box of each periscope, and contains the following data:

U.S.N. BU. OF SHIPS
REGISTRY NO. ____________
MAGNIFICATION ______    FIELD OF VIEW ______
SMALL DIVISION OF RETICLE EQUALS ______
LINE OF SIGHT (ELEV.) ______    (DEP.) ______
INSPECTOR ______
MFG. BY _________

The inspector's stamp appears on the name plate.

1A7. Principles of modern periscopes. Everyone has looked through the wrong end of a telescope, that is, an inverted telescope, and viewed a normal scene much reduced in apparent size. This apparent reduction takes place because the inverted telescope takes a wide angle of vision and reduces it into a narrower one in the eyepiece. This principle is employed in periscopes.

Essentially, a periscope consists of a vertical tube with a head prism inclined to the horizon at an angle of 45 degrees, a reducing telescope, and, at the bottom of the tube, an enlarging telescope and a lower prism facing the head prism and parallel to and below it. The objectives of the two telescopes face each other.

Suppose that a periscope is to be constructed with a field of 40 degrees. If, at the upper end of the tube, a telescope is installed with a reduction of 20x, or 1/20, the field angle is narrowed by lenses to 2 degrees. This field angle passes through a 5-inch tube for a distance of 12 feet. Now, if at the lower end a magnifying telescope of 20x is installed, the lenses of this telescope take the field angle of 2 degrees and expand it to 40 degrees. If astronomical telescopes are used, the upper telescope inverts the image and the lower telescope reinverts it, so that the image appears erect to the observer. The distance between the objectives, about 12 feet, plus the lengths of the two telescope systems enable the periscope to attain sufficient length, for example, 27, 30, 34, or 40 feet.

If the periscope is to magnify the image, it is necessary either to decrease the reduction of the image by the upper telescope or to increase the magnification of the lower telescope. For example, if a magnification of 2x is desired, the upper telescope may be so changed that the field angle is reduced to only 1/10 of the original field angle, while the lower telescope remains unchanged; the magnification would then be 1/10 x 20, or 2x. Or the upper telescope may remain unchanged at 1/20 and the magnification of the lower may be increased to 40x: then the final magnification is 1/20 x 40, or 2x, as before. However, the latter plan has the disadvantage of reducing the illumination. Since the size of the exit pupil is equal to the diameter of the objective divided by the magnification, the exit pupil is reduced if the magnification is increased.

1A8. Limits of periscope design. It is seen from the preceding section that there are definite limits in periscope design. The vital factors, as in a telescope, are: 1) length of tube, 2) diameter, 3) illumination, 4) magnification, and 5) size of field. If a periscope favoring any one of these factors is to be produced, such favoring can be only at the expense of the other factors; hence, the final design generally is a compromise.

1A9. Examples of periscope design.
The following requirements are for periscopes which have been used in submarines: field, at least 40 degrees to 45 degrees; magnification, between 1.2x and 1.5x; exit pupil, at least 5 millimeters in diameter; length, not specified; external diameter, 5 inches; thickness of walls, about 1/4 inch.

Let us find possible periscope lengths under these conditions for the two magnifications given, 1.2x and 1.5x. The inside diameter of the tube is 5 inches minus 1/2 inch, or 4 1/2 inches. The lens, lens-holding ring, supporting tube, and so forth take up another 1/2 inch of diameter, leaving about 4 inches free for the objective. 4 inches = 101.6 mm, which is close to 100 mm.

In order to obtain an exit pupil of 5 millimeters, the magnification of the telescope must be:

Diameter of objective / Diameter of exit pupil = 100 / 5 = 20x

Figure 1-1. Section through submarine with periscope elevated.

If the magnification of the final periscope is to be 1.2x, the reduction of the upper telescope must be:

20 / 1.2 = 16.67, or 16.67x

Since the field angle within the tube must be 40 degrees / 16.67, or 2.4 degrees = 2 degrees 24', this limits the length between the objectives of the two telescopes, since the entire beam of light must fall on the lower objective. From Figure 1-3, it can be seen that the permissible length equals 2 / tan θ, where 2 is half the diameter of the lower objective lens in inches and θ is half the angle of beam. θ equals 2 degrees 24' / 2, or 1 degree 12'.

log 2 = 10.30103 - 10
log tan 1 degree 12' = 8.32112 - 10
log (2 / tan 1 degree 12') = 1.97991
antilog 1.97991 = 95.58 inches = 7 feet 11 1/2 inches

The upper and lower telescope systems enter into the total length, and if it were possible to increase the focal length of their objective lenses indefinitely, the periscope could be lengthened. Increasing this is limited, however, by the same considerations of diameter and cannot exceed the same length; that is, about 7 feet 11 1/2 inches for each telescope system. Hence, the total possible length is roughly 3 times 7 feet 11 1/2 inches, or about 23 feet 10 1/2 inches. Since this length is greater than is required, the diameter of the periscope may be reduced, the magnification increased, or the size of the exit pupil increased.

Figure 1-2. Detail of encircled section in Figure 1-1.

If the magnification is to be 1.5x, the reduction of the upper telescope must be:

20 / 1.5 = 13 1/3x

For a field of 40 degrees, the angle of beam is:

40 / 13 1/3 = 3 degrees

The inter-objective distance is:

log 2 = 10.30103 - 10
log tan 1 degree 30' = 8.41807 - 10
log (2 / tan 1 degree 30') = 1.88296
antilog 1.88296 = 76.37 inches = 6 feet 4.4 inches

The total length possible is 3 times 6 feet 4.4 inches, or 19 feet 1.2 inches. (A short computational check of these figures appears at the end of this chapter.)

To increase the length of tube beyond these limits, more telescopes may be placed in the tube. If astronomical telescopes are used, two more must be employed to keep the image erect, making a total of four telescope systems. One Galilean telescope could be used. The objection to adding more telescopes lies in the fact that each lens through which the beam must pass absorbs light, and if more are added, the illumination is seriously reduced.

Figure 1-4 shows a periscope designed as a straight instrument, and Figure 1-5 shows it with prisms introduced. The prisms may be placed at any point where the angle of the rays does not exceed the critical angle which results in total reflection. In this particular case, the prisms are placed at the focal planes.
Both periscopes produce an erect image, since the two astronomical telescopes and the two prisms counteract each other in inverting the object. Prisms should not be placed exactly in a focal plane. Doing so is faulty design, since any minute imperfections that may be present in or on the reflecting surface are reproduced as part of the final image, whereas a lens or glass plate which is not in a focal plane, or near one, may be dirty without affecting the resulting image. Periscope specifications often state that no lens or glass plate should be in or near a focal plane except the crosswire reticle, which must of necessity be placed in a focal plane.

Figure 1-3. Example of periscope design.
Figure 1-4. Example of periscope design.
Figure 1-5. Example of periscope design.

Since the backs of the prisms, which are the reflecting surfaces, are silvered, the critical angle for reflection is raised to more than 20 degrees; thus the two eyepieces may be placed between the prisms and the objectives. Both forms of construction are used in various periscopes. However, the best position for a prism is at a point at which the rays are approximately parallel; in erecting telescopes, this point lies between the two erecting lenses.

The chief function of a telescope system in a periscope is to take an object appearing from the point of vision under narrow angular view, and present it to the eye at a wide angle. The ratio of these two angles is the magnification of the telescope.

1A10. Altiscopes. The only difference between a periscope and an altiscope is that in an altiscope the upper prism is omitted and the view is directly upward toward the zenith. The field of an altiscope is 100 degrees. To obtain this field, some sacrifice must be made in other characteristics. The magnification is necessarily less than unity. The only type of periscope used in the Navy today which permits observation of the zenith is the Type II design (Design Designations 89KA40T/1.414HA, 91KA40T/1.414HA, and 92KA40T/1.4HA, built by the Kollmorgen Optical Corp., Brooklyn, N.Y.), which is of the high-angle type. The prism has a maximum elevation of the line of sight above horizontal of 74.5 degrees. The entire sky is observed with the line of sight set respectively at 14 degrees, 44 degrees, and 74.5 degrees or full elevation, giving complete coverage of the zenith at the edge of the field in low power. The periscope is rotated 360 degrees in each zone, with a minimum of overlap between zones.

1A11. Types of periscopes. Periscopes under Bureau of Ships Specifications R20 P5 of 15 June 1940, are of the following types:

1. Type I. Outer diameter of taper section, 1.414 inches. The line of sight can be moved through all angles between 10 degrees depression and 45 degrees elevation.

2. Type II. Outer diameter of taper section, 1.414 inches. The line of sight can be moved through all angles between 10 degrees depression and 74 degrees elevation.

3. Type III. Outer diameter of taper section, 1.99 inches. The line of sight can be moved through all angles between 10 degrees depression and 45 degrees elevation.

4. Type IV. Outer diameter of taper section, 3.750 inches. The line of sight can be moved through all angles between 10 degrees depression and 45 degrees elevation. The periscope is designed for night use, with an installed antenna array and waveguide for the attachment of electronic ranging equipment.

B. MATERIALS AND WORKMANSHIP

1B1. General description. a. The materials and workmanship of both mechanical and optical features of Navy periscopes are the best throughout.
Particular attention is devoted to the accuracy, durability, ruggedness, especially as regards ability to withstand excessive vibration, and finish of the periscope and of each of its component parts. In deciding whether to reject flawed, improperly or inaccurately finished, or otherwise defective optical parts in which the flaws or defects are of such nature that they do not offer any possibility of more than very slightly reducing optical efficiency and durability of the instrument, the state of advancement of the manufacturing of optical parts at the time the parts in question were manufactured is taken into consideration. However, the final decision always rests with the Navy Department. b. Metals used in the construction of periscopes, except where otherwise specified, are brass, bronze, nickel-copper alloy, or corrosion-resisting steel. The balls of the hoisting yoke are made of stainless or corrosion-resisting steel. Carbon steel may be used for ball-bearing races and balls, springs, and small parts which must be hardened. Carbon steel is not used for parts exposed to salt water. Carbon steel parts external to the sealed portion of the periscope are cadmium plated. Aluminum or aluminum alloys are used only in parts where lightness is essential, provided such parts are within the sealed portion of the periscope, and specific approval has been given by the Bureau of Ships. c. The highest standards of mechanical construction are required, especially with respect to the hermetical tightness of the instrument and the arrangements for rangefinding, changing the magnification, operating the altiscope attachment, and focusing. Sharp corners or points which might be sources of chips or metal shavings during assembly and adjustment, or from vibration of the periscope, are avoided. d. The construction of the periscope is such that the optics and internal mechanism may be easily disassembled and correctly reassembled, and the hermetical tightness of the instrument may be maintained. 1B2. General requirements for periscopes. When delivered to the Government, periscopes are completely assembled, including all parts and fittings. By means of the tests described below and by such other tests as the Government representative may require or conduct during the manufacture and after completion of the periscope, it must be demonstrated that the periscope meets the provisions of the specifications set up for its manufacture. The following requirements apply to all types of periscopes: a. Hermetical tightness. The complete optics of the periscope, except rayfilters, are contained in a hermetically sealed tubular casing. Only the first surface of the head window and the last surface of the eyepiece window used in the optical system, are external to the hermetically sealed easing. The external casing is, in so far as possible, capable of withstanding without leakage the shocks, vibrations, and bending to which the instrument is subjected in service. b. Tests of castings. The external casing and all castings forming part of the hermetically sealed portion of the periscope are given an internal air pressure test. When practicable, each casting is subjected separately to an internal air pressure test after the completion of all machine work. 
A part that shows signs of porosity on this test is rejected unless, after effective steps have been taken by brazing, peening, and tinning or other means to remedy permanently the porous condition, and after the defective part has passed a successful internal pressure test, the acceptance of such part is specifically authorized by the Bureau of Ships.

c. Cracking of metal under stress. In the selection of the material and method of manufacture of the various parts of the external casing, due regard is given to the danger of the development of porosity as a result of minute cracks that may occur in the metal when it is subjected to the stresses and vibrations encountered in service.

d. Joints in the external casing. All joints in the external casing for the passage of moving parts, such as the operating gear for the power shift, altiscope, and focusing mechanism, are located below the hoisting yoke. All joints in the external casing which must be broken for overhaul, cleaning, or renewal of the optics or internal mechanism of the periscope, or for drying out the periscope, are located below the hoisting yoke, except in the Type I and Type II periscopes where one such joint is permitted at the upper end of the taper section.

1. The joints between the main body tube and the eyepiece box casting and taper section, and the joint between the taper section and head section, are in accordance with Bureau of Ships Plans Nos. 306508 and 318815. Special provision is made in the case of screwed joints or joints held by screws to insure that the joint is not loosened by continued vibration. Setscrews and tap bolts, with lock washers or other locks, are used as necessary for this purpose. In installing such setscrews or tap bolts, special care must be taken not to drill entirely through the wall of the external casing of the periscope.

2. If it is necessary to drill screw holes completely through the wall of the external casing, the screws used in such holes are fitted with the utmost accuracy and, when practicable, are tinned and sweated in place. The threads of such screws engage only in threads in the wall of the external casing. However, this construction is avoided if possible. No holes are drilled through the main body tube or taper section.

3. Permanent joints which are not broken for overhaul, cleaning, or renewal of the optics or internal mechanism of the periscope are screwed joints. Before setting up, the screw threads are coated with a mixture of litharge and glycerin. Screwed joints are designed to provide an external shoulder about 0.20 inch in width. Such a shoulder requires a true and smooth finish. Gaskets for permanent joints are usually of soft annealed copper 1/32 inch thick. At the joint between the lower end of the main body tube and the eyepiece box, there is a triangular annular ridge on the shoulder 1/64 inch in height and approximately 1/16 inch in width at the base. The angles, including the apex, of this ridge are filleted. There is a corresponding triangular annular groove in the other face of the joint. In addition to the threaded part of the overlap of the permanent screwed joint between the main body tube and the taper section of the external casing, there is an unthreaded overlapping part. The latter part is located farther from the external seam of the joint than the threaded part, and the exterior surface of the inner overlapping part and the interior surface of the outer overlapping part are finish machined or bored to give the closest and tightest practicable fit.
When practicable, these surfaces are slightly conical. This part of the joint is tinned and sweated, or coated with litharge and glycerin.

4. Joints which must be broken for overhaul, cleaning, or renewal of the optics or internal mechanism of the periscope are either screwed joints provided with a shoulder that seats against a gasket, or are secured by flush, fillister head screws of a noncorrosive material. The width of the shoulder of such a joint is at least 3/16 inch. Rubber gaskets of suitable thickness and at least 3/16 inch in width are inserted in all such joints. A triangular annular ridge is provided on one face of each such joint, and a corresponding triangular annular groove is provided in the opposite face of the joint. The faces of each such joint have a smooth and true finish, and a ground or scraped fit is preferred. In Type I and Type II periscopes, an exception to the foregoing may be made for one such joint at the upper end of the taper section, in which the width of the shoulder and the gasket width may be less than 3/16 inch, and the faces of the joint may be normal to the axis instead of finished with triangular grooves. The use of any such joint is subject to the specific approval of the Bureau of Ships.

5. Cover plates and retaining rings of joints secured by screws are of such thickness and the screw spacing is sufficiently close to guard effectively against any possibility of lack of tightness of the joint caused by springing of the metal between securing screws. However, screwed cover plates and retaining rings are preferred to cover plates and retaining rings secured by screws, especially in the case of joints which must be broken for overhaul, cleaning, and removal of the optics and internal mechanism of the periscope.

6. In the case of each joint which must be broken for overhaul, cleaning, or renewal of the optics or internal mechanism of a periscope, provision as far as practicable is made to enable the joint to be broken without undue difficulty. To prevent seepage of water between the threads of screwed joints of this character, the hermetically tight part of the joint is, when practicable, external to the threaded part. Special provision is made to guard against freezing of the threads of a screwed joint, resulting from corrosion of the metal caused by the seepage of salt water between the threaded parts of the joint. To provide for the easy removal of screwed cover plates, a hexagonal base is provided when practicable. This base conforms to the size of a United States standard hexagonal nut.

7. Joints in the eyepiece box casting of a periscope for the passage of moving parts, such as the operating gear for the power shift, altiscope, or focusing mechanism, are made in the form of stuffing boxes. Only motion of revolution is transmitted through a joint in the external casing.

8. Packed joints in the external casing of a periscope are thoroughly worked in before making the internal 150-pound test that must be made after assembly of the instrument. No further adjustments of these stuffing boxes are made after the successful completion of this test.
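The 1A9 arithmetic above was done with log tables, and it can be checked directly with modern tools. The following small C sketch is our illustration, not part of the original manual; the slight differences from the quoted 95.58 and 76.37 inches come from log-table rounding.

#include <math.h>
#include <stdio.h>

/* Permissible inter-objective length, per section 1A9:
   length = (objective diameter / 2) / tan(theta), where theta is half
   the beam angle inside the tube (field angle / upper reduction / 2). */
static double permissible_length_in(double field_deg, double reduction,
                                    double objective_diam_in) {
    const double PI = 3.14159265358979;
    double beam_deg = field_deg / reduction;            /* narrowed field */
    double theta = (beam_deg / 2.0) * PI / 180.0;       /* radians */
    return (objective_diam_in / 2.0) / tan(theta);
}

int main(void) {
    double power = 100.0 / 5.0;  /* 100 mm objective / 5 mm exit pupil = 20x */
    /* Final magnification 1.2x -> upper reduction 20/1.2 = 16.67x */
    printf("1.2x: %.2f inches\n", permissible_length_in(40.0, power / 1.2, 4.0));
    /* Final magnification 1.5x -> upper reduction 20/1.5 = 13 1/3x */
    printf("1.5x: %.2f inches\n", permissible_length_in(40.0, power / 1.5, 4.0));
    return 0;
}

Multiplying either printed figure by three reproduces the total possible lengths quoted in section 1A9.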
In relativity, proper time is the elapsed time between two events as measured by a clock that passes through both events. The proper time depends not only on the events but also on the motion of the clock between the events. An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated (inertial) clock between the same two events. The twin paradox is an example of this effect.

In terms of four-dimensional spacetime, proper time is analogous to arc length in three-dimensional (Euclidean) space. By convention, proper time is usually represented by the Greek letter τ (tau) to distinguish it from coordinate time represented by t or T. By contrast, coordinate time is the time between two events as measured by a distant observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity.

The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime. From the mathematical point of view, coordinate time is assumed to be predefined and we require an expression for proper time as a function of coordinate time. From the experimental point of view, proper time is what is measured experimentally and then coordinate time is calculated from the proper time of some inertial clocks.

In special relativity, proper time can be defined as

\Delta\tau = \int \sqrt{dt^2 - \frac{dx^2 + dy^2 + dz^2}{c^2}}

If t, x, y, and z are all parameterised by a parameter λ, this can be written as

\Delta\tau = \int \sqrt{\left(\frac{dt}{d\lambda}\right)^2 - \frac{1}{c^2}\left[\left(\frac{dx}{d\lambda}\right)^2 + \left(\frac{dy}{d\lambda}\right)^2 + \left(\frac{dz}{d\lambda}\right)^2\right]}\, d\lambda

In differential form it can be written as the line integral

\Delta\tau = \int_P d\tau

where P is the path of the clock in spacetime. To make things even easier, inertial motion in special relativity is where the spatial coordinates change at a constant rate with respect to the temporal coordinate. This further simplifies the proper time equation to

\Delta\tau = \sqrt{(\Delta t)^2 - \frac{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}{c^2}}

where Δ means "the change in" between two events.

The special relativity equations are special cases of the general case that follows. Using tensor calculus, proper time is more rigorously defined in general relativity as follows: Given a spacetime which is a pseudo-Riemannian manifold mapped with a coordinate system x^μ and equipped with a corresponding metric tensor g_{μν}, the proper time experienced in moving between two events along a timelike path P is given by the line integral

\Delta\tau = \int_P \frac{1}{c}\sqrt{g_{\mu\nu}\, dx^\mu\, dx^\nu}

(Note: the Einstein summation convention is used in the above. The expression A_μ B^μ is shorthand for A_0 B^0 + A_1 B^1 + A_2 B^2 + A_3 B^3, and the μ in B^μ denotes an index, not a power.)

For any spacetime, there is an incremental invariant interval ds between events with an incremental coordinate separation dx^μ of

ds^2 = g_{\mu\nu}\, dx^\mu\, dx^\nu

This is referred to as the line element of the spacetime. s may be spacelike, lightlike, or timelike. Spacelike paths cannot be physically traveled (as they require moving faster than light). Lightlike paths can only be followed by light beams, for which there is no passage of proper time. Only timelike paths can be traveled by massive objects, in which case the invariant interval becomes the proper time. So for our purposes

c^2\, d\tau^2 = ds^2

Taking the square root of each side of the line element gives the above definition of dτ. After that, take the line integral of each side to get Δτ as described by the first equation.
In special relativity, the spacetime and mapping are described with the Minkowski metric, $\eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, -1)$ in coordinates $(ct, x, y, z)$, and the proper time equation becomes

$\Delta\tau = \sqrt{\Delta t^2 - \frac{\Delta x^2 + \Delta y^2 + \Delta z^2}{c^2}}$

For a twin "paradox" scenario, let there be an observer A who moves between the coordinates (0,0,0,0) and (10 years, 0, 0, 0) inertially. This means that A stays at $x = y = z = 0$ for 10 years of coordinate time. The proper time for A is then

$\Delta\tau_A = \sqrt{(10\ \text{years})^2 - 0} = 10\ \text{years}$

So we find that being "at rest" in a special relativity coordinate system means that proper time and coordinate time are the same. Let there now be another observer B who travels in the x direction from (0,0,0,0) for 5 years of coordinate time at 0.866c to (5 years, 4.33 light-years, 0, 0). Once there, B accelerates, and travels in the other spatial direction for another 5 years of coordinate time back to (10 years, 0, 0, 0). For each leg of the trip, the proper time is (working in years and light-years, so that c = 1)

$\Delta\tau_B = \sqrt{(5)^2 - (4.33)^2}\ \text{years} = \sqrt{25 - 18.75}\ \text{years} = 2.5\ \text{years}$

So the total proper time for observer B to go from (0,0,0,0) to (5 years, 4.33 light-years, 0, 0) and on to (10 years, 0, 0, 0) is 5 years. Thus it is shown that the proper time equation incorporates the time dilation effect. In fact, for an object in a SR spacetime traveling with a velocity v for a coordinate time ΔT, the proper time experienced is

$\Delta\tau = \Delta T\,\sqrt{1 - \frac{v^2}{c^2}}$

which is the SR time dilation formula.

An observer rotating around another inertial observer is in an accelerated frame of reference. For such an observer, the incremental (dτ) form of the proper time equation is needed, along with a parameterized description of the path being taken, as shown below. Let there be an observer C on a disk rotating in the xy plane at a coordinate angular rate of ω, at a distance of r from the center of the disk, with the center of the disk at x = y = z = 0. The path of observer C is given by $(t,\ r\cos(\omega t),\ r\sin(\omega t),\ 0)$, where t is the current coordinate time. When r and ω are constant, $dx = -r\omega\sin(\omega t)\,dt$ and $dy = r\omega\cos(\omega t)\,dt$. The incremental proper time formula then becomes

$d\tau = \sqrt{dt^2 - \frac{r^2\omega^2}{c^2}\,dt^2} = dt\,\sqrt{1 - \frac{r^2\omega^2}{c^2}}$

So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times $t_1$ and $t_2$, the proper time experienced will be

$\Delta\tau = (t_2 - t_1)\,\sqrt{1 - \frac{r^2\omega^2}{c^2}}$

As v = rω for a rotating observer, this result is as expected given the time dilation formula above, and shows the general application of the integral form of the proper time formula.
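The twin-paradox and rotating-observer arithmetic above is easy to check numerically. A minimal sketch, working in years and light-years so that c = 1 (the same convention as the examples):

```python
import math

def tau_inertial(dt, dx, dy=0.0, dz=0.0):
    """Proper time for an inertial leg, in units where c = 1."""
    return math.sqrt(dt**2 - dx**2 - dy**2 - dz**2)

print(tau_inertial(10.0, 0.0))   # observer A: 10.0 years
leg = tau_inertial(5.0, 4.33)    # one leg for observer B: ~2.5 years
print(2 * leg)                   # observer B, both legs: ~5.0 years

# Rotating observer: dtau/dt = sqrt(1 - (r*omega)^2), with r*omega = v < 1.
v = 0.866                        # rim speed as a fraction of c
print(math.sqrt(1 - v**2))       # ~0.5, matching the time dilation formula
```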
The difference between SR and general relativity (GR) is that in GR you can use any metric which is a solution of the Einstein field equations, not just the Minkowski metric. Because inertial motion in curved spacetimes lacks the simple expression it has in SR, the line integral form of the proper time equation must always be used. An appropriate coordinate conversion done against the Minkowski metric creates coordinates where an object on a rotating disk stays in the same spatial coordinate position. The new coordinates are

$r = \sqrt{x^2 + y^2} \qquad \text{and} \qquad \theta = \arctan\left(\frac{y}{x}\right) - \omega t$

The t and z coordinates remain unchanged. In this new coordinate system, the incremental proper time equation is

$d\tau = \sqrt{\left(1 - \frac{r^2\omega^2}{c^2}\right)dt^2 - \frac{2r^2\omega}{c^2}\,d\theta\,dt - \frac{dr^2 + r^2\,d\theta^2 + dz^2}{c^2}}$

With r, θ, and z being constant over time, this simplifies to

$d\tau = dt\,\sqrt{1 - \frac{r^2\omega^2}{c^2}}$

which is the same as in the rotating-observer example above. Now let there be an object off of the rotating disk and at inertial rest with respect to the center of the disk, at a distance of R from it. This object has a coordinate motion described by dθ = −ω dt, which describes the inertially at-rest object as counter-rotating in the view of the rotating observer. Now the proper time equation becomes

$d\tau = \sqrt{\left(1 - \frac{R^2\omega^2}{c^2}\right)dt^2 + \frac{2R^2\omega^2}{c^2}\,dt^2 - \frac{R^2\omega^2}{c^2}\,dt^2} = dt$

So for the inertial at-rest observer, coordinate time and proper time are once again found to pass at the same rate, as expected and required for the internal self-consistency of relativity theory (cf. Cook 2004).

The Schwarzschild solution has an incremental proper time equation of

$d\tau = \sqrt{\left(1 - \frac{2m}{r}\right)dt^2 - \frac{1}{c^2}\left[\frac{dr^2}{1 - \frac{2m}{r}} + r^2\left(d\phi^2 + \sin^2\!\phi\; d\theta^2\right)\right]}$

where
- t is time as calibrated with a clock distant from, and at inertial rest with respect to, the Earth,
- r is a radial coordinate (which is effectively the distance from the Earth's center),
- ɸ is a co-latitudinal coordinate, the angular separation from the north pole, in radians,
- θ is a longitudinal coordinate, analogous to the longitude on the Earth's surface but independent of the Earth's rotation, also given in radians,
- m is the geometrized mass of the Earth, m = GM/c²,
- M is the mass of the Earth,
- G is the gravitational constant.

To demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here. The use of the Schwarzschild solution for the Earth is not entirely correct for the following reasons:
- Due to its rotation and tidal deformation, the Earth is an oblate spheroid instead of being a true sphere. This results in the gravitational field also being oblate instead of spherical.
- In GR, a rotating object also drags spacetime along with itself. This is described by the Kerr solution. However, the amount of frame dragging that occurs for the Earth is so small that it can often be ignored.

For the Earth, M = 5.9742 × 10²⁴ kg, meaning that m = 4.4354 × 10⁻³ m. When standing on the north pole, we can assume $dr = d\theta = d\phi = 0$ (meaning that we are neither moving up nor down, nor along the surface of the Earth). In this case, the Schwarzschild solution proper time equation becomes

$d\tau = dt\,\sqrt{1 - \frac{2m}{r}}$

Then using the polar radius of the Earth as the radial coordinate (r = 6,356,752 meters), we find that

$d\tau = \sqrt{1 - 1.3955\times10^{-9}}\; dt \approx \left(1 - 6.98\times10^{-10}\right) dt$

At the equator, the radius of the Earth is r = 6,378,137 meters. In addition, the rotation of the Earth needs to be taken into account. This imparts on an observer an angular velocity ω of 2π divided by the sidereal period of the Earth's rotation, 86162.4 seconds, so ω = 7.2921 × 10⁻⁵ radians per second. The proper time equation then produces (with dɸ = dr = 0 and sin ɸ = 1)

$d\tau = \sqrt{1 - \frac{2m}{r} - \frac{r^2\omega^2}{c^2}}\; dt \approx \left(1 - 6.97\times10^{-10}\right) dt$

This should have been the same as the previous result, but as noted above the Earth is not spherical as assumed by the Schwarzschild solution. Even so, this demonstrates how the proper time equation is used.

- Lorentz transformation
- Minkowski space
- Proper length
- Proper acceleration
- Proper mass
- Proper velocity
- Clock hypothesis

- Minkowski, Hermann (1908), "Die Grundgleichungen für die elektromagnetischen Vorgänge in bewegten Körpern", Nachrichten von der Königlichen Gesellschaft der Wissenschaften und der Georg-August-Universität zu Göttingen (Göttingen): 53–111.
- cf. R. J. Cook (2004), "Physical time and physical space in general relativity", Am. J. Phys. 72:214–219.
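The two Schwarzschild sub-examples above can be reproduced with a few lines of arithmetic. This sketch uses the constants quoted in the article; the printed values are the fractional rate offsets 1 − dτ/dt, and any small differences from the text come only from rounding.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0      # speed of light, m/s
M = 5.9742e24          # mass of the Earth, kg
m = G * M / c**2       # geometrized mass, ~4.435e-3 m

def rate_at_pole(r):
    """dtau/dt for a clock at rest at radius r (dr = dtheta = dphi = 0)."""
    return math.sqrt(1.0 - 2.0 * m / r)

def rate_at_equator(r, T_sidereal=86162.4):
    """dtau/dt for a clock riding the rotating equator."""
    omega = 2.0 * math.pi / T_sidereal
    return math.sqrt(1.0 - 2.0 * m / r - (r * omega / c) ** 2)

print(1.0 - rate_at_pole(6_356_752.0))     # ~6.98e-10 (polar radius)
print(1.0 - rate_at_equator(6_378_137.0))  # ~6.97e-10 (equatorial radius)
```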
http://www.fuhz.com/Proper_time
A siphon is a continuous tube that allows liquid to drain from a reservoir through an intermediate point that is higher than the reservoir. Once started, a siphon requires no additional energy to keep the liquid flowing up and out of the reservoir. The siphon works because the ultimate drain point is lower than the reservoir, and the flow of liquid out of the drain point creates reduced pressure (a partial vacuum) in the tube such that liquid is drawn up out of the reservoir. The maximum height of the intermediate point (the crest) is limited by atmospheric pressure and the density of the liquid.

At the high point of the siphon, gravity tends to draw the liquid down in both directions, tending to create a vacuum. Atmospheric pressure on the top surface of the higher reservoir is transmitted through the liquid in the reservoir and up the siphon tube, and prevents a vacuum from forming. When the pressure exerted by the weight of the column of liquid equals atmospheric pressure, a vacuum will form at the high point and the siphon effect ends. For water at standard pressure, the maximum height is approximately 33 feet (10 m); for mercury it is 30 inches (76 cm).

An analogy to understand siphons is to imagine a long, frictionless train extending from a plain, up a hill and then down the hill into a valley below the plain. So long as part of the train extends into the valley below the plain, it is "intuitively obvious" that the portion of the train sliding into the valley can pull the rest of the train up the hill and into the valley. What is not obvious is what holds the train together when the train is a liquid in a tube. In this analogy, atmospheric pressure holds the train together. Once the force of gravity on the couplings between the cars of the train going up the hill exceeds that of atmospheric pressure, the coupling breaks and the train falls apart. The train analogy is demonstrated in a "siphon-chain model," where a long chain on a pulley flows between two beakers.

A plain tube can be used as a siphon. An external pump has to be applied to start the liquid flowing and prime the siphon. This can be a human mouth and lungs. This is sometimes done with any leak-free hose to siphon gasoline from a motor vehicle's gasoline tank to an external tank. If the tube is flooded with liquid before part of the tube is raised over the intermediate high point, and care is taken to keep the tube flooded while it is being raised, no pump is required. Devices sold as siphons come with a siphon pump to start the siphon process.

Large siphons may be used in municipal waterworks and industry. Their size requires control via valves at the intake, outlet, and crest of the siphon. The siphon may be primed by closing the intake and outlets and filling the siphon at the crest. If intakes and outlets are submerged, a vacuum pump may be applied at the crest to prime the siphon. Alternatively, the siphon may be primed by a pump at either the intake or outlet. Gas in the liquid is a concern in large siphons. The gas tends to accumulate at the crest, and if enough accumulates to break the flow of liquid, the siphon stops working. The siphon itself will exacerbate the problem, because as the liquid is raised through the siphon, the pressure drops, causing dissolved gases within the liquid to be "degassed." Higher temperature accelerates the release of gas from liquids, so maintaining a constant, low temperature helps.
The longer the liquid is in the siphon, the more gas is released, so a shorter siphon overall helps. Local high points will trap gas so the intake and outlet legs should have continuous slopes without intermediate high points. The flow of the liquid moves bubbles thus the intake leg can have a shallow slope as the flow will push the gas bubbles to the crest. Conversely, the outlet leg needs to have a steep slope to allow the bubbles to move against the liquid flow. At the crest the gas can be trapped in a chamber above the crest. The chamber needs to be occasionally primed again with liquid to remove the gas. Among some physicists there is some dispute as to what causes the siphon to lift liquid from the upper reservoir to the crest of the siphon. They argue that theoretically, internal molecular cohesion is sufficient to pull the liquid up the intake leg of the siphon to the crest. Furthermore, some argue that theoretically a siphon will operate in a vacuum. In practice atmospheric pressure is required. The term self-siphon is used in a number of ways. Liquids that are composed of long polymers can "self-siphon" and these liquids do not depend on atmospheric pressure. Self-siphoning polymer liquids work the same as the siphon-chain model where the lower part of the chain pulls the rest of the chain up and over the crest. This phenomenon is also called a tubeless siphon. "Self-siphon" is also often used in sales literature by siphon manufacturers to describe portable siphons that contain a pump. With the pump, no external suction (e.g. from a person's mouth/lungs) is required to start the siphon and thus the product is inaccurately described as a "self-siphon". If the upper reservoir is such that the liquid there can rise above the height of the siphon crest, the rising liquid in the reservoir can "self-prime" the siphon and the whole apparatus be described as a "self-siphon". Once primed, such a siphon will continue to operate until the level of the upper reservoir falls below the intake of the siphon. Such self-priming siphons are useful in some rain gauges and dams. The siphon was first used as a weapon by the Byzantine Navy, and the most common method of deployment was to emit Greek fire, a formula of burning oil, through a large bronze tube onto enemy ships. Usually the mixture would be stored in heated, pressurized barrels and projected through the tube by some sort of pump while the operators were sheltered behind large iron shields. Bowl siphons are part of flush toilets. Siphon action in the bowl siphon siphons out the contents of the toilet bowl and creates the characteristic toilet "sucking" sound. Some toilets also use a siphon for the actual flush from the storage tank. An inverted siphon is not a siphon but a term applied to pipes that must dip below an obstruction to form a "U" shaped flow path. At no point does the siphon effect come into play; an inverted siphon will work fine in the absence of atmospheric pressure. Liquid flowing in one end simply forces liquid up and out the other end. Engineers must ensure that the flow rate in such a channel is fast enough to keep suspended solids from settling. Otherwise, the inverted siphon tends to act as a debris trap. This is especially important in sewage systems which must be routed under rivers or other deep obstructions. Back siphonage is a plumbing term applied to clean water pipes that connect directly into a reservoir without an air gap. 
As water is delivered to other areas of the plumbing system at a lower level, the siphon effect will tend to siphon water back out of the reservoir. This may result in contamination of the water in the pipes. Back siphonage is not to be confused with backflow. Back siphonage is a result of liquids at a lower level drawing water from a higher level. Backflow is driven entirely by pressure in the reservoir itself. Backflow cannot occur through an intermediate high-point; back siphonage can flow through an intermediate high-point and is thus much more difficult to guard against. Anti-siphon valves are required in such designs. Building codes often contain specific sections on back siphonage, especially for external faucets (see Exhibit A). The reason is that external faucets may be attached to hoses which may be immersed in an external body of water, such as a swimming pool, aquarium, or washing machine. Should the pressure within the water supply system fall, the external water may be siphoned back into the drinking water system through the faucet. Another possible contamination point is the water intake in the toilet tank. An anti-siphon valve is also required here to prevent pressure drops in the water supply line from siphoning water out of the toilet tank (which may contain additives such as "toilet blue") and contaminating the water system.

Anti-siphon valves are also used medically. Hydrocephalus, or excess fluid in the brain, may be treated with a shunt which drains cerebrospinal fluid from the brain. All shunts have a valve to relieve excess pressure in the brain. The shunt may lead into the abdominal cavity such that the shunt outlet is significantly lower than the shunt intake when the patient is standing. Thus a siphon effect may take place, and instead of simply relieving excess pressure, the shunt may act as a siphon, completely draining cerebrospinal fluid from the brain. The valve in the shunt may be designed to prevent this siphon action, so that negative pressure on the drain of the shunt does not result in excess drainage. Only excess positive pressure from within the brain should result in drainage. Note that the anti-siphon valve in medical shunts is preventing excess forward flow of liquid; in plumbing systems, the anti-siphon valve is preventing backflow.

A siphon barometer is the term sometimes applied to the simplest of mercury barometers. A continuous U-shaped tube of the same diameter throughout is sealed on one end and filled with mercury. When placed into the upright position, mercury will flow away from the sealed end, forming a vacuum, until balanced by atmospheric pressure on the other end. The term "siphon" is used because the same principle of atmospheric pressure acting on a fluid is applied. The difference in height of the fluid between the two arms of the U-shaped tube is the same as the maximum intermediate height of a siphon. When used to measure pressures other than atmospheric pressure, a siphon barometer is sometimes called a siphon gauge, not to be confused with a siphon rain gauge. Siphon pressure gauges are rarely used today.

A siphon bottle is a pressurized bottle with a vent and a valve. Pressure within the bottle drives the liquid up and out a tube. It is a siphon in the sense that pressure drives the liquid through a tube. A siphon bottle is sometimes called a gasogene or, even more rarely, a siphoid. A siphon cup is the (hanging) reservoir of paint attached to a spray gun. This is to distinguish it from gravity-fed reservoirs.
An archaic use of the term is a cup of oil in which the oil is siphoned out of the cup via a cotton wick or tube to a surface to be lubricated. A siphon rain gauge is a rain gauge that can record rainfall over an extended period. A siphon is used to automatically empty the gauge. It is often simply called a "siphon gauge" and is not to be confused with a siphon pressure gauge. Heron's siphon is a siphon that works on positive air pressure and at first glance appears to be a perpetual motion machine. In a slightly different configuration, it is also known as Hero's fountain.

Biologists debate whether the siphon mechanism plays a role in blood circulation. It is theorized that veins form a continuous loop with arteries, such that blood flowing down the veins helps siphon blood up the arteries, especially in giraffes and snakes. Some have concluded that the siphon mechanism aids blood circulation in giraffes. Many others dispute this, and experiments show no siphon effects in human circulation. The term "siphon" is also used for a number of biological objects, either because flowing liquids are involved or because the object is shaped like a siphon. In all cases, no actual siphon effect is occurring.
- Mollusks have an organ called a siphon which sucks water in and out for the purpose of filter-feeding.
- Mosquito larvae and other insect larvae (e.g., Tabanidae, Belostomatidae) live in the water and breathe through a so-called siphon, which is functionally a snorkel.
- Some adult insects which spend considerable time underwater, such as the water scorpion, have an abdominal breathing tube that is also called a siphon.
- A siphon gourd has a long curved neck, shaped like a siphon.
- A portion of the human internal carotid artery running through the cavernous sinus is called the carotid siphon because of its shape.

Bernoulli's equation,

$\frac{v^2}{2} + gy + \frac{P}{\rho} = \mathrm{constant}$

may be applied to a siphon to derive the flow rate and maximum height of the siphon.
- Let the surface of the upper reservoir be the reference elevation.
- Let point A be the start point of the siphon, immersed within the higher reservoir and at a depth −d below the surface of the upper reservoir.
- Let point B be the intermediate high point on the siphon tube, at height +hB above the surface of the upper reservoir.
- Let point C be the drain point of the siphon, at height −hC below the surface of the upper reservoir.
- v = fluid velocity along the streamline
- g = gravitational acceleration downwards
- y = elevation in the gravity field
- P = pressure along the streamline
- ρ = fluid density

Apply Bernoulli's equation to the surface of the upper reservoir. The surface is technically falling as the upper reservoir is being drained; however, for this example we may assume the reservoir to be infinite, and the velocity of the surface may be set to zero. Furthermore, the pressure at the surface is atmospheric pressure. Thus:

$0 + 0 + \frac{P_{\mathrm{atm}}}{\rho} = \mathrm{constant}$  (Equation 1)

Apply Bernoulli's equation to point A at the start of the siphon tube in the upper reservoir, where P = PA, v = vA and y = −d:

$\frac{v_A^2}{2} - gd + \frac{P_A}{\rho} = \mathrm{constant}$  (Equation 2)

Apply Bernoulli's equation to point B at the intermediate high point of the siphon tube, where P = PB, v = vB and y = hB:

$\frac{v_B^2}{2} + g h_B + \frac{P_B}{\rho} = \mathrm{constant}$  (Equation 3)

Apply Bernoulli's equation to point C, where the siphon empties, with v = vC and y = −hC. Furthermore, the pressure at the exit point is atmospheric pressure. Thus:

$\frac{v_C^2}{2} - g h_C + \frac{P_{\mathrm{atm}}}{\rho} = \mathrm{constant}$  (Equation 4)

As the siphon is a single system, the constant in all four equations is the same.
Setting equations 1 and 4 equal to each other gives:

$\frac{P_{\mathrm{atm}}}{\rho} = \frac{v_C^2}{2} - g h_C + \frac{P_{\mathrm{atm}}}{\rho}$

Solving for vC:
- Velocity of siphon: $v_C = \sqrt{2 g h_C}$

The velocity of the siphon is thus driven solely by the height difference between the surface of the upper reservoir and the drain point. The height of the intermediate high point, hB, does not affect the velocity of the siphon. However, as the siphon is a single system, vB = vC, and the intermediate high point does limit the maximum velocity. The drain point cannot be lowered indefinitely to increase the velocity. Equation 3 will limit the velocity to a positive pressure at the intermediate high point to prevent cavitation. The maximum velocity may be calculated by combining equations 1 and 3:

$\frac{P_{\mathrm{atm}}}{\rho} = \frac{v_B^2}{2} + g h_B + \frac{P_B}{\rho}$

Setting PB = 0 and solving for vmax:
- Maximum velocity of siphon: $v_{\max} = \sqrt{2\left(\frac{P_{\mathrm{atm}}}{\rho} - g h_B\right)}$

The depth, −d, of the initial entry point of the siphon in the upper reservoir does not affect the velocity of the siphon. No limit to the depth of the siphon start point is implied by equation 2, as pressure PA increases with depth d. Both these facts imply that the operator of the siphon may bottom skim or top skim the upper reservoir without impacting the siphon's performance. Note that this equation for the velocity is the same as that of any object falling through height hC. Note also that this equation assumes PC is atmospheric pressure. If the end of the siphon is below the surface, the height to the end of the siphon cannot be used; rather, the height difference between the reservoirs should be used.

Setting equations 1 and 3 equal to each other gives:

$\frac{P_{\mathrm{atm}}}{\rho} = \frac{v_B^2}{2} + g h_B + \frac{P_B}{\rho}$

The maximum height of the intermediate high point occurs when it is so high that the pressure there is zero. Setting PB = 0 and solving for hB:
- General height of siphon: $h_B = \frac{P_{\mathrm{atm}}}{\rho g} - \frac{v_B^2}{2g}$

This means that the height of the intermediate high point is limited by the velocity of the siphon. Faster siphons result in lower heights. Height is maximized when the siphon is very slow and vB = 0:
- Maximum height of siphon: $h_{B,\max} = \frac{P_{\mathrm{atm}}}{\rho g}$

This is the maximum height at which a siphon will work. It is simply the height at which the weight of the column of liquid to the intermediate high point equals atmospheric pressure. Substituting values gives approximately 10 metres for water and 0.76 metres for mercury.

Exhibit A: Sample building code regulations regarding back siphonage
- Back Siphonage
- (1) Every potable water system that supplies a fixture or tank that is not subject to pressures above atmospheric shall be protected against back-siphonage by a backflow preventer.
- (2) Where a potable water supply is connected to a boiler, tank, cooling jacket, lawn sprinkler system or other device where a non-potable fluid may be under pressure that is above atmospheric or the water outlet may be submerged in the non-potable fluid, the water supply shall be protected against backflow by a backflow preventer.
- (3) Where a hose bibb is installed outside a building, inside a garage, or where there is an identifiable risk of contamination, the potable water system shall be protected against backflow by a backflow preventer.

See Sewer for details of an accident involving a siphon.
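A numeric check of the Bernoulli results derived above, with illustrative values assumed for the reservoir geometry:

```python
import math

P_ATM, RHO, g = 101_325.0, 1000.0, 9.81   # water at standard conditions

h_C = 1.5                                  # drain 1.5 m below the surface
v_C = math.sqrt(2 * g * h_C)               # velocity of siphon
print(f"v_C   = {v_C:.2f} m/s")            # ~5.42 m/s

h_B = 5.0                                  # crest 5 m above the surface
v_max = math.sqrt(2 * (P_ATM / RHO - g * h_B))   # P_B = 0 (cavitation) limit
print(f"v_max = {v_max:.2f} m/s")          # ~10.22 m/s

h_max = P_ATM / (RHO * g)                  # maximum crest height, v_B = 0
print(f"h_max = {h_max:.2f} m")            # ~10.33 m for water

rho_hg = 13_534.0                          # mercury density, kg/m^3
print(f"h_max(Hg) = {P_ATM / (rho_hg * g):.2f} m")   # ~0.76 m
```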
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Siphon
The equivalence principle, an extension of the mass-independent acceleration of Newtonian gravity, provides general relativity with its fundamental effects on light propagation. In addition to the effects associated with the equivalence principle, general relativity adds three more features not found in Newtonian gravity: under the principle that gravity appears as the curvature of space-time, the radius-volume and radius-circumference relationships of Euclidean geometry no longer hold in a gravitational field; gravitational fields in general relativity can propagate as waves moving at the speed of light; and the static solution of general relativity is a black hole, meaning that it has a static event horizon. Usually these traits of general relativity are invisible in astrophysics; for most problems, Newtonian gravity is entirely adequate. There are only six cases in astrophysics where the effects of general relativity appear. Three of these are generally weak effects that are observable within the Solar System: the drift of a planet's perihelion, the gravitational bending and focusing of light, and the gravitational Doppler shift of light. The remaining cases are on the edge of detection: gravitational waves, black holes, and the curvature of space-time within our expanding universe.

In special relativity, an observer accelerating at a constant rate sees light falling to him as blue-shifted in frequency, and he sees light rising to him as red-shifted in frequency. The equivalence principle says that acceleration in a gravitational field produces the same effect, so an observer standing on Earth should see light falling from above as blue-shifted. This test was first performed in 1960, and the results are as expected under the equivalence principle. In astronomy, the gravitational redshift is a mild effect that is usually lost in the Doppler shift from the motion of the gas emitting the light and from the thermal motion of the electrons and ions producing the light. The gravitational redshift has been seen in some white dwarfs, although the effect is a shift of less than one part in 10^4 in the frequency of the spectrum. The redshift expected for radiation from neutron stars is larger, about 15% in frequency. For neutron stars in binary systems, this redshift is mixed with the redshift from orbital and free-fall motion, but for isolated neutron stars, the gravitational redshift should be untainted. So far, lines have not been seen from the surface emission of isolated neutron stars, but the hope persists.

An early test of general relativity was the bending of light rays by the Sun. From the equivalence principle we know that light must travel on a curved path near a gravitating body, but the magnitude of the effect depends in part on the curvature of space around that body. The Sun is massive enough to produce an observable bending of starlight; under general relativity, starlight passing the limb of the Sun is deflected by 1.75". Astronomers sought to see this effect by measuring the positions of stars around the Sun during a solar eclipse. This effect was measured by several teams of astronomers during the May 29, 1919 eclipse, although the accuracy of that experiment is now questioned. Today the same measurement can be made by tracking the path followed by radio waves reflected off of Venus or emitted from a spacecraft on the far side of the Sun.
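The 1.75" figure quoted above follows from the standard general-relativistic deflection formula α = 4GM/(c²R) for a ray grazing the solar limb. A quick check, with solar constants assumed:

```python
import math

G, c = 6.674e-11, 299_792_458.0
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (grazing impact parameter)

alpha = 4 * G * M_sun / (c**2 * R_sun)       # deflection angle, radians
print(f'{math.degrees(alpha) * 3600:.2f}"')  # ~1.75 arcseconds
```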
The weak gravitational fields of stars, planets, and galaxies bend light by very small angles. Under most circumstances, we cannot see this effect, but for very distant objects, where this small angle is comparable to the angles separating objects on the sky, we see this effect with regularity. The two instances where the gravitational bending of light is important are the appearance of the most distant galaxies, which can be affected by galaxies and clusters of galaxies between us and them, and the appearance of stars in nearby galaxies, which can be affected by intervening brown dwarf stars in our own galaxy. If a cluster of galaxies lies between us and a high-redshift galaxy, that cluster will act as a lens, splitting the image of the distant galaxy into multiple images. The number of images depends on how the mass in the cluster is distributed. For example, a spherically-symmetric galaxy cluster can produce three images if the cluster is slightly off the line running from the lensed galaxy to us. One image will come from the light that passes almost undeflected through the center of the cluster, a second image will come from light deflected towards us as it passes to the right of the cluster, and a third image will come from light deflected towards us as it passes to the left of the cluster. Many multiple-image galaxies are known. By studying these galaxies, we can gather information about the structure of the intervening galaxy cluster. Dwarf stars within our own galaxy can also cause multiple images, but the images are too close to the lensing star to separate with a telescope. But gravity also can cause the images to appear more luminous. A distant star can therefore appear to grow luminous as a dim intervening star passes between it and ourselves. This effect has been observed by several groups, and it is being used to estimate the density of dim stars in our galaxy.

A second early test of general relativity is the influence of the curvature of space on the orbit of a planet. The curvature of space by gravity causes a breakdown in the relationship between area and circumference. This effect changes the orbit of a planet from the closed ellipse of Newtonian gravity to an orbit that does not close on itself. In effect, in the time it takes a planet to travel 360° around the Sun, it does not complete the full excursion from aphelion to perihelion and back. The strength of this effect depends on distance from the Sun; the closer a planet is to the Sun, the stronger the perihelion drift. Mercury, the planet deepest in the Sun's gravitational potential, exhibits this effect most strongly. With every orbit around the Sun, Mercury's perihelion shifts by 0.1038". This effect is so strong that it was observed before general relativity was developed. Venus and Earth also have perihelion shifts that are strong enough to observe, predicted at 0.058" and 0.038" per orbit, as does the asteroid Icarus, with a perihelion shift of 0.115" in general relativity. All of these perihelion shifts are observed.
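The quoted perihelion shifts can be reproduced from the standard general-relativistic formula δ = 6πGM/(c²a(1−e²)) per orbit. A sketch for Mercury, with its orbital elements assumed:

```python
import math

G, c = 6.674e-11, 299_792_458.0
M_sun = 1.989e30
a = 5.791e10        # Mercury's semi-major axis, m
e = 0.2056          # Mercury's orbital eccentricity

delta = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
print(f'{math.degrees(delta) * 3600:.4f}"')
# ~0.1035" per orbit, in line with the 0.1038" quoted above
```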
When a gravitating body is accelerated in general relativity, its gravitational field must change throughout space. But the change to the field can only propagate at the speed of light. This is a property that general relativity shares with electromagnetism, and as with the electromagnetic field, the change in the gravitational field propagates outward across space as a wave. There is one place in this universe where we clearly see the radiation of gravitational waves: compact binary star systems. As the two stars orbit each other, the binary system emits gravitational waves that carry energy away from the system. Over time, the energy loss causes the stars to spiral closer together. While not the only mechanism for losing energy, gravitational radiation is the dominant mechanism in very compact systems containing two neutron stars. In binary pulsar systems, this energy loss has been measured and shown to be consistent with the losses expected from gravitational radiation. The other side of the gravitational wave problem, the detection of gravitational waves at Earth, has yet to be accomplished. Several machines of various designs are currently attempting to detect these waves, and several new machines of greater sensitivity are under development. (Continue on the Gravitational Waves survey path.)

The event horizon is not a consequence of general relativity, but of special relativity. If I accelerate at a constant rate in a rocket, an event horizon forms behind me. The event horizon is simply an abstract boundary that separates light that can reach me from light that cannot; no special physics occurs there. Inevitably in any theory of gravity, we must have a static, spherically-symmetric gravitational field. After all, that is what we have here on Earth. For an observer sitting at a fixed point in such a field, where his length is small enough to make the tidal force negligible, the acceleration he experiences is identical to the constant acceleration he would experience in special relativity. But in special relativity, an observer has an event horizon below him; must this event horizon also exist in general relativity? If the distance to the event horizon is much shorter than the distance to the center of the gravitational field, so that the tidal force is negligible over the distance to the horizon, there must be a static event horizon. This occurs in general relativity, giving the theory a static solution that is a black hole.

An event horizon implies a second feature not found in Newtonian gravity: a radius of last stable orbit. In special relativity, when an observer is accelerating at a constant rate, the light he emits inevitably falls to the event horizon. In particular, if he shines light parallel to the event horizon, it will bend and fall onto the event horizon. But far from the event horizon, where the gravitational field resembles the Newtonian gravitational field, light will travel in a nearly straight line. These two limits imply that at some radius, light emitted parallel to the event horizon bends just enough as it travels to keep a constant distance from the center of the black hole. Such a radius exists around the black hole of general relativity, and it is called the last stable orbit; at larger radii, objects with mass can orbit the black hole, but inside this radius, light and objects with mass fall to the event horizon. The most striking feature of a black hole is the infinite number of images it creates of all the objects surrounding it. The reason for this is that light from a point in space can orbit the black hole many times before escaping to the observer. Depending on the angle of emission, light from an object can orbit the black hole once before escaping, it can orbit twice before escaping, or it can orbit a dozen times before escaping. Each photon path will appear as a distinct image to the observer.
The images created after many orbits are quite dim, so most of the light that reaches the observer comes from only several images. Black holes are the ultimate end in general relativity for the most massive stars. Black hole candidates are found in compact stellar binaries, where they are several times the mass of the sun, and on a much larger scale at the centers of galaxies, where they can be a billion times the mass of the sun. These objects are massive and compact, and assuming that general relativity is correct, they must be black holes. But are they black holes? The difficulty in proving the existence of black holes is in developing a diagnostic for the event horizon. The problem is that nothing happens physically at the event horizon. The only physics that is unique to the black hole is at the last stable orbit and in the creation of an infinite number of images. The last stable orbit has an orbital period associated with it that is dependent on the mass of the black hole. The multiple images created when light is bent by the black hole are more a complication that must be accounted for than a diagnostic.

Over short distances within our expanding universe, we do not see the effects of general relativity. We can describe the expansion of the universe near our galaxy with Newtonian gravity alone. As we look farther away from our galaxy, however, we begin to see the curvature of space-time caused by the mass within the universe. At its most extreme, the gravity provided by the matter in the universe can cause the universe to stop expanding and then collapse upon itself. The curvature of space-time in such a universe is so severe that the volume of space in the universe is finite. At the opposite extreme, a universe with virtually no mass, the universe expands forever. In such a universe, there is no curvature of space-time, so the universe expands to infinity, and the volume within a fixed radius is set by Euclidean geometry. Between these two extremes is the universe with just enough mass to slow the expansion of the universe for all time without causing collapse. The galaxies moving outward from us in such a universe are similar to a spacecraft leaving Earth at precisely the escape velocity: as the galaxy moves to infinity, its velocity away from us goes to zero. This universe with the closure density does not have a volume at a fixed radius that obeys Euclidean geometry.

The curvature of space-time within our universe can be observed if we are able to see far enough out, where the gravity of the universe is sufficiently strong to decelerate the galaxies. This curvature of space is seen in two ways: in the number of galaxies out at high redshift, which is a proxy measurement of the volume versus radius, and in the apparent diameter of a galaxy at a given redshift, which is a proxy measurement of circumference versus radius. These effects are difficult to separate from other characteristics displayed by galaxies, such as the changes in their character with the age of the universe, so they are more effects that must be included in an analysis of galaxies than diagnostics of general relativity.
http://www.astrophysicsspectator.com/topics/generalrelativity/Astrophysics.html
Pulse-Doppler is a 4D radar system capable of detecting both a target's 3D location and its radial velocity (range-rate). The radar transmits short pulses of radio frequency which are partially bounced back by airborne objects or spacecraft. In a typical operation, the energy returned from a dozen or more pulses is combined using pulse-Doppler signal processing, based on the Doppler effect, to extract the information. The use of short pulses instead of a continuous wave avoids the risk of overloading computers and operators, as well as reducing power consumption. In particular, it reduces the microwave power emission and the weight of the radar enough to make it safe and effective for use on aircraft. Pulse-Doppler radar has fundamental characteristics that differentiate it from conventional pulse radar and from continuous-wave Doppler radar, which make it ideal for several applications: improved detection in high-clutter environments, greater track reliability using feedback, passive vehicle-type classification, and unattended operation. In meteorological radars, pulse-Doppler measures the instantaneous speed of precipitation at discrete range intervals as the beam is slewed across the sky. Pulse-Doppler radar is also the basis of the synthetic aperture radar used in radar astronomy, remote sensing, and mapping. In air traffic control, it is used for discriminating aircraft from clutter. This type of radar is crucial for military applications such as airspace surveillance, targeting, and look-down/shoot-down, which allows small fast-moving objects to be detected near terrain and weather. It permits detection of targets while eliminating hostile environmental influences that hide reflected signals from aircraft but move much more slowly: reflections from weather, the surface of the earth, and biological objects like birds, as well as electronic interference. A secondary advantage in military radars is reduced transmit power while achieving acceptable performance, for improved safety of stealthy radar. Besides the conventional surveillance applications above, pulse-Doppler radar has been successfully applied in healthcare, such as fall-risk assessment and fall detection, for nursing or clinical purposes.

The earliest radar systems failed to operate as expected. The reason was traced to Doppler effects that degrade the performance of systems not designed to account for moving objects. Fast-moving objects cause a phase shift on the transmit pulse that can produce signal cancellation. Doppler has its maximum detrimental effect on moving target indicator systems, which must use reverse phase shift for Doppler compensation in the detector. Doppler weather effects (precipitation) were also found to degrade conventional radar and moving target indicator radar, which can mask aircraft reflections. This phenomenon was adapted for use with weather radar in the 1950s after declassification of some World War II systems. Pulse-Doppler radar was developed during World War II to overcome limitations by increasing the pulse repetition frequency. This required the development of the klystron, the traveling wave tube, and solid-state devices. Pulse-Doppler is incompatible with high-power microwave amplification devices that are not coherent. Early examples of military systems include the AN/SPG-51B, developed during the 1950s specifically for the purpose of operating in hurricane conditions with no performance degradation. It became possible to use pulse-Doppler radar on aircraft after digital computers were incorporated in the design.
Pulse-Doppler provided look-down/shoot-down capability to support air-to-air missile systems in most modern military aircraft by the mid 1970s.

Pulse-Doppler radar is based on the Doppler effect, where movement in range produces a frequency shift on the signal reflected from the target. Radial velocity is essential for pulse-Doppler radar operation. As the reflector moves between each transmit pulse, the returned signal has a phase difference, or phase shift, from pulse to pulse. This causes the reflector to produce Doppler modulation on the reflected signal. Pulse-Doppler radars exploit this phenomenon to improve performance. The amplitude of the successively returning pulses from the same scanned volume is

$I_n = I_0 \sin\!\left(\frac{4\pi\,(x_0 + v\,nT)}{\lambda}\right) = I_0 \sin\!\left(\Theta_0 + n\,\Delta\Theta\right)$

where $\Delta\Theta = \frac{4\pi v T}{\lambda}$ is the pulse-to-pulse phase shift induced by range motion, v is the radial velocity, T is the pulse repetition interval, and λ is the transmit wavelength. This allows the radar to separate the reflections from multiple objects located in the same volume of space, because objects with different radial velocities produce distinct Doppler modulations that can be segregated in the manner of spread-spectrum signals.

Rejection speed is selectable on pulse-Doppler aircraft-detection systems, so nothing below that speed will be detected. A one-degree antenna beam illuminates millions of square feet of terrain at 10 miles (16 km) range, and this produces thousands of detections at or below the horizon if Doppler is not used. Pulse-Doppler radar uses the following signal processing criteria to exclude unwanted signals from slow-moving objects. This is also known as clutter rejection. Rejection velocity is usually set just above the prevailing wind speed (10 to 100 miles per hour, or roughly 15 to 160 km/h). The velocity threshold is much lower for weather radar. In airborne pulse-Doppler radar, the velocity threshold is offset by the speed of the aircraft relative to the ground:

$v_{\text{offset}} = v_{\text{aircraft}} \cos\Theta$

where Θ is the angle offset between the antenna position and the aircraft flight trajectory.

Surface reflections appear in almost all radar. Ground clutter generally appears in a circular region within a radius of about 25 miles near ground-based radar. This distance extends much further in airborne and space radar. Clutter results from radio energy being reflected from the earth's surface, buildings, and vegetation. Clutter includes weather in radar intended to detect and report aircraft and spacecraft. Clutter creates a vulnerability region in pulse-amplitude time-domain radar. Non-Doppler radar systems cannot be pointed directly at the ground due to excessive false alarms, which overwhelm computers and operators. Sensitivity must be reduced near clutter to avoid overload. This vulnerability begins in the low-elevation region several beam widths above the horizon, and extends downward. It also exists throughout the volume of moving air associated with weather phenomena. Pulse-Doppler radar corrects this as follows.
- It allows the radar antenna to be pointed directly at the ground without overwhelming the computer and without reducing sensitivity.
- It fills in the vulnerability region associated with pulse-amplitude time-domain radar for small-object detection near terrain and weather.
- It increases detection range by 300% or more in comparison to moving target indication (MTI) by improving sub-clutter visibility.

Clutter rejection capability of about 60dB is needed for look-down/shoot-down capability, and pulse-Doppler is the only strategy that can satisfy this requirement. This eliminates vulnerabilities associated with the low-elevation and below-horizon environment. Pulse compression and moving target indicator (MTI) provide up to 25dB sub-clutter visibility.
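To get a feel for the numbers in the phase-shift relation above, which is what the clutter rejection filter ultimately operates on, the sketch below evaluates ΔΘ, the Doppler frequency, and the first blind speed for an assumed X-band radar; all parameter values are illustrative, not from the article.

```python
import math

c = 3.0e8
f_tx = 10.0e9                 # transmit frequency, Hz (X-band, assumed)
lam = c / f_tx                # wavelength, 0.03 m
PRF = 10_000.0                # pulses per second (assumed medium PRF)
T = 1.0 / PRF                 # pulse repetition interval

v = 60.0                      # target radial velocity, m/s
delta_theta = 4 * math.pi * v * T / lam   # phase shift per pulse, radians
f_doppler = 2 * v / lam                   # Doppler shift, Hz
v_blind = lam * PRF / 2                   # first blind speed
print(delta_theta, f_doppler, v_blind)    # ~2.51 rad, 4 kHz, 150 m/s
```

At 60 m/s the 4 kHz Doppler tone sits below the PRF/2 sampling limit of 5 kHz, so it is unambiguous; at the blind speed of 150 m/s the pulse-to-pulse phase shift is a full 2π, and the target becomes indistinguishable from stationary clutter, which is the scalloping problem discussed later.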
The MTI antenna beam is aimed above the horizon to avoid an excessive false alarm rate, which renders systems vulnerable. Aircraft and some missiles exploit this weakness using a technique called flying below the radar to avoid detection (nap-of-the-earth). This flying technique is ineffective against pulse-Doppler radar. Pulse-Doppler provides an advantage when attempting to detect missiles and low-observability aircraft flying near terrain, the sea surface, and weather.

Medium-PRF reflected microwave signals fall between 1,500 and 15,000 cycles per second, which is audible. This means a helicopter sounds like a helicopter, a jet sounds like a jet, and propeller aircraft sound like propellers. Aircraft with no moving parts produce a pure tone. The actual size of the target can be estimated using the audible signal. Ambiguity processing is required when the target range exceeds the unambiguous range set by the PRF, which increases scan time. Scan time is a critical factor for some systems, because vehicles moving at or above the speed of sound can travel one mile (1.6 km) every few seconds, like the Exocet, Harpoon, Kitchen, and air-to-air missiles. The maximum time to scan the entire volume of the sky must be on the order of a dozen seconds or less for systems operating in that environment.

Pulse-Doppler radar by itself can be too slow to cover the entire volume of space above the horizon unless a fan beam is used. This approach is used with the AN/SPS-49(V)5 Very Long Range Air Surveillance Radar, which sacrifices elevation measurement to gain speed. Pulse-Doppler antenna motion must be slow enough that all the return signals from at least 3 different PRFs can be processed out to the maximum anticipated detection range. This is known as dwell time. Antenna motion for pulse-Doppler must be as slow as for radar using MTI. Search radars that include pulse-Doppler are usually dual mode, because best overall performance is achieved when pulse-Doppler is used for areas with high false alarm rates (the horizon or below, and weather), while conventional radar will scan faster in free space where the false alarm rate is low (above the horizon with clear skies). The antenna type is an important consideration for multi-mode radar, because undesirable phase shift introduced by the radar antenna can degrade the Measure of Performance for sub-clutter visibility.

The signal processing enhancement of pulse-Doppler allows small high-speed objects to be detected in close proximity to large slow-moving reflectors. To achieve this, the transmitter must be coherent and should produce low phase noise during the detection interval, and the receiver must have a large instantaneous dynamic range. Pulse-Doppler signal processing also includes ambiguity resolution to identify true range and velocity. The received signals from multiple PRFs are compared to determine true range using the range ambiguity resolution process. The received signals are also compared using the frequency ambiguity resolution process. The range resolution is the minimum range separation between two objects traveling at the same speed at which the radar can still detect two discrete reflections. The velocity resolution is the minimum radial velocity difference between two objects traveling at the same range at which the radar can still detect two discrete reflections.

Pulse-Doppler radar has special requirements that must be satisfied to achieve acceptable performance. Pulse repetition frequency: pulse-Doppler typically uses a medium pulse repetition frequency (PRF) from about 3 kHz to 30 kHz.
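That medium-PRF band maps directly onto range and velocity ambiguity limits via $R_u = c/(2\,\mathrm{PRF})$ and $v_{\text{blind}} = \lambda\,\mathrm{PRF}/2$. A sketch at an assumed 3 cm wavelength, which reproduces the pulse spacing described next:

```python
c = 3.0e8
lam = 0.03                        # 10 GHz wavelength, assumed

for prf in (3_000.0, 30_000.0):
    r_unamb = c / (2 * prf)       # unambiguous range, m
    v_blind = lam * prf / 2       # first blind speed, m/s
    print(f"PRF {prf/1000:.0f} kHz: {r_unamb/1000:.0f} km, {v_blind:.0f} m/s")
# PRF 3 kHz: 50 km, 45 m/s;  PRF 30 kHz: 5 km, 450 m/s
```

The inverse coupling (a long unambiguous range forces a low blind speed, and vice versa) is exactly why multiple PRFs and ambiguity resolution are unavoidable at medium PRF.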
The range between transmit pulses is 5 km to 50 km. Range and velocity cannot be measured directly using medium PRF, and a technique called ambiguity resolution is required to identify true range and speed. Doppler signals are generally above 1 kHz, which is audible, so audio signals from medium-PRF systems can be used for passive target classification.

Tracking radar systems use angle error to improve accuracy by producing measurements perpendicular to the radar antenna beam. Angular measurements are averaged over a span of time and combined with radial movement to develop information suitable for predicting target position a short time into the future. The two angle-error techniques used with tracking radar are monopulse and conical scan.

Cavity magnetrons and crossed-field amplifiers are not appropriate, because the noise introduced by these devices interferes with detection performance. The only amplification devices suitable for pulse-Doppler are the klystron, the traveling wave tube, and solid-state devices.

Pulse-Doppler signal processing introduces a phenomenon called scalloping. The name refers to a series of holes that are scooped out of the detection performance. Scalloping for pulse-Doppler radar involves blind velocities created by the clutter rejection filter. Every volume of space must be scanned using 3 or more different PRFs. A two-PRF detection scheme will have detection gaps with a pattern of discrete ranges, each of which has a blind velocity.

Ringing artifacts pose a problem with search, detection, and ambiguity resolution in pulse-Doppler radar. Ringing is reduced in two ways. First, the shape of the transmit pulse is adjusted to smooth the leading edge and trailing edge so that RF power is increased and decreased without an abrupt change. This creates a transmit pulse with smooth ends instead of a square wave, which reduces the ringing phenomenon otherwise associated with target reflection. Second, the shape of the receive pulse is adjusted using a window function that minimizes the ringing that occurs any time pulses are applied to a filter. In a digital system, this adjusts the phase and/or amplitude of each sample before it is applied to the fast Fourier transform. The Dolph-Chebyshev window is the most effective because it produces a flat processing floor with no ringing that would otherwise cause false alarms.

Pulse-Doppler radar is generally limited to mechanically aimed antennas and active phased arrays. Mechanical RF components, such as wave-guide, can produce Doppler modulation due to phase shift induced by vibration. This introduces a requirement to perform full-spectrum operational tests using shake tables that can produce high-power mechanical vibration across all anticipated audio frequencies. Doppler is incompatible with most electronically steered phased-array antennas, because the phase-shifter elements in the antenna are non-reciprocal and the phase shift must be adjusted before and after each transmit pulse. Spurious phase shift is produced by the sudden impulse of the phase shift, and settling during the receive period between transmit pulses places Doppler modulation onto stationary clutter. That receive modulation corrupts the Measure of Performance for sub-clutter visibility. Phase-shifter settling time on the order of 50 ns is required. The start of receiver sampling needs to be postponed at least 1 phase-shifter settling time-constant (or more) for each 20dB of sub-clutter visibility.
Most antenna phase shifters operating at PRF above 1 kHz introduce spurious phase shift unless special provisions are made, such as reducing phase shifter settling time to a few dozen nanoseconds. The following gives the maximum permissible settling time for antenna phase shift modules (it follows from postponing the start of receiver sampling by one settling time-constant per 20dB of sub-clutter visibility, as described above):

$T \le \frac{20}{SCV \times S \times PRF}$

- T = phase shifter settling time
- SCV = sub-clutter visibility in dB
- S = number of range samples between each transmit pulse
- PRF = maximum design pulse repetition frequency

The antenna type and scan performance is a practical consideration for multi-mode radar systems.

Choppy surfaces, like waves and trees, form a diffraction grating suitable for bending microwave signals. Pulse-Doppler can be so sensitive that diffraction from mountains, buildings, or wave tops can be used to detect fast-moving objects otherwise blocked by solid obstructions along the line of sight. This is a very lossy phenomenon that only becomes possible when radar has significant excess sub-clutter visibility. Refraction and ducting use transmit frequencies at L-band or lower to extend the horizon, which is very different from diffraction. Refraction for over-the-horizon radar uses variable density in the air column above the surface of the earth to bend RF signals. An inversion layer can produce a transient tropospheric duct that traps RF signals in a thin layer of air like a wave-guide.

Sub-clutter visibility involves the maximum ratio of clutter power to target power, which is proportional to dynamic range. It determines performance in heavy weather and near the earth's surface. Sub-clutter visibility is the ratio of the smallest signal that can be detected in the presence of a larger signal. A small fast-moving target reflection can be detected in the presence of larger slow-moving clutter reflections when the following is true:

$\frac{P_{\text{clutter}}}{P_{\text{target}}} < SCV$ (with SCV expressed as a ratio rather than in dB)

The pulse-Doppler radar equation can be used to understand trade-offs between different design constraints, like power consumption, detection range, and microwave safety hazards. This is a very simple form of modeling that allows performance to be evaluated in a sterile environment. The theoretical range performance is as follows:

$R = \sqrt[4]{\dfrac{P_t\, G_t\, A_r\, \sigma\, F^4\, D}{(4\pi)^2\, k_B\, T\, B\, N}}$

- R = distance to the target
- Pt = transmitter power
- Gt = gain of the transmitting antenna
- Ar = effective aperture (area) of the receiving antenna
- σ = radar cross section, or scattering coefficient, of the target
- F = antenna pattern propagation factor
- D = Doppler filter size (transmit pulses in each fast Fourier transform)
- kB = Boltzmann's constant
- T = temperature (kelvin)
- B = receiver bandwidth (band-pass filter)
- N = noise figure

This equation is derived by combining the radar equation with the noise equation and accounting for in-band noise distribution across multiple detection filters. The value D is added to the standard radar range equation to account for both pulse-Doppler signal processing and transmitter FM noise reduction. Detection range is increased proportional to the square root of the number of filters. Power consumption is reduced by the square of the number of filters. For example, pulse-Doppler signal processing with 1,024 filters will reduce the noise contribution in each filter 60dB below the level of electronic noise being sampled in the receiver. Each filter holds only a small amount of the total noise arriving at the receiver. This means a system with a receiver bandwidth of 1 MHz would have an effective bandwidth of 1 kHz in each of the 1,024 filters where detection takes place.
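To see the D term at work, the sketch below evaluates the range equation with round illustrative values; none of these parameter values come from the article.

```python
import math

Pt = 1.0e6         # transmitter power, W (assumed)
Gt = 2000.0        # transmit antenna gain, linear (assumed)
Ar = 1.0           # receive aperture, m^2 (assumed)
sigma = 1.0        # radar cross section, m^2 (assumed)
F = 1.0            # propagation factor
D = 1024           # Doppler filters per FFT
kB = 1.380649e-23  # Boltzmann's constant, J/K
T = 290.0          # system temperature, K
B = 1.0e6          # receiver bandwidth, Hz
N = 10.0           # noise figure, linear (assumed)

R = (Pt * Gt * Ar * sigma * F**4 * D /
     ((4 * math.pi) ** 2 * kB * T * B * N)) ** 0.25
print(f"R = {R/1000:.0f} km")   # ~750 km with these assumptions
```

Setting D = 1 in the same expression shows the portion of the performance attributable to the Doppler filter bank alone.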
In addition, pulse-Doppler signal processing integrates all of the energy from all of the individual reflected pulses that enter the filter. This means a pulse-Doppler signal processing system with 1,024 elements provides 60dB of improvement due to the type of signal processing that must be used with pulse-Doppler radar. The energy of all of the individual pulses from the object is added together by the filtering process. Signal processing for a 1,024-point filter improves performance by 120dB, assuming a compatible transmitter and antenna. This corresponds to the following potential improvements.
- 3,000% increase in maximum distance. This can increase range from 500 km to 10% of the distance from the earth to the sun.
- 1,000,000-fold reduction in microwave transmit power, which makes radar safe for aircraft and practical for spacecraft.
- 1,000,000-fold reduction in detectable target size, from 1 square meter to 1 square millimeter, to eliminate the stealth aircraft advantage.

These improvements are the reason pulse-Doppler is essential for military applications and radar astronomy.

Aircraft tracking. Pulse-Doppler radar used for aircraft detection has two modes. Scan mode involves frequency filtering, amplitude thresholding, and ambiguity resolution. Once a reflection has been detected and resolved, the pulse-Doppler radar automatically transitions to tracking mode for the volume of space surrounding the track. Track mode works like a phase-locked loop, where Doppler velocity is compared with the range movement on successive scans. Lock indicates that the difference between the two measurements is below a threshold, which can only occur with an object that satisfies Newtonian mechanics. Other types of electronic signals cannot produce a lock. Lock exists in no other type of radar. The lock criterion needs to be satisfied during normal operation. Lock eliminates the need for human intervention, with the exception of helicopters and electronic jamming.

Pulse-Doppler signal processing selectively excludes low-velocity reflections, so that no detection occurs below a threshold velocity. This eliminates terrain, weather, biologicals, and mechanical jamming, with the exception of decoy aircraft. The target Doppler signal from the detection is converted from the frequency domain back into time-domain sound for the operator in track mode on some radar systems. The operator uses this sound for passive target classification, such as recognizing helicopters and electronic jamming.

Special consideration is required for aircraft with large moving parts, because pulse-Doppler radar operates like a phase-locked loop. Blade tips moving near the speed of sound produce the only signal that can be detected when a helicopter is moving slowly near terrain and weather. A helicopter appears like a rapidly pulsing noise emitter except in a clear environment free from clutter. An audible signal is produced for passive identification of the type of airborne object. The microwave Doppler frequency shift produced by reflector motion falls into the audible sound range for human beings (20-20,000 Hz), which is used for target classification in addition to the kinds of conventional radar displays used for that purpose, like the A-scope, B-scope, C-scope, and RHI indicator. The human ear may be able to tell the difference better than electronic equipment.

A special mode is required because the Doppler velocity feedback information must be unlinked from radial movement, so that the system can transition from scan to track with no lock.
Similar special-mode techniques are required to develop track information for jamming signals and interference that cannot satisfy the lock criterion. Pulse-Doppler radar must be multi-mode to handle aircraft turning and crossing trajectories. Once in track mode, pulse-Doppler radar must include a way to modify Doppler filtering for the volume of space surrounding a track when radial velocity falls below the minimum detection velocity. Doppler filter adjustment must be linked with a radar track function to automatically adjust the Doppler rejection speed within the volume of space surrounding the track. Tracking will cease without this feature because the target signal will otherwise be rejected by the Doppler filter when radial velocity approaches zero. Multi-mode operation may also include continuous wave illumination for semi-active radar homing.

- Radar signal characteristics (fundamentals of the radar signal)
- Doppler radar (non-pulsed; used for navigation systems)
- Weather radar (pulsed with Doppler processing)
- Continuous-wave radar (non-pulsed, pure Doppler processing)
- FM-CW radar (non-pulsed, swept frequency, range and Doppler processing)
- Aliasing - the reason for ambiguous velocity estimates
- Doppler sonography - velocity measurements in medical ultrasound, based on the same principle
- Tartar Guided Missile Fire Control System
- AN/SPS-49 surface search radar (US)
- AN/SPG-51D, the MK-74 Guided Missile Fire Control System (US)
- MK-74 Guided Missile Fire Control System (GMFCS) (US)
- McDonnell Douglas F-15 Eagle radar system (US)
- General Dynamics F-16 Fighting Falcon radar system (US)
- Close-in weapon system (US)
- Mirage (French)
- MiG-25 radar (Soviet)
- MiG-31 radar (Soviet)
- Type 345 Radar (Chinese)
- CLC-1 Radar (Chinese)
- SLC-2 Radar (Chinese)
- YLC-15 Radar (Chinese)
- JL-10A (Chinese)
- KLJ-7 Radar (Chinese)
- Doppler On Wheels meteorological
- NEXRAD meteorological
- Terminal Doppler Weather Radar meteorological
- ARMOR Doppler Weather Radar meteorological
http://en.m.wikipedia.org/wiki/Pulse-Doppler_radar
Cladistics, or phylogenetic systematics, is a system of classifying living and extinct organisms based on evolutionary ancestry as determined by grouping taxa according to "derived characters," that is, characteristics or features shared uniquely by the taxa and their common ancestor. Cladistics places heavy emphasis on objective, quantitative analysis and emphasizes evolution and genealogy, in contrast to more traditional biological taxonomy with its focus on physical similarities between species. Emphasizing no particular mechanism of evolution, cladistics as a classification schema lies largely separate from much of the debate between those who favor natural selection and those who favor intelligent design.

Cladistics generates diagrams, called "cladograms," that represent the evolutionary tree of life. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) sequencing data are used in many important cladistic efforts. Cladistics originated in biology with a German entomologist, but in recent years cladistic methods have found application in other disciplines as well. The word cladistics is derived from the ancient Greek κλάδος, klados, or "branch."

Although the emphasis of cladistics on biological lineage through millions of years is metaphorically similar to the human convention of tracing genealogical lineage through multiple generations, the two are quite different in substance, as one traces lineage of species while the other traces lineage of specific members of a species. The trend of cladistics toward mapping a connectedness between all species of organisms, based on the theory of descent with modification, shows metaphorical similarity with views of some religions that humans are all connected because of a common origin.

The history of the various schools or research groups that developed around the concept of biological classification was often filled with disputes, competitions, and even bitter opposition (Hull 1988). This is frequently the history of new ideas that challenge the existing paradigm, as cladism has done in presenting a strong alternative to Linnaean taxonomy.

Systematics is the branch of biology that strives to discover the genealogical relationships underlying organic diversity and also constructs classifications of living things (Sober 1988, 7). There is a diversity of opinion on how genealogy and taxonomy are related. Two prominent research groups taking very different approaches from each other emerged in the mid-twentieth century (Hull 1988). One, the Sokal-Sneath school, proposed to improve on the methods of traditional Linnaean taxonomy by introducing "numerical taxonomy," which aimed to ascertain the overall similarity among organisms using objective, quantitative, and numerous characters (Hull 1988). A second group, led by the German biologist Willi Hennig (1913-1976), proposed a fundamentally new approach that emphasized classifications representing phylogeny focused on the sister-group relationship: Two taxa are sister groups if they are more closely related to each other than to a third taxon, and the evidence for this is the presence of characters that the sister groups exhibit but the third group does not exhibit (Hull 1988). That is, the sister groups share a more recent common ancestor with each other than with the third group (Hull 1988). The method emphasizes common ancestry and descent more than chronology. Hennig's 1950 work, Grundzüge einer Theorie der Phylogenetischen Systematik, published in German, began this area of cladistics.
The German-American biologist Ernst Mayr, in a 1965 paper, termed the Sokal-Sneath school "phenetic" because its aim in classifications was to represent the overall similarities exhibited by organisms regardless of descent (Hull 1988). He also coined the term "cladistics" ("branch") for Hennig's system because Hennig wished to represent branching sequences (Hull 1988). Mayr considered his own view to be "evolutionary taxonomy" because it reflected both order of branching (cladistics) and degrees of divergence (phenetics) (Hull 1988).

In Mayr's terms, then, there are three notable schools of biological taxonomy: cladists, who insist that only genealogy should influence classification; pheneticists, who hold that overall similarity, rather than descent, should determine classification; and evolutionary taxonomists (the heirs of traditional Linnaean taxonomists), who hold that both evolutionary descent and adaptive similarity should be used in classification (Sober 1988).

Hennig referred to his approach as phylogenetic systematics, which is the title of his 1966 book. Hennig's major book, even the 1979 version, does not contain the term "cladistics" in the index. A review paper by Dupuis observes that the term clade was introduced in 1958 by Julian Huxley, cladistic by Cain and Harrison in 1960, and cladist (for an adherent of Hennig's school) by Mayr in 1965 (Dupuis 1984). The term "phylogenetics" is often used synonymously with "cladistics."

Computer programs are widely used in cladistics, due to the highly complex nature of cladogram-generation procedures. Cladists construct cladograms, branching diagrams, to graphically depict the groups of organisms that share derived characters.

Key to cladistic analysis is identifying monophyletic groups, that is, groups comprising a given species, all of that species' descendants, and nothing else (Sober 1988). In phylogenetics, a group of species is said to be paraphyletic (Greek para meaning near and phyle meaning race) if the group contains its most recent common ancestor, but does not contain all the descendants of that ancestor. For instance, the traditional class Reptilia excludes birds even though they are widely considered to have evolved from an ancestral reptile. Similarly, the traditional invertebrates are paraphyletic because vertebrates are excluded, although the latter evolved from an invertebrate. A group comprising members from separate evolutionary lines is called polyphyletic. For instance, the once-recognized Pachydermata order was found to be polyphyletic because elephants and rhinoceroses arose separately from non-pachyderms. Evolutionary taxonomists consider polyphyletic groups to be errors in classification, often occurring because convergence or other homoplasy was misinterpreted as homology.

Cladistic taxonomy requires taxa to be clades (monophyletic groups). Cladists argue, therefore, that the prevailing classification system, Linnaean taxonomy, should be reformed to eliminate all non-clades. Others, such as those in the school of evolutionary taxonomy, often use cladistic techniques and require that groups reflect phylogenies, but they also allow both monophyletic and paraphyletic groups as taxa. Following Hennig, cladists argue that paraphyly is as harmful as polyphyly. The idea is that monophyletic groups can be defined objectively through identifying synapomorphies, that is, features shared uniquely by a group of species and their most immediate common ancestor.
This cladistic approach is claimed to be more objective than the alternative approach of defining paraphyletic and polyphyletic groups based on a set of key characteristics determined by researchers. Making such determinations, cladists argue, is an inherently subjective process highly likely to lead to "gradistic" thinking that groups advance from "lowly" grades to "advanced" grades, which can in turn lead to teleological thinking. A cladistic analysis organizes a certain set of information by making a distinction between characters and character states. Consider feathers, whose color may be blue in one species but red in another. In this case, "feather-color" is a character and "red feathers" and "blue feathers" are two character states. In the "old days," before the introduction of computer analysis into cladistics, the researcher would assign the selected character states as being either plesiomorphies, character states present before the last common ancestor of the species group, or synapomorphies, character states that first appeared in the last common ancestor. Usually the researcher would make this assignment by considering one or more outgroups (organisms considered not to be part of the group in question, but nonetheless related to the group). Then, as now, only synapomorphies would be used in characterizing cladistic divisions. Next, different possible cladograms were drawn up and evaluated by looking for those having the greatest number of synapomorphies. The hope then, as now, was that the number of true synapomorphies in the cladogram would be large enough to overwhelm any unintended symplesiomorphies (homoplasies) caused by convergent evolution, that is, characters that resemble each other because of environmental conditions or function, but not because of common ancestry. A well-known example of homoplasy due to convergent evolution is wings. Though the wings of birds and insects may superficially resemble one another and serve the same function, each evolved independently. If a dataset contained data on a bird and an insect that both scored "POSITIVE" for the character "presence of wings," a homoplasy would be introduced into the dataset, which could cause erroneous results. When two alternate possible cladograms were evaluated to be equally probable, one was usually chosen based on the principle of parsimony: The most compact arrangement was likely the best hypothesis of relationship (a variation of Occam's razor, which states that the simplest explanation is most often the correct one). Another approach, particularly useful in molecular evolution, involved applying the statistical analysis of maximum likelihood to select the most likely cladogram based on a specific probability model of changes. Of course, it is no longer done this way: researcher selection, and hence bias, is something to be avoided. These days much of the analysis is done by software: Besides the software to calculate the trees themselves, there is sophisticated statistical software to provide a more objective basis. As DNA sequencing has become easier, phylogenies are increasingly constructed with the aid of molecular data. Computational systematics allows the use of these large data sets to construct objective phylogenies. These can more accurately distinguish some true synapomorphies from homoplasies that are due to parallel evolution. Ideally, morphological, molecular, and possibly other (behavioral, etc.) phylogenies should be combined. 
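As a concrete illustration of parsimony scoring, here is a toy Python sketch of Fitch's algorithm, which computes the minimum number of character-state changes a given tree requires. The taxa, characters, and tree shapes are invented for illustration; real analyses use many more characters and dedicated software.

```python
def fitch_score(tree, states):
    """Minimum number of state changes a tree requires for one character
    (Fitch parsimony). `tree` is a nested tuple of leaf names."""
    changes = 0
    def walk(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its observed state
            return {states[node]}
        left, right = map(walk, node)        # binary fork
        if left & right:
            return left & right              # a shared state can be inherited
        changes += 1                         # otherwise a change is forced here
        return left | right
    walk(tree)
    return changes

# Invented toy data: 1 = character present, 0 = absent.
characters = {
    "wings": {"bird": 1, "bat": 1, "mouse": 0, "crocodile": 0},
    "fur":   {"bird": 0, "bat": 1, "mouse": 1, "crocodile": 0},
    "milk":  {"bird": 0, "bat": 1, "mouse": 1, "crocodile": 0},
}
tree_a = ((("bird", "bat"), "mouse"), "crocodile")  # treats wings as homologous
tree_b = ((("bat", "mouse"), "bird"), "crocodile")  # treats fur/milk as homologous
for tree in (tree_a, tree_b):
    total = sum(fitch_score(tree, s) for s in characters.values())
    print(tree, "->", total, "changes")  # tree_b wins: 4 changes vs 5
```

Here the winged grouping is penalized because wings in birds and bats are a homoplasy: the tree that groups bat with mouse explains the same data with fewer changes.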
Cladistics does not assume any particular theory of evolution, but it does assume the pattern of descent with modification. Thus, cladistic methods can be, and recently have been, usefully applied to mapping descent with modification in non-biological systems, such as language families in historical linguistics and the filiation of manuscripts in textual criticism.

The starting point of cladistic analysis is a group of species and the molecular, morphological, or other data that characterizes those species. The end result is a tree-like relationship-diagram called a cladogram. The cladogram graphically represents a hypothetical evolutionary process. Cladograms are subject to revision as additional data becomes available.

In a cladogram, all organisms lie at the leaves, and each inner node is ideally binary (two-way). The two taxa on either side of a split are called "sister taxa" or "sister groups." Each subtree is called a "clade," and by definition is a natural group, all of whose species share a common ancestor. Each clade is set off by a series of characteristics that appear in its members, but not in the other forms from which it diverged. These identifying characteristics of a clade are its synapomorphies (shared, derived characters). For instance, hardened front wings (elytra) are a synapomorphy of beetles, while circinate vernation, or the unrolling of new fronds, is a synapomorphy of ferns.

Synonyms—The term "evolutionary tree" is often used synonymously with cladogram. The term phylogenetic tree is sometimes used synonymously with cladogram (Singh 2004), but others treat phylogenetic tree as a broader term that includes trees generated with a non-evolutionary emphasis.

Subtrees are clades—In a cladogram, all species lie at the leaves (Albert 2006). The two taxa on either side of a split are called sister taxa or sister groups. Each subtree, whether it contains one item or a hundred thousand items, is called a clade.

Two-way versus Three-way Forks—Many cladists require that all forks in a cladogram be 2-way forks. Some cladograms include 3-way or 4-way forks when the data is insufficient to resolve the forking to a higher level of detail, but nodes with more than two branches are discouraged by many cladists.

Depth of a Cladogram—If a cladogram represents N species, the number of levels (the "depth") in the cladogram is on the order of log2(N) (Aldous 1996). For example, if there are 32 species of deer, a cladogram representing deer will be around 5 levels deep (because 2^5 = 32). A cladogram representing the complete tree of life, with about 10 million species, would be about 23 levels deep. This formula gives a lower limit: In most cases the actual depth will be a larger value because the various branches of the cladogram will not be uniformly deep. Conversely, the depth may be shallower if forks larger than two-way forks are permitted.

Number of Distinct Cladograms—For a given set of species, the number of distinct rooted cladograms that can in theory be drawn (ignoring which cladogram best matches the species characteristics) is (Lowe 2004):

|Number of Species||2||3||4||5||6||7||8||9||10||N|
|Number of Cladograms||1||3||15||105||945||10,395||135,135||2,027,025||34,459,425||1·3·5·7·…·(2N−3)|

This explosive (faster than exponential) growth of the number of possible cladograms explains why manual creation of cladograms becomes very difficult when the number of species is large.
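The count and depth formulas above are easy to check computationally. Here is a short Python sketch using only values stated in the text:

```python
import math

def rooted_cladograms(n_species: int) -> int:
    """Number of distinct rooted, fully binary cladograms:
    1 * 3 * 5 * ... * (2N - 3)."""
    count = 1
    for odd in range(3, 2 * n_species - 2, 2):
        count *= odd
    return count

for n in (4, 10, 20):
    print(n, rooted_cladograms(n))   # 4 -> 15, 10 -> 34,459,425, 20 -> ~8.2e21

# Minimum depth of a balanced cladogram is about log2(N):
print(math.ceil(math.log2(32)))          # 5 levels for 32 species
print(math.ceil(math.log2(10_000_000)))  # ~23-24 levels for the full tree of life
```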
Extinct Species in Cladograms—Cladistics makes no distinction between extinct and non-extinct species (Scott-Ram 1990), and it is appropriate to include extinct species in the group of organisms being analyzed. Cladograms based on DNA/RNA generally do not include extinct species because DNA/RNA samples from extinct species are rare. Cladograms based on morphology, especially morphological characteristics preserved in fossils, are more likely to include extinct species.

Time Scale of a Cladogram—A cladogram tree has an implicit time axis (Freeman 1998), with time running forward from the base of the tree to the leaves of the tree. If the approximate date (for example, expressed as millions of years ago) of all the evolutionary forks were known, those dates could be captured in the cladogram. Thus, the time axis of the cladogram could be assigned a time scale (for example 1 cm = 1 million years), and the forks of the tree could be graphically located along the time axis. Such cladograms are called scaled cladograms. Many cladograms are not scaled along the time axis, for a variety of reasons:
- Many cladograms are built from species characteristics that cannot be readily dated (for example, morphological data in the absence of fossils or other dating information)
- When the characteristic data is DNA/RNA sequences, it is feasible to use sequence differences to establish the relative ages of the forks, but converting those ages into actual years requires a significant approximation of the rate of change (Carrol 1997).
- Even when the dating information is available, positioning the cladogram's forks along the time axis in proportion to their dates may cause the cladogram to become difficult to understand or hard to fit within a human-readable format.

Summary of terminology
- A clade is an ancestor species and all of its descendants
- A monophyletic group is a clade
- A paraphyletic group is an ancestor species and most of its descendants, usually with a specific group of descendants excluded (for example, reptiles are all the sauropsids (members of the class Sauropsida) except for birds). Most cladists discourage the use of paraphyletic groups.
- A polyphyletic group is a group consisting of members from two non-overlapping monophyletic groups (for example, flying animals). Most cladists discourage the use of polyphyletic groups.
- An outgroup is an organism considered not to be part of the group in question, although it is closely related to the group.
- A characteristic present in both the outgroups and the ancestors is called a plesiomorphy (meaning "close form," as in close to the root ancestor; also called an ancestral state).
- A characteristic that occurs only in later descendants is called an apomorphy (meaning "separate form" or "far from form," as in far from the root ancestor; also called a "derived" state) for that group. Note: The adjectives plesiomorphic and apomorphic are often used instead of "primitive" and "advanced" to avoid placing value judgments on the evolution of the character states, since both may be advantageous in different circumstances. It is not uncommon to refer informally to a collective set of plesiomorphies as a ground plan for the clade or clades they refer to.
- A species or clade is basal to another clade if it holds more plesiomorphic characters than that other clade. Usually a basal group is very species-poor as compared to a more derived group. It is not a requirement that a basal group be extant. For example, palaeodicots are basal to flowering plants.
- A clade or species located within another clade is said to be nested within that clade.

Cladistics compared with Linnaean taxonomy

Prior to the advent of cladistics, most taxonomists limited themselves to using Linnaean taxonomy for organizing lifeforms. That traditional approach used several fixed levels of a hierarchy, such as Kingdom, Phylum, Class, Order, and Family. Cladistics does not use those terms because one of its fundamental premises is that the evolutionary tree is very deep and very complex, and it is not meaningful to use a fixed number of levels. Linnaean taxonomy insists that groups reflect phylogenies, but in contrast to cladistics allows both monophyletic and paraphyletic groups as taxa. Since the early twentieth century, Linnaean taxonomists have generally attempted to make genus and lower-level taxa monophyletic.

Cladistics originated in the work of Willi Hennig, and since that time there has been a spirited debate (Wheeler 2000) about the relative merits of cladistics versus Linnaean classification and other Linnaean-associated classification systems, such as the evolutionary taxonomy advocated by Mayr (Benton 2000). Some of the debates that the cladists engaged in had been running since the nineteenth century, but they entered these debates with a new fervor (Hull 1988), as can be learned from the Foreword to Hennig (1979) in which Rosen, Nelson, and Patterson wrote the following—not about Linnaean taxonomy but about the newer evolutionary taxonomy:

Encumbered with vague and slippery ideas about adaptation, fitness, biological species and natural selection, neo-Darwinism (summed up in the "evolutionary" systematics of Mayr and Simpson) not only lacked a definable investigatory method, but came to depend, both for evolutionary interpretation and classification, on consensus or authority (Foreword, page ix).

Proponents of cladistics enumerate key distinctions between cladistics and Linnaean taxonomy as follows (Hennig 1975):

|Cladistics||Linnaean taxonomy|
|Treats all levels of the tree as equivalent.||Treats each tree level uniquely. Uses special names (such as Family, Class, Order) for each level.|
|Handles arbitrarily deep trees.||Often must invent new level-names (such as superorder, suborder, infraorder, parvorder, magnorder) to accommodate new discoveries. Biased towards trees about 4 to 12 levels deep.|
|Discourages naming or use of groups that are not monophyletic.||Accepts naming and use of paraphyletic groups.|
|Primary goal is to reflect the actual process of evolution.||Primary goal is to group species based on morphological similarities.|
|Assumes that the shape of the tree will change frequently, with new discoveries.||Often responds to new discoveries by re-naming or re-levelling of Classes, Orders, and Kingdoms.|
|Definitions of taxa are objective, hence free from personal interpretation.||Definitions of taxa require individuals to make subjective decisions. For example, various taxonomists suggest that the number of Kingdoms is two, three, four, five, or six (see Kingdom).|
|Taxa, once defined, are permanent (e.g. "taxon X comprises the most recent common ancestor of species A and B along with its descendants").||Taxa can be renamed and eliminated (e.g. Insectivora is one of many taxa in the Linnaean system that have been eliminated).|
Proponents of Linnaean taxonomy contend that it has some advantages over cladistics, such as:

|Cladistics||Linnaean taxonomy|
|Limited to entities related by evolution or ancestry.||Supports groupings without reference to evolution or ancestry.|
|Does not include a process for naming species.||Includes a process for giving unique names to species.|
|Difficult to understand the essence of a clade, because clade definitions emphasize ancestry at the expense of meaningful characteristics.||Taxa definitions based on tangible characteristics.|
|Ignores sensible, clearly-defined paraphyletic groups such as reptiles.||Permits clearly-defined groups such as reptiles.|
|Difficult to determine if a given species is in a clade or not (for example, if clade X is defined as "most recent common ancestor of A and B along with its descendants," then the only way to determine if species Y is in the clade is to perform a complex evolutionary analysis).||Straightforward process to determine if a given species is in a taxon or not.|
|Limited to organisms that evolved by inherited traits; not applicable to organisms that evolved via complex gene-sharing or lateral transfer.||Applicable to all organisms, regardless of evolutionary mechanism.|

How complex is the Tree of Life?

One of the arguments in favor of cladistics is that it supports arbitrarily complex, arbitrarily deep trees. Especially when extinct species are considered (both known and unknown), the complexity and depth of the tree can be very large. Every single speciation event, including all the species that are now extinct, represents an additional fork on the hypothetical, complete cladogram representing the full tree of life. Fractals can be used to represent this notion of increasing detail: As a viewpoint zooms into the tree of life, the complexity remains virtually constant (Gordon 1999). This great complexity of the tree and its associated uncertainty is one of the reasons that cladists cite for the attractiveness of cladistics over traditional taxonomy.

Proponents of non-cladistic approaches to taxonomy point to punctuated equilibrium to bolster the case that the tree-of-life has a finite depth and finite complexity. According to punctuated equilibrium, generally a species comes into the fossil record very similar to when it departs the fossil record, as contrasted with phyletic gradualism, whereby a species gradually changes over time into another species. If the number of species currently alive is finite, and the number of extinct species that we will ever know about is finite, then the depth and complexity of the tree of life is bounded, and there is no need to handle arbitrarily deep trees.

Applying Cladistics to other disciplines

The processes used to generate cladograms are not limited to the field of biology (Mace 2005). The generic nature of cladistics means that cladistics can be used to organize groups of items in many different realms. The only requirement is that the items have characteristics that can be identified and measured. For example, one could take a group of 200 spoken languages, measure various characteristics of each language (vocabulary, phonemes, rhythms, accents, dynamics, etc.) and then apply a cladogram algorithm to the data. The result will be a tree that may shed light on how, and in what order, the languages came into existence.
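A minimal sketch of that language example in Python, using invented binary characteristics and a naive greedy agglomeration (real work would use proper phylogenetic software and distance corrections):

```python
# Invented scores: 1 = characteristic present, 0 = absent, one row per language.
traits = {
    "LangA": (1, 1, 0, 1, 0),
    "LangB": (1, 1, 0, 0, 0),
    "LangC": (0, 0, 1, 1, 1),
    "LangD": (0, 0, 1, 1, 0),
}

def distance(a, b):
    """Count of characteristics on which two (possibly merged) profiles differ."""
    return sum(x != y for x, y in zip(a, b))

clusters = dict(traits)
while len(clusters) > 1:
    names = sorted(clusters)
    p, q = min(
        ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
        key=lambda pair: distance(clusters[pair[0]], clusters[pair[1]]),
    )
    print("join:", p, "+", q)   # most similar pair joins first
    merged = tuple((u + v) / 2 for u, v in zip(clusters.pop(p), clusters.pop(q)))
    clusters["(%s,%s)" % (p, q)] = merged
# Output: LangA joins LangB, LangC joins LangD, then the two groups join,
# i.e. a small tree of hypothesized relatedness.
```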
Thus, cladistic methods have recently been usefully applied to non-biological systems, including determining language families in historical linguistics, culture, history (Lipo 2005), and filiation of manuscripts in textual criticism. - ↑ Ernst Mayr, Evolution and the Diversity of Life (Selected Essays) (Cambridge, MA: Harvard Univ. Press, 1976). ISBN 0-674-27105-X - Albert, V. Parsimony, Phylogeny, and Genomics. Oxford University Press. ISBN 0199297304 - Aldous, D. 1996. Probability distributions on cladograms. In D. J. Aldous, and R. Pemantle, Random Discrete Structures. New York: Springer. ISBN 0387946233 - Ashlock, P. D. 1974. The uses of cladistics. Annual Review of Ecology and Systematics 5: 81-99. - Benton, M. 2000. Stems, nodes, crown clades, and rank-free lists: Is Linnaeus dead? Biological Reviews 75(4): 633-648. - Carrol, R. 1997. Patterns and Processes of Vertebrate Evolution. Cambridge University Press. ISBN 052147809X - Cavalli-Sforza, L. L. and A. W. F. Edwards. 1967. Phylogenetic analysis: Models and estimation procedures. Evol. 21(3): 550-570. - Cuénot, L. 1940. Remarques sur un essai d'arbre généalogique du règne animal. Comptes Rendus de l'Académie des Sciences de Paris 210: 23-27. - de Queiroz, K. and J. A. Gauthier. 1992. Phylogenetic taxonomy. Annual Review of Ecology and Systematics 23: 449–480. - de Queiroz, K. and J. A. Gauthier. 1994. Toward a phylogenetic system of biological nomenclature. Trends in Research in Ecology and Evolution 9(1): 27-31. - Dupuis, C. 1984. Willi Hennig's impact on taxonomic thought. Annual Review of Ecology and Systematics 15: 1-24. - Felsenstein, J. 2004. Inferring Phylogenies. Sunderland, MA: Sinauer Associates. ISBN 0878931775 - Freeman, S. 1998. Evolutionary Analysis. Prentice Hall. ISBN 0135680239 - Gordon, R. 1999. The Hierarchical Genome and Differentiation Waves. World Scientific. ISBN 9810222688 - Hamdi, H., H. Nishio, R. Zielinski, and A. Dugaiczyk. 1999. Origin and phylogenetic distribution of Alu DNA repeats: Irreversible events in the evolution of primates. Journal of Molecular Biology 289: 861–871. - Hennig, W. 1950. Grundzüge einer Theorie der Phylogenetischen Systematik. Berlin: Deutscher Zentralverlag. - Hennig, W. 1966. Phylogenetic Systematics. Urbana: University of Illinois Press. - Hennig, W. and W. Hennig. 1982. Phylogenetische Systematik. Berlin: Parey. ISBN 3489609344 - Hennig, W. 1975. Cladistic analysis or cladistic classification: A reply to Ernst Mayr. Systematic Zoology 24: 244-256. - Hennig, W. 1979. Phylogenetic Systematics. Urbana: University of Illinois Press. ISBN 0252068149 - Hull, D. L. 1979. The limits of cladism. Systematic Zoology 28: 416-440. - Hull, D. L. 1988. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago: The University of Chicago Press. - Kitching, I. J., P. L. Forey, C. J. Humphries, and D. M. Williams. 1998. Cladistics: The Theory and Practice of Parsimony Analysis. Systematics Association Special Volume. 11 (REV): ALL. ISBN 0198501382 - Lipo, C. 2005. Mapping Our Ancestors: Phylogenetic Approaches in Anthropology and Prehistory. Aldine Transaction. ISBN 0202307514 - Lowe, A. 2004. Ecological Genetics: Design, Analysis, and Application. Blackwell Publishing. ISBN 1405100338 - Luria, S., S. J. Gould, and S. Singer. 1981. A View of Life. Menlo Park, CA: Benjamin/Cummings. ISBN 0805366482 - Letunic, I. 2007. Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation. Bioinformatics 23(1): 127-128. 
- Mace, R. 2005. The Evolution of Cultural Diversity: A Phylogenetic Approach. London: Routledge Cavendish. ISBN 1844720993
- Mayr, E. 1982. The Growth of Biological Thought: Diversity, Evolution and Inheritance. Cambridge, MA: Harvard Univ. Press. ISBN 0674364465
- Mayr, E. 1976. Evolution and the Diversity of Life (Selected essays). Cambridge, MA: Harvard Univ. Press. ISBN 067427105X
- Mayr, E. 1965. Numerical phenetics and taxonomic theory. Systematic Zoology 14:73-97.
- Patterson, C. 1982. Morphological characters and homology. In K. A. Joysey and A. E. Friday, eds., Problems in Phylogenetic Reconstruction. London: Academic Press. ISBN 0123912504
- Rosen, D., G. Nelson, and C. Patterson. 1979. Phylogenetic Systematics. Urbana, IL: University of Illinois Press. ISBN 0252068149
- Scott-Ram, N. R. 1990. Transformed Cladistics, Taxonomy and Evolution. Cambridge University Press. ISBN 0521340861
- Shedlock, A. M., and N. Okada. 2000. SINE insertions: Powerful tools for molecular systematics. Bioessays 22: 148–160.
- Singh, G. 2004. Plant Systematics: An Integrated Approach. Enfield, N.H.: Science. ISBN 1578083516
- Sober, E. 1988. Reconstructing the Past: Parsimony, Evolution, and Inference. Cambridge, MA: The MIT Press. ISBN 026219273X
- Sokal, R. R. 1975. Mayr on cladism—and his critics. Systematic Zoology 24: 257-262.
- Swofford, D. L., G. J. Olsen, P. J. Waddell, and D. M. Hillis. 1996. Phylogenetic inference. In D. M. Hillis, C. Moritz, and B. K. Mable, eds., Molecular Systematics. Sunderland, MA: Sinauer Associates. ISBN 0878932828
- Wheeler, Q. 2000. Species Concepts and Phylogenetic Theory: A Debate. Columbia University Press. ISBN 0231101430
- Wiley, E. O. 1981. Phylogenetics: The Theory and Practice of Phylogenetic Systematics. New York: Wiley Interscience. ISBN 0471059757
- Zwickl, D. J., and D. M. Hillis. 2002. Increased taxon sampling greatly reduces phylogenetic error. Systematic Biology 51: 588-598.
http://www.newworldencyclopedia.org/entry/Cladistics
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device that converts a digital (usually binary) code to an analog signal (current, voltage, or electric charge). An analog-to-digital converter (ADC) performs the reverse operation. Signals are easily stored and transmitted in digital form, but a DAC is needed for the signal to be recognized by human senses or other non-digital systems.

A common use of digital-to-analog converters is generation of audio signals from digital information in music players. Digital video signals are converted to analog in televisions and mobile phones to display colors and shades. Digital-to-analog conversion can degrade a signal, so conversion details are normally chosen so that the errors are negligible.

Due to cost and the need for matched components, DACs are almost exclusively manufactured on integrated circuits (ICs). There are many DAC architectures that have different advantages and disadvantages. The suitability of a particular DAC for an application is determined by a variety of measurements including speed and resolution.

A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a physical quantity (e.g., a voltage or a pressure). In particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal. A typical DAC converts the abstract numbers into a concrete sequence of impulses that are then processed by a reconstruction filter using some form of interpolation to fill in data between the impulses. Other DAC methods (e.g., methods based on delta-sigma modulation) produce a pulse-density modulated signal that can then be filtered in a similar way to produce a smoothly varying signal. As per the Nyquist–Shannon sampling theorem, a DAC can reconstruct the original signal from the sampled data provided that its bandwidth meets certain requirements (e.g., a baseband signal with bandwidth less than the Nyquist frequency). Digital sampling introduces quantization error that manifests as low-level noise added to the reconstructed signal.

Practical operation

Instead of impulses, usually the sequence of numbers updates the analog voltage at uniform sampling intervals. These numbers are written to the DAC, typically with a clock signal that causes each number to be latched in sequence, at which time the DAC output voltage changes rapidly from the previous value to the value represented by the currently latched number. The effect of this is that the output voltage is held in time at the current value until the next input number is latched, resulting in a piecewise constant or "staircase" shaped output. This is equivalent to a zero-order hold operation and has an effect on the frequency response of the reconstructed signal. The fact that DACs output a sequence of piecewise constant values (known as zero-order hold in sampled-data textbooks) or rectangular pulses causes multiple harmonics above the Nyquist frequency. Usually, these are removed with a low pass filter acting as a reconstruction filter in applications that require it.

Most modern audio signals are stored in digital form (for example MP3s and CDs) and in order to be heard through speakers they must be converted into an analog signal. DACs are therefore found in CD players, digital music players, and PC sound cards. Specialist standalone DACs can also be found in high-end hi-fi systems.
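A minimal Python sketch of the two ideas just described: the ideal code-to-voltage transfer and the zero-order hold "staircase". The bit width, reference voltage, and code sequence are illustrative assumptions.

```python
def ideal_dac_volts(code: int, n_bits: int, v_ref: float) -> float:
    """Ideal DAC transfer function: output = code / 2^n * Vref."""
    return code / (1 << n_bits) * v_ref

def zero_order_hold(values, samples_per_code):
    """Hold each converted value constant until the next code is latched,
    giving the 'staircase' output described above."""
    out = []
    for v in values:
        out.extend([v] * samples_per_code)
    return out

codes = [0, 9, 15, 12, 4]                            # a digital input sequence
volts = [ideal_dac_volts(c, 4, 3.3) for c in codes]  # 4-bit DAC, 3.3 V reference
print([round(v, 3) for v in volts])                  # [0.0, 1.856, 3.094, 2.475, 0.825]
print(zero_order_hold(volts, 3))                     # piecewise-constant staircase
```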
Such standalone units normally take the digital output of a compatible CD player or dedicated transport (which is basically a CD player with no internal DAC) and convert the signal into an analog line-level output that can then be fed into an amplifier to drive speakers. In VoIP (Voice over IP) applications, the source must first be digitized for transmission, so it undergoes conversion via an Analog-to-Digital Converter, and is then reconstructed into analog using a DAC on the receiving party's end.

Video sampling tends to work on a completely different scale, thanks to the highly nonlinear response both of cathode ray tubes (for which the vast majority of digital video foundation work was targeted) and the human eye. A "gamma curve" is used to provide an appearance of evenly distributed brightness steps across the display's full dynamic range; hence the need to use RAMDACs in computer video applications with colour resolution deep enough to make engineering a hardcoded value into the DAC for each output level of each channel impractical (e.g. an Atari ST or Sega Genesis would require 24 such values; a 24-bit video card would need 768...). Given this inherent distortion, it is not unusual for a television or video projector to truthfully claim a linear contrast ratio (difference between darkest and brightest output levels) of 1000:1 or greater, equivalent to 10 bits of audio precision, even though it may only accept signals with 8-bit precision and use an LCD panel that only represents 6 or 7 bits per channel.

Video signals from a digital source, such as a computer, must be converted to analog form if they are to be displayed on an analog monitor. As of 2007, analog inputs were more commonly used than digital, but this changed as flat panel displays with DVI and/or HDMI connections became more widespread. A video DAC is, however, incorporated in any digital video player with analog outputs. The DAC is usually integrated with some memory (RAM), which contains conversion tables for gamma correction, contrast and brightness, to make a device called a RAMDAC.

A device that is distantly related to the DAC is the digitally controlled potentiometer, used to control an analog signal digitally.

DAC types

The most common types of electronic DACs are:
- The pulse-width modulator, the simplest DAC type. A stable current or voltage is switched into a low-pass analog filter with a duration determined by the digital input code. This technique is often used for electric motor speed control, but has many other applications as well.
- Oversampling DACs or interpolating DACs such as the delta-sigma DAC, which use a pulse density conversion technique. The oversampling technique allows for the use of a lower resolution DAC internally. A simple 1-bit DAC is often chosen because the oversampled result is inherently linear. The DAC is driven with a pulse-density modulated signal, created with the use of a low-pass filter, step nonlinearity (the actual 1-bit DAC), and negative feedback loop, in a technique called delta-sigma modulation. This results in an effective high-pass filter acting on the quantization (signal processing) noise, thus steering this noise out of the low frequencies of interest into the megahertz frequencies of little interest, which is called noise shaping. The quantization noise at these high frequencies is removed or greatly attenuated by use of an analog low-pass filter at the output (sometimes a simple RC low-pass circuit is sufficient).
Most very high resolution DACs (greater than 16 bits) are of this type due to its high linearity and low cost. Higher oversampling rates can relax the specifications of the output low-pass filter and enable further suppression of quantization noise. Speeds of greater than 100 thousand samples per second (for example, 192 kHz) and resolutions of 24 bits are attainable with delta-sigma DACs. A short comparison with pulse-width modulation shows that a 1-bit DAC with a simple first-order integrator would have to run at 3 THz (which is physically unrealizable) to achieve 24 meaningful bits of resolution; hence a higher-order low-pass filter is required in the noise-shaping loop. A single integrator is a low-pass filter with a frequency response inversely proportional to frequency, and using one such integrator in the noise-shaping loop is a first-order delta-sigma modulator. Multiple higher order topologies (such as MASH) are used to achieve higher degrees of noise-shaping with a stable topology.
- The binary-weighted DAC, which contains individual electrical components for each bit of the DAC connected to a summing point. These precise voltages or currents sum to the correct output value. This is one of the fastest conversion methods but suffers from poor accuracy because of the high precision required for each individual voltage or current. Such high-precision components are expensive, so this type of converter is usually limited to 8-bit resolution or less.
- The switched resistor DAC, which consists of a parallel resistor network. Individual resistors are enabled or bypassed in the network based on the digital input.
- The switched current source DAC, in which different current sources are selected based on the digital input.
- The switched capacitor DAC, which consists of a parallel capacitor network. Individual capacitors are connected or disconnected with switches based on the input.
- The R-2R ladder DAC, which is a binary-weighted DAC that uses a repeating cascaded structure of resistor values R and 2R. This improves the precision due to the relative ease of producing equal-valued matched resistors (or current sources). However, wide converters perform slowly due to increasingly large RC-constants for each added R-2R link.
- The Successive-Approximation or Cyclic DAC, which successively constructs the output during each cycle. Individual bits of the digital input are processed each cycle until the entire input is accounted for.
- The thermometer-coded DAC, which contains an equal resistor or current-source segment for each possible value of DAC output. An 8-bit thermometer DAC would have 255 segments, and a 16-bit thermometer DAC would have 65,535 segments. This is perhaps the fastest and highest precision DAC architecture but at the expense of high cost. Conversion speeds of >1 billion samples per second have been reached with this type of DAC.
- Hybrid DACs, which use a combination of the above techniques in a single converter. Most DAC integrated circuits are of this type due to the difficulty of getting low cost, high speed and high precision in one device.
- The segmented DAC, which combines the thermometer-coded principle for the most significant bits and the binary-weighted principle for the least significant bits. In this way, a compromise is obtained between precision (by the use of the thermometer-coded principle) and number of resistors or current sources (by the use of the binary-weighted principle).
The full binary-weighted design means 0% segmentation, while the full thermometer-coded design means 100% segmentation.
- Most DACs, shown earlier in this list, rely on a constant reference voltage to create their output value. Alternatively, a multiplying DAC takes a variable input voltage for the conversion. This puts additional design constraints on the bandwidth of the conversion circuit.

DAC performance

DACs are very important to system performance. The most important characteristics of these devices are:
- Resolution - The number of possible output levels the DAC is designed to reproduce. This is usually stated as the number of bits it uses, which is the base two logarithm of the number of levels. For instance a 1-bit DAC is designed to reproduce 2 (2^1) levels while an 8-bit DAC is designed for 256 (2^8) levels. Resolution is related to the effective number of bits, which is a measurement of the actual resolution attained by the DAC. Resolution determines color depth in video applications and audio bit depth in audio applications.
- Maximum sampling rate - A measurement of the maximum speed at which the DAC's circuitry can operate and still produce the correct output. The Nyquist–Shannon sampling theorem defines a relationship between the sampling frequency and the bandwidth of the sampled signal.
- Monotonicity - The ability of a DAC's analog output to move only in the direction that the digital input moves (i.e., if the input increases, the output doesn't dip before asserting the correct output). This characteristic is very important for DACs used as a low frequency signal source or as a digitally programmable trim element.
- Total harmonic distortion and noise (THD+N) - A measurement of the distortion and noise introduced to the signal by the DAC. It is expressed as a percentage of the total power of unwanted harmonic distortion and noise that accompany the desired signal. This is a very important DAC characteristic for dynamic and small signal DAC applications.
- Dynamic range - A measurement of the difference between the largest and smallest signals the DAC can reproduce, expressed in decibels. This is usually related to resolution and noise floor.

Other measurements, such as phase distortion and jitter, can also be very important for some applications, some of which (e.g. wireless data transmission, composite video) may even rely on accurate production of phase-adjusted signals. Linear PCM audio sampling usually works on the basis of each bit of resolution being equivalent to 6 decibels of amplitude (a 2x increase in volume or precision). Non-linear PCM encodings (A-law / μ-law, ADPCM, NICAM) attempt to improve their effective dynamic ranges by a variety of methods, chiefly logarithmic step sizes between the output signal strengths represented by each data bit (trading greater quantisation distortion of loud signals for better performance of quiet signals).

DAC figures of merit
- Static performance:
- Differential nonlinearity (DNL) shows how much two adjacent code analog values deviate from the ideal 1 LSB step.
- Integral nonlinearity (INL) shows how much the DAC transfer characteristic deviates from an ideal one. That is, the ideal characteristic is usually a straight line; INL shows how much the actual voltage at a given code value differs from that line, in LSBs (1 LSB steps).
- Noise is ultimately limited by the thermal noise generated by passive components such as resistors. For audio applications and at room temperatures, such noise is usually a little less than 1 μV (microvolt) of white noise.
This limits performance to less than 20–21 bits even in 24-bit DACs.
- Frequency domain performance:
- Spurious-free dynamic range (SFDR) indicates in dB the ratio between the powers of the converted main signal and the greatest undesired spur.
- Signal-to-noise and distortion ratio (SNDR) indicates in dB the ratio between the powers of the converted main signal and the sum of the noise and the generated harmonic spurs.
- i-th harmonic distortion (HDi) indicates the power of the i-th harmonic of the converted main signal.
- Total harmonic distortion (THD) is the sum of the powers of all HDi.
- If the maximum DNL error is less than 1 LSB, then the D/A converter is guaranteed to be monotonic. However, many monotonic converters may have a maximum DNL greater than 1 LSB.
- Time domain performance:
- Glitch impulse area (glitch energy)
- Response uncertainty
- Time nonlinearity (TNL)
http://en.wikipedia.org/wiki/Digital-to-analog_converter
In the previous section we saw how we could use the first derivative of a function to get some information about the graph of a function. In this section we are going to look at the information that the second derivative of a function can give us about the graph of a function. Before we do this we need to get a couple of definitions out of the way.

The main concept that we'll be discussing in this section is concavity. Concavity is easiest to see with a graph (we'll give the mathematical definition in a bit). So a function is concave up if it "opens" up and the function is concave down if it "opens" down. Notice as well that concavity has nothing to do with increasing or decreasing. A function can be concave up and either increasing or decreasing. Similarly, a function can be concave down and either increasing or decreasing.

It's probably not the best way to define concavity by saying which way it "opens" since this is a somewhat nebulous definition. Here is the mathematical definition of concavity: a function is concave up on an interval if all of its tangent lines on that interval lie below the graph, and concave down if all of its tangent lines lie above the graph.

To show that the graphs above do in fact have the concavity claimed, here is the graph again (blown up a little to make things clearer). So, as you can see, in the two upper graphs all of the tangent lines sketched in are below the graph of the function and these are concave up. In the lower two graphs all the tangent lines are above the graph of the function and these are concave down.

Again, notice that concavity and the increasing/decreasing aspect of the function are completely separate and neither has anything to do with the other. This is important to note because students often mix these two up and use information about one to get information about the other.

There's one more definition that we need to get out of the way. A point is called an inflection point if the function is continuous at the point and the concavity of the graph changes at that point.

Now that we have all the concavity definitions out of the way we need to bring the second derivative into the mix. We did after all start off this section saying we were going to be using the second derivative to get information about the graph. The following fact relates the second derivative of a function to its concavity: if the second derivative is positive on an interval then the function is concave up there, and if the second derivative is negative on an interval then the function is concave down there. The proof of this fact is in the Proofs From Derivative Applications section of the Extras chapter.

Notice that this fact tells us that a list of possible inflection points will be those points where the second derivative is zero or doesn't exist. Be careful however to not make the assumption that just because the second derivative is zero or doesn't exist that the point will be an inflection point. We will only know that it is an inflection point once we determine the concavity on both sides of it. It will only be an inflection point if the concavity is different on both sides of the point.

Now that we know about concavity we can use this information as well as the increasing/decreasing information from the previous section to get a pretty good idea of what a graph should look like. Let's take a look at an example of that.

Example 1 For the following function identify the intervals where the function is increasing and decreasing and the intervals where the function is concave up and concave down. Use this information to sketch the graph.

Okay, we are going to need the first two derivatives so let's get those first. Let's start with the increasing/decreasing information since we should be fairly comfortable with that after the last section. There are three critical points for this function.
Below is the number line for the first derivative. So, it looks like we've got the following intervals of increasing and decreasing. Note that from the first derivative test we can also say that one of the critical points is a relative maximum, another is a relative minimum, and the third is neither a relative minimum nor a relative maximum.

Now let's get the intervals where the function is concave up and concave down. If you think about it, this process is almost identical to the process we use to identify the intervals of increasing and decreasing. The only difference is that we will be using the second derivative instead of the first derivative.

The first thing that we need to do is identify the possible inflection points. These will be where the second derivative is zero or doesn't exist. The second derivative in this case is a polynomial and so will exist everywhere. It will be zero at three points.

As with the increasing and decreasing part we can draw a number line and use these points to divide the number line into regions. In these regions we know that the second derivative will always have the same sign since these three points are the only places where the second derivative may change sign. Therefore, all that we need to do is pick a point from each region and plug it into the second derivative. The second derivative will then have that sign in the whole region from which the point came.

Here is the number line for this second derivative. So, it looks like we've got the following intervals of concavity. This also means that all three of these points are inflection points.

All this information can be a little overwhelming when going to sketch the graph. The first thing that we should do is get some starting points. The critical points and inflection points are good starting points. So, first graph these points. Now, start to the left and start graphing the increasing/decreasing information as we did in the previous section when all we had was the increasing/decreasing information. As we graph this we will make sure that the concavity information matches up with what we're graphing. Using all this information gives the sketch of the graph.

We can use the previous example to illustrate another way to classify some of the critical points of a function as relative maximums or relative minimums. Notice that the relative maximum occurs at a point where the function is concave down. This means that the second derivative must be negative at that point. Likewise, the relative minimum occurs at a point where the function is concave up, so the second derivative must be positive there. As we'll see in a bit we will need to be very careful with the remaining critical point. In this case the second derivative is zero, but that does not, by itself, tell us that the point is not a relative minimum or maximum. We'll see some examples of this in a bit, but we need to get some other information taken care of first.

It is also important to note here that all of the critical points in this example were critical points at which the first derivative was zero, and this is required for this to work. We will not be able to use this test on critical points where the derivative doesn't exist. Here is the test that can be used to classify some of the critical points of a function: if the first derivative is zero at a point, then a positive second derivative there means a relative minimum, a negative second derivative means a relative maximum, and a zero second derivative means the test gives no information. The proof of this test is in the Proofs From Derivative Applications section of the Extras chapter.

The third part of the second derivative test is important to notice. If the second derivative is zero then the critical point can be anything. Below are the graphs of three functions, all of which have a critical point at the same value of x; the second derivative of all of the functions is zero there, and yet all three possibilities are exhibited.
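A quick computational check of this degenerate case, in a hedged Python/sympy sketch. The three functions x^4, -x^4, and x^3 are assumed stand-ins matching the behavior the text describes (the original formulas were lost):

```python
import sympy as sp

x = sp.symbols("x")
# All three have f'(0) = 0 and f''(0) = 0, yet classify differently.
for f in (x**4, -x**4, x**3):
    f1 = sp.diff(f, x)
    f2 = sp.diff(f, x, 2)
    print(f, "f''(0) =", f2.subs(x, 0),
          "| f' sign left/right:", f1.subs(x, -1), "->", f1.subs(x, 1))
# x**4:  f''(0)=0, f' goes -4 -> 4   (relative minimum)
# -x**4: f''(0)=0, f' goes 4 -> -4   (relative maximum)
# x**3:  f''(0)=0, f' goes 3 -> 3    (neither)
```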
The first graph has a relative minimum at the critical point. The next has a relative maximum there. Finally, the third graph has neither a relative minimum nor a relative maximum at that point. So, we can see that we have to be careful if we fall into the third case. For those times when we do fall into this case we will have to resort to other methods of classifying the critical point. This is usually done with the first derivative test. Let's go back and relook at the critical points from the first example and use the Second Derivative Test on them, if possible.

Let's work one more example.

Example 3 For the following function find the inflection points and use the second derivative test, if possible, to classify the critical points. Also, determine the intervals of increase/decrease and the intervals of concave up/concave down and sketch the graph of the function.

We'll need the first and second derivatives to get us started. There are two critical points: one where the derivative is zero and one where the derivative doesn't exist. Notice that we won't be able to use the second derivative test on the critical point where the derivative doesn't exist. To classify this point we'll need the increasing/decreasing information that we'll get when we sketch the graph. We can, however, use the Second Derivative Test to classify the other critical point, so let's do that before we proceed with the sketching work. The value of the second derivative at this point is negative, so according to the second derivative test this critical point is a relative maximum.

Now let's proceed with the work to get the sketch of the graph, and notice that once we have the increasing/decreasing information we'll be able to classify the remaining critical point. Here is the number line for the first derivative. So, according to the first derivative test we can verify that the critical point classified above is in fact a relative maximum. We can also see that the critical point where the derivative doesn't exist is a relative minimum.

Be careful not to assume that a critical point on which the second derivative test can't be used won't be a relative extremum. We've clearly seen now, both with this example and in the discussion after we stated the test, that just because the Second Derivative Test can't be used, or tells us nothing about a critical point, doesn't mean that the critical point will not be a relative extremum. This is a common mistake that many students make, so be careful when using the Second Derivative Test.

Okay, let's finish the problem out. We will need the list of possible inflection points, which come from the zeroes of the second derivative. Here is the number line for the second derivative. Note that we will need this to see if the two candidate points are in fact inflection points. The concavity only changes at one of the two, and so that is the only inflection point for this function.

Here is the sketch of the graph. The change of concavity at the inflection point is hard to see, but it is there; it's just a very subtle change in concavity.
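The whole workflow of this section can be automated. Here is a hedged Python/sympy sketch using an assumed example function (the document's own functions were lost in transcription), showing the critical points, the inflection-point candidates, and the second derivative test in one pass:

```python
import sympy as sp

x = sp.symbols("x")
h = 3 * x**5 - 5 * x**3 + 3   # assumed example; any polynomial works the same way

h1 = sp.diff(h, x)            # first derivative: critical points
h2 = sp.diff(h, x, 2)         # second derivative: concavity / inflection candidates
crit = sp.solve(sp.Eq(h1, 0), x)
infl = sp.solve(sp.Eq(h2, 0), x)
print("critical points:", crit)             # [-1, 0, 1]
print("possible inflection points:", infl)  # [0, -sqrt(2)/2, sqrt(2)/2]
for c in crit:
    # negative -> relative max, positive -> relative min, zero -> test fails
    print("f'' at", c, "=", h2.subs(x, c))
```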
http://tutorial.math.lamar.edu/classes/calcI/ShapeofGraphPtII.aspx
Statistical manipulation is often necessary to order, define, and/or organize raw data. A full analysis of statistics is beyond the scope of this work, but there are some standard analyses that anyone working in a cell biology laboratory should be aware of and know how to perform. After data is collected, it must be ordered, or grouped according to the information which is to be sought. Data is collected in the following forms:

Type of Data | Type of Entry
Nominal | yes or no
Ordinal | +, ++, +++
Numerical | 0, 1, 1.3, etc.

When collected, the data may appear to be a mere collection of numbers, with few apparent trends. It is first necessary to order those numbers. One method is to count the times a number falls within a range increment. For example, in tossing a coin, one would count the number of Heads and Tails (eliminating the possibility of it landing on its edge). Coin flipping is nominal data, and thus would have only two alternatives. Should we flip the coin 100 times, we could count the number of times it lands Heads and the number of Tails. We would thus accumulate data relative to the categories available. A simple table of the grouping would be known as a frequency distribution. Similarly, if we examine the following numbers: 3, 5, 4, 2, 5, 6, 2, 4, 4, several things are apparent. First, the data needs to be grouped, and the first task is to establish an increment for the categories. Let us group the data according to integers, with no rounding of decimals. We can construct a table which groups the data (integer x frequency):

Integer | Frequency
2 | 2
3 | 1
4 | 3
5 | 2
6 | 1

The mean of the data is the sum of the values divided by the number of values:

M = (sum of x) / n

These values can now be used to characterize distribution patterns of data. For our coin flipping, the likelihood of a Head or a Tail is equal. Another way of saying this is that there is equal probability of obtaining a Head or not obtaining a Head with each flip of the coin. When the situation exists that there is equal probability for an event as for the opposite event, the data will be graphed as a binomial distribution, and a Normal curve will result. If the coin is flipped ten times, the probability of one head and nine tails equals the probability of nine heads and one tail. The probability of two heads and eight tails equals the probability of eight heads and two tails, and so on. However, the probability of the latter pattern (two heads) is greater than the probability of the former (one head). The most likely arrangement is five heads and five tails. When random data is arranged and displays a binomial distribution, a plot of frequency vs. occurrence will result in a normal distribution curve. For an ideal set of data (i.e. no tricks, such as a two-headed coin, or gum on the edge of the coin), the data will be distributed in a bell-shaped curve, where the median, mode, and mean are equal. The mean alone does not give an accurate indication of the deviation of the data, and in particular does not inform us of the degree of dispersion of the data about the mean. The measure of the dispersion of data is known as the Standard Deviation. It is given mathematically by the formula:

S = sqrt( sum of (x - M)^2 / (n - 1) )

This value gives a measure of the variability of the data, and in particular, how it varies from an ideal set of data generated by a random binomial distribution. In other words, how different is it from an ideal Normal Distribution. The more variable the data, the higher the value of the standard deviation.
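To make the arithmetic concrete, the following short Python sketch (standard library only) tabulates the nine example values and computes the mean and standard deviation exactly as the formulas above prescribe:

from collections import Counter
from math import sqrt

data = [3, 5, 4, 2, 5, 6, 2, 4, 4]

# Frequency distribution: how many times each integer occurs.
freq = Counter(data)
for value in sorted(freq):
    print(f"{value}: {freq[value]}")

n = len(data)
mean = sum(data) / n                                      # M = (sum of x) / n
variance = sum((x - mean) ** 2 for x in data) / (n - 1)   # V = sum of (x - M)^2 / (n - 1)
std_dev = sqrt(variance)                                  # S = sqrt(V)
print(f"mean = {mean:.2f}, standard deviation = {std_dev:.2f}")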
Other measures of variability are the range (the difference between the minimum and maximum values), the Coefficient of Variation (the Standard Deviation divided by the Mean, expressed as a percent), and the Variance. The variance measures the deviation of all values from the mean and must be calculated relative to the total number of values. Variance can be calculated from the formula:

V = sum of (x - M)^2 / (n - 1)

All of these calculated parameters are for a single set of data that conforms to a normal distribution. Unfortunately, biological data does not always conform in this way, and often sets of data must be compared. If the data does not fit a binomial distribution, often it fits a skewed plot known as a Poisson distribution. This distribution occurs when the probability of an event is so low that the probability of its not occurring approaches 1. While this is a significant statistical event in biology, details of the Poisson Distribution are left to texts on biological statistics, as is the proper handling of comparisons of multiple sets of data. Suffice it to say that all statistics comparing multiple sets begin with calculation of the parameters detailed here, for each set of data. For example, the standard error of the mean (also known simply as the standard error) is often used to measure distinctions among populations. It is defined as the standard deviation of a distribution of means. Thus, the mean for each population is computed, and the collection of means is then used to calculate a standard deviation of those means. Once all of these parameters are calculated, the general aim of statistical analysis is to estimate the significance of the data, and in particular the probability that the data represents effects of experimental treatment or, conversely, pure random distribution. Tests of significance (Student's t Test, Analysis of Variance, and Confidence Limits) will be left to more extensive treatment in other volumes.
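As an illustration of the standard error of the mean, the sketch below computes the mean of each of three samples (the sample values are invented purely for illustration) and then takes the standard deviation of those means:

from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def sample_std(xs):
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Three hypothetical samples drawn from the same population.
samples = [
    [3, 5, 4, 2, 5],
    [4, 4, 6, 3, 5],
    [2, 5, 5, 4, 4],
]

sample_means = [mean(s) for s in samples]
standard_error = sample_std(sample_means)   # SD of the distribution of means
print(f"sample means = {sample_means}, standard error = {standard_error:.3f}")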
http://homepages.gac.edu/~cellab/appds/appd-b.html
Related Topics: Euler's Equation

A quaternion is a geometrical operator representing the relationship (relative length and relative orientation) between two vectors in 3D space. William Hamilton invented quaternions and developed the calculus of quaternions to generalize complex numbers to 4 dimensions (one real part and 3 imaginary parts). In this article, we focus on rotations of 3D vectors, because a quaternion implementation of 3D rotation is usually simpler, cheaper, and better behaved than other methods. Euler's equation (formula) can be used to represent a 2D point with a length and angle on a complex plane, and multiplication of 2 complex numbers implies a rotation in 2D. One might think it can instantly be extended to 3D rotation by adding an additional dimension. William Hamilton initially studied this by adding an additional imaginary number, j, to generalize complex numbers to 3D. However, the set of 3-dimensional complex numbers is not closed under multiplication. For example, the product of i and j cannot be represented in the form a+ib+jc. If multiplication were closed, there would exist a, b, c ∈ R satisfying ij = a+ib+jc. Multiplying both sides on the left by i gives -j = ai - b + c(ij); substituting ij = a+ib+jc and comparing the coefficients of j forces c² + 1 = 0, which gives the contradiction: there is no real number c satisfying c² + 1 = 0. Later, Hamilton realized 4-dimensional complex numbers are required for multiplication to be closed, obtained by adding an additional imaginary part, k. He denoted this 4D complex number set as the quaternions. (In mathematics, each algebra in this sequence has twice the dimension of the previous one. Therefore, the next number system after the complex numbers is the quaternions, and the one after that is the octonions.) The above example is now expressible: ij = 0 + i0 + j0 + k1. The need for 3 imaginary numbers is explained later, under quadrantal vectors. Hamilton's motivation was to create a geometrical operator transforming one vector into another in 3D space. This operator is the geometric quotient (ratio) between two vectors that changes the length and the orientation, and it is called a quaternion because the operation requires 4 parameters. (Note that this notation is not the same as numerical division or multiplication. Instead, it is read as "the quaternion operator q acting on one vector to produce the other, or the operator q converting one vector into the other".) When one vector is transformed into another, the quaternion operator performs 2 distinct operations: 1. Tensor: scaling the length of the first vector, so as to make it of the same length as the second. 2. Versor: rotating the first vector, so as to cause it to coincide with the second in direction. These 2 operations can be symbolically represented as the tensor of q and the versor of q, respectively. The order of these two operations makes no difference to the result. The combined tensor and versor operation requires 4 numbers: 1 for scaling, 1 for the rotation angle, and 2 additional angles to determine the orientation of the rotation plane. For example, the xy-plane is rotated about the x- and y-axes, but rotation about the z-axis does not change the orientation of the xy-plane. The tensor of q is the geometric quotient (ratio) between the lengths of 2 parallel vectors. It changes the scale of the vector but keeps the orientation of the vector unchanged. The versor of q is the geometric quotient of 2 non-parallel vectors of equal length. It represents the relative orientation of one vector with respect to the other, but it does not change the length of the vector. To investigate the quaternion operator more deeply, take unit vectors along the two vectors, draw AC perpendicular to the first vector, and let a unit vector lie along AC.
Substituting these into the equation above, we obtain a quaternion. The last term contains the geometric quotient of two unit vectors at right angles (90 degrees) to each other. This quotient represents a unit vector perpendicular to the plane of the two original unit vectors (think of the cross product of 2 vectors). This unit vector indicates the rotation axis of the plane and the direction of rotation. With that definition, the equation shows that A/B is the tensor (scaling) operation of the quaternion, and the remaining factor is the versor (rotation) operator. Note that the versor of q is very similar to Euler's equation. Euler's equation contains an imaginary number i, but a quaternion has a vector instead, which is the rotation axis perpendicular to its rotation plane. Thus, a quaternion is also expressed as the sum of a scalar part S(q) and a vector part V(q). A quaternion can also be written in 2-tuple form, [s, v]. A unit quaternion has tensor 1 (no scaling). Let i, j, k represent unit vectors orthogonal (perpendicular) to each other. We define multiplications and divisions as rotating one unit vector into another at a right angle. Note that this multiplication and division is not numeric algebra; this kind of product and quotient is called geometric. Thus, the product or quotient of two unit vectors at right angles to each other produces a unit vector perpendicular to their plane, and it reads as i operating on j (or rotating from i to j) produces k. In the same manner, we can write the other multiplications and divisions: ij = k, jk = i, ki = j, and ji = -k, kj = -i, ik = -j. The square of a unit vector can be defined from the above equations: i² = j² = k² = -1. Also, ijk can be shown to equal -1 using the multiplication table above and the square of a unit vector. Here, the basic quaternion mathematics is described. These algebraic definitions and properties are specifically required for rotation in 3D space, which is described in the next section. You may skip this section and move on to the next section, coming back later if you need to review a specific definition or property. Note that quaternion multiplication is not commutative; however, multiplication is associative and distributive across addition. Unlike quaternion multiplication, scalar multiplication is commutative. Quaternion subtraction can be derived from scalar multiplication and quaternion addition. The quaternion conjugate is defined by negating the vector part of the quaternion: q* = [s, -v]. Note that the multiplication of a quaternion and its conjugate is commutative. The norm of a quaternion is defined by ‖q‖ = √(s² + |v|²). The norm of a quaternion is multiplicative, meaning that the norm of a product of quaternions equals the product of the norms of those quaternions. The inverse of a quaternion is defined to be q⁻¹ = q*/‖q‖². The quaternion inverse makes it possible to divide two quaternions. Note that the inverse of a unit quaternion equals the conjugate of the unit quaternion. In 2D, the multiplication of two complex numbers implies a 2D rotation. When z = x + iy is multiplied by e^(iθ), the length remains the same (|z| = |z'|), but the angle of z' is increased by θ. (See details in Euler's equation.) However, multiplying a quaternion p by a unit quaternion q does not conserve the length (norm) of the vector part of the quaternion p. Thus, we need a special multiplication for 3D rotations that is a length-conserving transformation. For this reason, we multiply the unit quaternion q at the front of p and multiply q⁻¹ at the back of p, in order to cancel out the length changes.
This special double multiplication is called "conjugation by q". If q is a unit quaternion and p = [s, v], then the scalar s and the length of v, |v|, are unchanged after conjugation by q: if p = [s, v] and p' = qpq⁻¹, then p' = [s, v'] where |v| = |v'|. The proof takes 3 steps. First, we show S(p) = S(p') for p = [s, 0], then for p = [0, v]. Finally, both results are used to show S(p) = S(p') for p = [s, v] = [s, 0] + [0, v]. 1. If p has a scalar part only, p = [s, 0]. 2. If p has a vector part only, p = [0, v]; here the scalar part of qpq⁻¹ can be computed using 2S(q) = q + q*. 3. Finally, p = [s, v] = [s, 0] + [0, v] combines the two results. Since the scalar parts of p and p' are the same, and the norm is multiplicative with ‖q‖ = 1, we have ‖p'‖ = ‖p‖, from which it follows that |v| = |v'|.
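A small numeric sketch helps confirm that conjugation by a unit quaternion preserves the scalar part and rotates the vector part without changing its length. The Python implementation below is a minimal illustration written for this article, not a library API:

import math

def qmul(a, b):
    # Hamilton product of quaternions given as tuples (s, x, y, z).
    s1, x1, y1, z1 = a
    s2, x2, y2, z2 = b
    return (s1*s2 - x1*x2 - y1*y2 - z1*z2,
            s1*x2 + x1*s2 + y1*z2 - z1*y2,
            s1*y2 - x1*z2 + y1*s2 + z1*x2,
            s1*z2 + x1*y2 - y1*x2 + z1*s2)

def conjugate(q):
    s, x, y, z = q
    return (s, -x, -y, -z)

# Unit quaternion for a rotation by angle theta about the z-axis:
# q = [cos(theta/2), sin(theta/2) * axis]
theta = math.pi / 2
q = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))

p = (0.0, 1.0, 0.0, 0.0)                 # pure quaternion carrying the vector (1, 0, 0)
p_rot = qmul(qmul(q, p), conjugate(q))   # conjugation q p q^-1 (q is unit, so q^-1 = q*)

print(p_rot)   # ~(0, 0, 1, 0): the vector (1, 0, 0) rotated 90 degrees to (0, 1, 0)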
http://www.songho.ca/math/quaternion/quaternion.html
Reacting gas volume ratios of reactants or products (Avogadro's Law, Gay-Lussac's Law)

In the diagram above, if the gas volume in the left syringe is twice that of the gas volume in the right, then there are twice as many moles or actual molecules in the left-hand gas syringe. Gay-Lussac's Law of volumes states that 'gases combine with each other in simple proportions by volume'. Avogadro's Law states that 'equal volumes of gases at the same temperature and pressure contain the same number of molecules' or moles of gas. This means the mole ratio of the equation (the relative moles of reactants and products) automatically gives us the gas volume ratio of reactants and products, if all the gas volumes are measured at the same temperature and pressure. These calculations only apply to gaseous reactants or products, AND only if they are all at the same temperature and pressure.

Example 10.1: Given the equation: HCl(g) + NH3(g) ==> NH4Cl(s). 1 mole of hydrogen chloride gas combines with 1 mole of ammonia gas to give 1 mole of ammonium chloride solid. So 1 volume of hydrogen chloride will react with 1 volume of ammonia to form solid ammonium chloride, e.g. 25 cm3 + 25 cm3 ==> solid product, or 400 dm3 + 400 dm3 ==> solid product (no gaseous product).

Example 10.2: Given the equation: N2(g) + 3H2(g) ==> 2NH3(g). 1 mole of nitrogen gas combines with 3 moles of hydrogen gas to form 2 moles of ammonia gas, so 1 volume of nitrogen reacts with 3 volumes of hydrogen to produce 2 volumes of ammonia, e.g. 50 cm3 of nitrogen reacts with 150 cm3 of hydrogen (3 x 50) ==> 100 cm3 of ammonia (2 x 50).

Example 10.3: Given the equation: C3H8(g) + 5O2(g) ==> 3CO2(g) + 4H2O(l). 1 mole of propane gas reacts with 5 moles of oxygen gas to form 3 moles of carbon dioxide gas and 4 moles of liquid water.
(a) What volume of oxygen is required to burn 25 cm3 of propane, C3H8? The volume ratio C3H8 : O2 is 1 : 5 for burning the fuel propane, so the actual ratio is 25 : 5x25, and 125 cm3 of oxygen is needed.
(b) What volume of carbon dioxide is formed if 5 dm3 of propane is burned? The reactant-product volume ratio C3H8 : CO2 is 1 : 3, so the actual ratio is 5 : 3x5, and 15 dm3 of carbon dioxide is formed.
(c) What volume of air (1/5th oxygen) is required to burn propane at the rate of 2 dm3 per minute in a gas fire? The volume ratio C3H8 : O2 is 1 : 5, so the actual ratio is 2 : 5x2, and 10 dm3 of oxygen per minute is needed; therefore, since air is only 1/5th O2, 5 x 10 = 50 dm3 of air per minute is required.

Example 10.4: Given the equation: 2H2(g) + O2(g) ==> 2H2O(l)

Example 10.5: It was found that exactly 10 cm3 of bromine vapour (Br2(g)) combined with exactly 30 cm3 of chlorine gas (Cl2(g)) to form the bromine-chlorine compound BrClx.
(a) From the reacting gas volume ratio, what must be the value of x? Hence write the formula of the compound.
(b) Write a balanced equation to show the formation of BrClx.
The reacting gas volume ratio is 1 : 3; therefore we can write with certainty that 1 mole (or molecule) of bromine reacts with 3 moles (or molecules) of chlorine, so x = 3 and the formula is BrCl3. Balancing the symbol equation, Br2(g) + 3Cl2(g) ==> 2BrCl3(g), shows that two moles (or molecules) of the bromine-chlorine compound are formed.
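Because the gas volume ratio simply mirrors the mole ratio, these calculations reduce to a single multiplication. Here is a small Python sketch; the helper function and its name are ours, for illustration only:

def scaled_volume(given_volume, given_coeff, wanted_coeff):
    # Scale a gas volume by the mole ratio from the balanced equation.
    return given_volume * wanted_coeff / given_coeff

# Example 10.3: C3H8(g) + 5O2(g) ==> 3CO2(g) + 4H2O(l)
print(scaled_volume(25, 1, 5))      # (a) 125 cm3 of O2 to burn 25 cm3 of propane
print(scaled_volume(5, 1, 3))       # (b) 15 dm3 of CO2 from 5 dm3 of propane
print(5 * scaled_volume(2, 1, 5))   # (c) 50 dm3 of air per minute (air is 1/5th O2)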
http://www.docbrown.info/page04/4_73calcs10rgv.htm
In implementing the Algebra process and content performance indicators, it is expected that students will identify and justify mathematical relationships. The intent of both the process and content performance indicators is to provide a variety of ways for students to acquire and demonstrate mathematical reasoning ability when solving problems. Local curriculum and local/state assessments must support and allow students to use any mathematically correct method when solving a problem. Throughout this document the performance indicators use the words investigate, explore, discover, conjecture, reasoning, argument, justify, explain, proof, and apply. Each of these terms is an important component in developing a student's mathematical reasoning ability. It is therefore important that a clear and common definition of these terms be understood. The order of these terms reflects different stages of the reasoning process. Investigate/Explore - Students will be given situations in which they will be asked to look for patterns or relationships between elements within the setting. Discover - Students will make note of possible patterns and generalizations that result from investigation/exploration. Conjecture - Students will make an overall statement, thought to be true, about the new discovery. Reasoning - Students will engage in a process that leads to knowing something to be true or false. Argument - Students will communicate, in verbal or written form, the reasoning process that leads to a conclusion. A valid argument is the end result of the conjecture/reasoning process. Justify/Explain - Students will provide an argument for a mathematical conjecture. It may be an intuitive argument or a set of examples that support the conjecture. The argument may include, but is not limited to, a written paragraph, measurement using appropriate tools, the use of dynamic software, or a written proof. Proof - Students will present a valid argument, expressed in written form, justified by axioms, definitions, and theorems. Apply - Students will use a theorem or concept to solve an algebraic or numerical problem.
A.PS.1 Use a variety of problem solving strategies to understand new mathematical content
A.PS.2 Recognize and understand equivalent representations of a problem situation or a mathematical concept
A.PS.3 Observe and explain patterns to formulate generalizations and conjectures
A.PS.4 Use multiple representations to represent and explain problem situations (e.g., verbally, numerically, algebraically, graphically)
A.PS.5 Choose an effective approach to solve a problem from a variety of strategies (numeric, graphic, algebraic)
A.PS.6 Use a variety of strategies to extend solution methods to other problems
A.PS.7 Work in collaboration with others to propose, critique, evaluate, and value alternative approaches to problem solving
A.PS.8 Determine information required to solve a problem, choose methods for obtaining the information, and define parameters for acceptable solutions
A.PS.9 Interpret solutions within the given constraints of a problem
A.PS.10 Evaluate the relative efficiency of different representations and solution methods of a problem
A.RP.1 Recognize that mathematical ideas can be supported by a variety of strategies
A.RP.2 Use mathematical strategies to reach a conclusion and provide supportive arguments for a conjecture
A.RP.3 Recognize when an approximation is more appropriate than an exact answer
A.RP.4 Develop, verify, and explain an argument, using appropriate mathematical ideas and language
A.RP.5 Construct logical arguments that verify claims or counterexamples that refute them
A.RP.6 Present correct mathematical arguments in a variety of forms
A.RP.7 Evaluate written arguments for validity
A.RP.8 Support an argument by using a systematic approach to test more than one case
A.RP.9 Devise ways to verify results or use counterexamples to refute incorrect statements
A.RP.10 Extend specific results to more general cases
A.RP.11 Use a Venn diagram to support a logical argument
A.RP.12 Apply inductive reasoning in making and supporting mathematical conjectures
A.CM.1 Communicate verbally and in writing a correct, complete, coherent, and clear design (outline) and explanation for the steps used in solving a problem
A.CM.2 Use mathematical representations to communicate with appropriate accuracy, including numerical tables, formulas, functions, equations, charts, graphs, Venn diagrams, and other diagrams
A.CM.3 Present organized mathematical ideas with the use of appropriate standard notations, including the use of symbols and other representations when sharing an idea in verbal and written form
A.CM.4 Explain relationships among different representations of a problem
A.CM.5 Communicate logical arguments clearly, showing why a result makes sense and why the reasoning is valid
A.CM.6 Support or reject arguments or questions raised by others about the correctness of mathematical work
A.CM.7 Read and listen for logical understanding of mathematical thinking shared by other students
A.CM.8 Reflect on strategies of others in relation to one's own strategy
A.CM.9 Formulate mathematical questions that elicit, extend, or challenge strategies, solutions, and/or conjectures of others
A.CM.10 Use correct mathematical language in developing mathematical questions that elicit, extend, or challenge other students' conjectures
A.CM.11 Represent word problems using standard mathematical notation
A.CM.12 Understand and use appropriate language, representations, and terminology when describing objects, relationships, mathematical solutions, and rationale
A.CM.13 Draw conclusions about mathematical ideas through decoding, comprehension,
and interpretation of mathematical visuals, symbols, and technical writing
A.CN.1 Understand and make connections among multiple representations of the same mathematical idea
A.CN.2 Understand the corresponding procedures for similar problems or mathematical concepts
A.CN.3 Model situations mathematically, using representations to draw conclusions and formulate new situations
A.CN.4 Understand how concepts, procedures, and mathematical results in one area of mathematics can be used to solve problems in other areas of mathematics
A.CN.5 Understand how quantitative models connect to various physical models and representations
A.CN.6 Recognize and apply mathematics to situations in the outside world
A.CN.7 Recognize and apply mathematical ideas to problem situations that develop outside of mathematics
A.CN.8 Develop an appreciation for the historical development of mathematics
A.R.1 Use physical objects, diagrams, charts, tables, graphs, symbols, equations, or objects created using technology as representations of mathematical concepts
A.R.2 Recognize, compare, and use an array of representational forms
A.R.3 Use representation as a tool for exploring and understanding mathematical ideas
A.R.4 Select appropriate representations to solve problem situations
A.R.5 Investigate relationships between different representations and their impact on a given problem
A.R.6 Use mathematics to show and understand physical phenomena (e.g., find the height of a building if a ladder of a given length forms a given angle of elevation with the ground)
A.R.7 Use mathematics to show and understand social phenomena (e.g., determine profit from student and adult ticket sales)
A.R.8 Use mathematics to show and understand mathematical phenomena (e.g., compare the graphs of the functions represented by two given equations)
Students will understand numbers, multiple ways of representing numbers, relationships among numbers, and number systems.
A.N.1 Identify and apply the properties of real numbers (closure, commutative, associative, distributive, identity, inverse) Note: Students do not need to identify groups and fields, but students should be engaged in the ideas.
Students will understand meanings of operations and procedures, and how they relate to one another.
A.N.2 Simplify radical terms (no variable in the radicand)
A.N.3 Perform the four arithmetic operations using like and unlike radical terms and express the result in simplest form
A.N.4 Understand and use scientific notation to compute products and quotients of numbers greater than 100
A.N.5 Solve algebraic problems arising from situations that involve fractions, decimals, percents (decrease/increase and discount), and proportionality/direct variation
A.N.6 Evaluate expressions involving factorial(s), absolute value(s), and exponential expression(s)
A.N.7 Determine the number of possible events, using counting techniques or the Fundamental Principle of Counting
A.N.8 Determine the number of possible arrangements (permutations) of a list of items
Variables and Expressions
A.A.1 Translate a quantitative verbal phrase into an algebraic expression
A.A.2 Write verbal expressions that match given mathematical expressions
A.A.3 Distinguish the difference between an algebraic expression and an algebraic equation
A.A.4 Translate verbal sentences into mathematical equations or inequalities
A.A.5 Write algebraic equations or inequalities that represent a situation
A.A.6 Analyze and solve verbal problems whose solution requires solving a linear equation in one variable or linear inequality in one variable
A.A.7 Analyze and solve verbal problems whose solution requires solving systems of linear equations in two variables
A.A.8 Analyze and solve verbal problems that involve quadratic equations
A.A.9 Analyze and solve verbal problems that involve exponential growth and decay
A.A.10 Solve systems of two linear equations in two variables algebraically (See A.G.7)
A.A.11 Solve a system of one linear and one quadratic equation in two variables, where only factoring is required Note: The quadratic equation should represent a parabola and the solution(s) should be integers.
Variables and Expressions
A.A.12 Multiply and divide monomial expressions with a common base, using the properties of exponents Note: Use integral exponents only.
A.A.13 Add, subtract, and multiply monomials and polynomials
A.A.14 Divide a polynomial by a monomial or binomial, where the quotient has no remainder
A.A.15 Find values of a variable for which an algebraic fraction is undefined
A.A.16 Simplify fractions with polynomials in the numerator and denominator by factoring both and renaming them to lowest terms
A.A.17 Add or subtract fractional expressions with monomial or like binomial denominators
A.A.18 Multiply and divide algebraic fractions and express the product or quotient in simplest form
A.A.19 Identify and factor the difference of two perfect squares
A.A.20 Factor algebraic expressions completely, including trinomials with a lead coefficient of one (after factoring a GCF)
Equations and Inequalities
A.A.21 Determine whether a given value is a solution to a given linear equation in one variable or linear inequality in one variable
A.A.22 Solve all types of linear equations in one variable
A.A.23 Solve literal equations for a given variable
A.A.24 Solve linear inequalities in one variable
A.A.25 Solve equations involving fractional expressions Note: Expressions which result in linear equations in one variable.
A.A.26 Solve algebraic proportions in one variable which result in linear or quadratic equations
A.A.27 Understand and apply the multiplication property of zero to solve quadratic equations with integral coefficients and integral roots
A.A.28 Understand the difference and connection between roots of a quadratic equation and factors of a quadratic expression
Patterns, Relations, and Functions
A.A.29 Use set-builder notation and/or interval notation to illustrate the elements of a set, given the elements in roster form
A.A.30 Find the complement of a subset of a given set, within a given universe
A.A.31 Find the intersection of sets (no more than three sets) and/or union of sets (no more than three sets)
A.A.32 Explain slope as a rate of change between dependent and independent variables
A.A.33 Determine the slope of a line, given the coordinates of two points on the line
A.A.34 Write the equation of a line, given its slope and the coordinates of a point on the line
A.A.35 Write the equation of a line, given the coordinates of two points on the line
A.A.36 Write the equation of a line parallel to the x- or y-axis
A.A.37 Determine the slope of a line, given its equation in any form
A.A.38 Determine if two lines are parallel, given their equations in any form
A.A.39 Determine whether a given point is on a line, given the equation of the line
A.A.40 Determine whether a given point is in the solution set of a system of linear inequalities
A.A.41 Determine the vertex and axis of symmetry of a parabola, given its equation (See A.G.10)
A.A.42 Find the sine, cosine, and tangent ratios of an angle of a right triangle, given the lengths of the sides
A.A.43 Determine the measure of an angle of a right triangle, given the length of any two sides of the triangle
A.A.44 Find the measure of a side of a right triangle, given an acute angle and the length of another side
A.A.45 Determine the measure of a third side of a right triangle using the Pythagorean theorem, given the lengths of any two sides
A.G.1 Find the area and/or perimeter of figures composed of polygons and circles or sectors of a circle Note: Figures may include triangles, rectangles, squares, parallelograms, rhombuses, trapezoids, circles, semi-circles, quarter-circles, and regular polygons (perimeter only).
A.G.2 Use formulas to calculate volume and surface area of rectangular solids and cylinders
Students will apply coordinate geometry to analyze problem solving situations.
A.G.3 Determine when a relation is a function, by examining ordered pairs and inspecting graphs of relations
A.G.4 Identify and graph linear, quadratic (parabolic), absolute value, and exponential functions
A.G.5 Investigate and generalize how changing the coefficients of a function affects its graph
A.G.6 Graph linear inequalities
A.G.7 Graph and solve systems of linear equations and inequalities with rational coefficients in two variables (See A.A.10)
A.G.8 Find the roots of a parabolic function graphically Note: Only quadratic equations with integral solutions.
A.G.9 Solve systems of linear and quadratic equations graphically Note: Only use systems of linear and quadratic equations that lead to solutions whose coordinates are integers.
A.G.10 Determine the vertex and axis of symmetry of a parabola, given its graph (See A.A.41) Note: The vertex will have an ordered pair of integers and the axis of symmetry will have an integral value.
Units of Measurement
A.M.1 Calculate rates using appropriate units (e.g., rate of a space ship versus the rate of a snail)
A.M.2 Solve problems involving conversions within measurement systems, given the relationship between the units
Students will understand that all measurement contains error and be able to determine its significance.
Error and Magnitude
A.M.3 Calculate the relative error in measuring square and cubic units, when there is an error in the linear measure
Students will collect, organize, display, and analyze data.
Organization and Display of Data
A.S.1 Categorize data as qualitative or quantitative
A.S.2 Determine whether the data to be analyzed is univariate or bivariate
A.S.3 Determine when collected data or display of data may be biased
A.S.4 Compare and contrast the appropriateness of different measures of central tendency for a given data set
A.S.5 Construct a histogram, cumulative frequency histogram, and a box-and-whisker plot, given a set of data
A.S.6 Understand how the five statistical summary (minimum, maximum, and the three quartiles) is used to construct a box-and-whisker plot
A.S.7 Create a scatter plot of bivariate data
A.S.8 Construct manually a reasonable line of best fit for a scatter plot and determine the equation of that line
Analysis of Data
A.S.9 Analyze and interpret a frequency distribution table or histogram, a cumulative frequency distribution table or histogram, or a box-and-whisker plot
A.S.10 Evaluate published reports and graphs that are based on data by considering: experimental design, appropriateness of the data analysis, and the soundness of the conclusions
A.S.11 Find the percentile rank of an item in a data set and identify the point values for first, second, and third quartiles
A.S.12 Identify the relationship between the independent and dependent variables from a scatter plot (positive, negative, or none)
A.S.13 Understand the difference between correlation and causation
A.S.14 Identify variables that might have a correlation but not a causal relationship
Students will make predictions that are based upon data analysis.
Predictions from Data
A.S.15 Identify and describe sources of bias and its effect, drawing conclusions from data
A.S.16 Recognize how linear transformations of one-variable data affect the data's mean, median, mode, and range
A.S.17 Use a reasonable line of best fit to make a prediction involving interpolation or extrapolation
Students will understand and apply concepts of probability.
A.S.18 Know the definition of conditional probability and use it to solve for probabilities in finite sample spaces
A.S.19 Determine the number of elements in a sample space and the number of favorable events
A.S.20 Calculate the probability of an event and its complement
A.S.21 Determine empirical probabilities based on specific sample data
A.S.22 Determine, based on calculated probability of a set of events, if the events are equally likely to occur, if one is more likely to occur than another, or if an event is certain to happen or not to happen
A.S.23 Calculate the probability of a series of independent or dependent events, and of two mutually exclusive or not mutually exclusive events
http://www.p12.nysed.gov/ciai/mst/math/standards/algebra.html
The Horse and Cart Problem. A horse is harnessed to a cart. If the horse tries to pull the cart, the horse must exert a force on the cart. By Newton's third law the cart must then exert an equal and opposite force on the horse. Newton's second law tells us that acceleration is equal to the net force divided by the mass of the system. (F = ma, so a = F/m.) Since the two forces are equal and opposite, they must add to zero, so Newton's second law tells us that the acceleration of the system must be zero. If it doesn't accelerate, and it started at rest, it must remain at rest (by the definition of acceleration), and therefore no matter how hard the horse pulls, it can never move the cart. List all the physical errors and mistakes in the above paragraph and explain why they are wrong. Show a free-body force diagram of the horse and cart, identifying all relevant forces, and then write a short paragraph describing this situation correctly. Answer. Mistakes in the paragraph include: In these diagrams, an oval or a circle has been used to enclose the subsystem being analyzed. The forces on the cart include the forward force B the horse exerts on the cart and the backward force A due to friction at the ground, acting on the wheels. At rest, or at constant velocity, these two are equal in size, because the acceleration of the cart is zero; therefore A = -B. The forces on the horse include the backward force C the cart exerts on the horse and the forward force D of the ground on its hooves. At rest, or at constant velocity, these two are equal in size, because the acceleration of the horse is zero, so C = -D, by Newton's second law. The force the horse exerts on the cart is of equal size and opposite direction to the force the cart exerts on the horse, by Newton's third law. (These two forces are an action-reaction pair.) So B = -C, and this is true whether or not anything is accelerating. Finally, we see that all the forces shown in the diagram are the same size. As the horse exerts greater force, both horse and cart move, accelerating from zero to some velocity. During that acceleration the net forward force on the horse must be greater than the net backward force on the horse. And also, the net forward force on the cart must be greater than the net backward force on the cart. This is from Newton's second law. So what has changed? The friction on the horse's feet is now greater than the friction on the cart's wheels. The friction on the cart's wheels is rolling resistance, and is primarily dependent on the size of the cart's load, and not on its speed. So it hasn't changed much. But, because it is accelerating, the force the horse exerts on the cart has increased. By Newton's third law, the force of the cart on the horse has increased by the same amount. But the horse is also accelerating, so the friction of the ground on its hooves must be larger than the force the cart exerts on the horse. The friction between hooves and ground is static (not sliding or rolling) friction, and can increase as necessary (up to a limit, when slipping might occur, as on a slippery mud surface or loose gravel). So, when accelerating, we still have B = -C, by Newton's third law, but comparing sizes of the vectors (without regard to sign or direction), D > C and B > A, so D > A. Also D - C = Ma, where M is the horse's mass, and B - A = ma, where m is the mass of the cart. You can even conclude that D - A = (M+m)a by considering the system to be the horse and cart together.
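To see these relations numerically, here is a tiny Python sketch; the force and mass values are invented purely for illustration:

# Sizes of horizontal forces (newtons) and masses (kg); values are illustrative.
D = 900.0   # forward force of the ground on the horse's hooves
A = 300.0   # backward rolling resistance on the cart's wheels
M = 500.0   # mass of horse
m = 250.0   # mass of cart

accel = (D - A) / (M + m)   # D - A = (M + m) * a, Newton's 2nd law for the whole system
B = A + m * accel           # B - A = m * a: force the horse exerts on the cart
C = D - M * accel           # D - C = M * a: force the cart exerts on the horse

print(f"acceleration = {accel:.2f} m/s^2")
print(f"B = {B:.1f} N, C = {C:.1f} N (equal in size, as Newton's 3rd law requires)")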
Note that the net force on the entire system is D+C+B+A = D+A, since B = -C by Newton's third law. The net force on the system has size D - A, and is in the forward direction, since A is smaller than D and opposite in direction. B and C are considered an internal reaction force pair, and therefore always are equal and opposite, and always add vectorially to zero. Therefore, they are usually ignored when summing forces on the entire system. But, as we have seen, some information about the system can only be extracted by subdividing the system into component parts, and then these forces internal to the entire system cannot be neglected, for one of each pair is an external force for a subsystem. It all begins with the horse, of course. The horse places its feet so as to change the angle of the force its hooves exert on the ground, thereby increasing the backward force of its hooves on the ground. The forward reaction force of the ground on the horse's hooves therefore increases, by Newton's third law. This increases the tension on the harness and, finally, the force on the cart. When the desired forward speed is reached, the horse decreases its effort, and at constant speed all these horizontal forces we talked about above are again equal in size, and there's no further acceleration. Note: The diagrams show the action-reaction pair B and C without being precise about where they act. In problems of this kind the major objects are connected by a rope or light rod, in this case by a harness and wagon tree. These have negligible mass compared to the horse and wagon, and their mass is ignored (treated as if it were zero). You may worry that we should treat this as a three-body system, and write equations for the third body (the harness and wagon tree). But these, being of small mass, have negligible effect on the forces in the problem, because if only two forces are acting on a zero-mass body, they must add to zero, so they must therefore be equal in size and opposite in direction. When only two forces act on a nearly zero-mass body, they are nearly equal in size and opposite in direction. This is a conclusion from Newton's second law. Too often textbooks assume this simplification in problems of this sort without mentioning or justifying it, thereby causing student confusion. To answer this question we needed only to consider the horizontal force components on the bodies. The vertical force components, downward forces due to gravity and upward forces of the road on cart and horse, do not contribute to the horizontal motion. Also, since nothing is undergoing rotational acceleration, we needn't bother with torques. What force pushes the horse forward? It's the force exerted by the ground! The horse pushes backward on the ground, so the ground pushes forward with an equal force. If the horse pushes back against the ground with a force greater than the cart's resisting force, only then will the horse accelerate. And, being fastened to the horse, the cart must also accelerate along with the horse. If every force has an equal-size and oppositely directed reaction force, how can anything move? It is because action and reaction forces act on different bodies. Follow-up question. We've said before that work is force × distance, so a force does no work on a body unless the body moves. But the force of the ground on each hoof does not involve motion of either the ground or the hoof. So how can the ground do any work on the horse? Answer. It doesn't.
The horse does all the work through motions of the muscles in its body. The ground provides an anchor for its hooves at each step. The ground (or pavement) isn't doing any of that work. Sometimes we forget this as we do problems. Suppose you are on roller skates and you push yourself away from a solid wall. We often calculate the work done in that process by multiplying the force the wall exerts on you by the distance you move as you push. (Actually an integral is required if that force is not constant.) But that work is still supplied by your muscles, not the wall, and the energy gained comes from stored energy from the food you've eaten.
http://www.lhup.edu/~DSIMANEK/physics/horsecart.htm
How do I isolate x (or P or T...) in a formula? Rearranging equations to solve for a given variable. Equations as important geological tools. Although this may seem like magic, you don't have to be a "mathemagician" to do this. This page is designed to give you some tools to call upon to help you learn some simple steps for solving an equation for any of the variables (letters that represent the element or quantity of interest). Why should I manipulate equations? Believe it or not, there are many good reasons to develop your ability to rearrange equations that are important to the geosciences. It can save time, help you with units, and save some brain space! Here are some reasons to develop your equation manipulation skills (in no particular order):
- Equations are easier to handle before inserting numbers! And, if you can isolate a variable on one side of the equation, it is applicable to every similar problem that asks you to solve for that variable!
- If you know how to manipulate equations, you only have to remember one equation that has all the variables in question in it - you can manipulate it to solve for any other variable! This means less memorization!
- Manipulating equations can help you keep track of (or figure out) units on a number. Because units are defined by the equations, if you manipulate, plug in numbers and cancel units, you'll end up with exactly the right units (for a given variable)!
Where is this used in the geosciences? To be honest, equation manipulation occurs in almost every aspect of the geosciences. Any time you see a P or T or ρ or x (or even =), there is an equation that you could manipulate. Because equations can be used to describe lots of important natural phenomena, being able to manipulate them gives you a powerful tool for understanding the world around you! See the Practice Manipulating Equations page for just a few examples.
A Review of Important Rules for Rearranging Equations
You probably learned a number of rules for manipulating equations in a previous algebra course. It never hurts to remind ourselves of the rules. So let's review:
- RULE #1: you can add, subtract, multiply and divide by anything, as long as you do the same thing to both sides of the equals sign. In an equation, the equals sign acts like the fulcrum of a balance: if you add 5 of something to one side of the balance, you have to add the same amount to the other side to keep the balance steady. The same thing goes for an equation - doing the same operation to both sides keeps the meaning of the equation from changing. Let's use the equation for a line to illustrate an example of how to use Rule #1. The general equation for a line is y = mx + b. If we wish to solve for b in this equation, we must subtract mx from both sides. If we perform the math on each side (that is, subtract mx from mx on the right), we end up with an equation that looks like this: y - mx = b. This equation can also be written b = y - mx, if you prefer to have the solved variable on the left.
- RULE #2: to move or cancel a quantity or variable on one side of the equation, perform the "opposite" operation with it on both sides of the equation. For example, if you had g - 1 = w and wanted to isolate g, add 1 to both sides (g - 1 + 1 = w + 1). Simplify (because (-1 + 1) = 0) and end up with g = w + 1.
Let's use a more complicated equation that geologists can use to figure out the relationship of thickness to density of substances that are floating (e.g., the crust in the mantle, icebergs in water):

Habove = Htotal x (1 - ρobject/ρfluid)

where Habove = the height of the object above the surface of the fluid it is floating in, Htotal = the total height (or thickness) of the floating object, ρobject = the density of the object, and ρfluid = the density of the fluid. Let's imagine that we're studying an iceberg and we want to know what the density of that iceberg is. How do we rearrange the equation to solve for this variable? It is going to take multiple steps to isolate ρobject on one side of the equation. How do we begin?
- Let's start by isolating the part of the equation inside the parentheses. To do this, we need to divide both sides by Htotal: Habove/Htotal = 1 - ρobject/ρfluid
- We're still not quite there. What else needs to get moved to isolate ρobject? Let's isolate the fraction that contains it, so we subtract 1 from both sides: Habove/Htotal - 1 = -ρobject/ρfluid
- We still need to do a few more operations to isolate ρobject. First, multiply both sides by ρfluid to clear the fraction: ρfluid x (Habove/Htotal - 1) = -ρobject
- Then we need to get rid of the negative sign: -ρfluid x (Habove/Htotal - 1) = ρobject
- With a little rearranging of the right side of the equation, we end up with an equation to solve for the density of the iceberg! ρobject = ρfluid x (1 - Habove/Htotal)
Some simple steps for manipulating equations
Here are some simple steps for manipulating equations. Under each step you will find an example of how to do this with an example that uses the geologic context of density (a measure of mass per unit volume).
- Assess what you have (which of the variables do you have values for?, what units are present?, etc.). DO NOT plug in any numbers yet! For example: You have a cube of pyrite that is 3 cm x 3 cm x 3 cm. You know that pyrite's density is 5.02 g/cm3. Can you figure out how much that cube of pyrite weighs (without using a balance)? First, you need to know that density (ρ) is equal to mass (m) divided by volume (v). We can write this as a mathematical expression (or equation, if you prefer): ρ = m/v. Which of these values do you have in the question above? You have density (5.02 g/cm3). And with the information given you can figure out volume (length x width x height).
- Determine which of the variables you want as your answer. (What is the question asking you to calculate? What is the unknown variable?) The question above asks you to determine the mass of a pyrite cube (without weighing it, using the information given in the problem). So, in the equation for density, you want to determine "mass". Remember, don't plug anything in yet.
- Rearrange the equation so that the unknown variable is by itself on one side of the equals sign (=) and all the other variables are on the other side. RULE #1: you can add, subtract, multiply and divide by anything, as long as you do the same thing to both sides of the equals sign. Let's take the density equation, ρ = m/v, and rearrange it. We want to isolate the variable for mass (m). To do this we first multiply both sides of the equation by volume (v). We end up with an equation, m = ρ x v, that has mass isolated on one side of the equation!
- NOW plug in the numbers! Replace known variables with their values and don't forget to keep track of units! Our equation is m = ρ x v. The nice thing about this equation is that now that we've rearranged it, all of our known variables are on one side and the one we don't know is on the other.
Begin by plugging in what we know: ρ (the density of pyrite) and v (the volume, length x width x height, of the cube): m = 5.02 g/cm3 x (3 cm x 3 cm x 3 cm). Simplify the volume term by multiplying: m = 5.02 g/cm3 x 27 cm3. Cancel same units on the top and bottom (where you can) so that we end up with the units we want; here the cm3 cancel, leaving grams (if you don't understand how to do this, see the Unit Conversions module).
- Determine the value of the unknown variable by performing the mathematical functions. That is, add, subtract, multiply and divide according to the rearranged equation: m = 5.02 x 27 = 135.54 g.
- Ask yourself whether the answer is reasonable in the context of what you know about the geosciences and how much things should weigh. This is a thing that mostly takes experience. If you are unsure, you could find a balance and weigh the cube to see if you're in the right ballpark. If you're holding it in your hand, you could guess whether this seems about right. More importantly, if you get a number like 135,000 g, do you think that's reasonable? That's 135 kg (which is about 300 lbs!) and it is probably not right. What about if you get something like 0.00135 grams? It is important to be able to distinguish whether you're in the right range, more than whether you're exactly right. Another way to think about whether you're right is to find something that weighs the same from your own experience. What does 135 g feel like? Well, there are about 450 g in a pound, so 135 g is between 1/4 lb and 1/3 lb. What do you know that has a similar weight? (The first thing that comes to mind for me is burgers...) Does it make sense that a cube of pyrite (a golden metallic mineral) that is about one inch on each side would weigh that much? Use your own experience to develop a way to evaluate weights and other measures.
- I'm ready to practice! (These problems have worked answers.)
- I still need more help! (See the links below for more help with equations.)
More help with equations
Geomaths at University College London has a MathHelp page about equations and functions. The chemistry department at Texas A&M has a math review page about Algebraic Manipulation. The Economics and Business faculty at the University of Sydney has a page where you can practice your equation manipulation skills! Take the algebraic manipulation quizzes!
This page was written and compiled by Dr. Jennifer M. Wenner, Geology Department, University of Wisconsin Oshkosh and Dr. Eric M. Baer, Geology Program, Highline Community College.
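The same bookkeeping can be scripted. Here is a small Python sketch of the pyrite calculation, with the unit tracking done by hand in the comments rather than by a units library:

# Solve rho = m / v for mass: m = rho * v
rho_pyrite = 5.02            # g/cm^3
side = 3.0                   # cm
volume = side ** 3           # cm^3; 3 x 3 x 3 = 27 cm^3

mass = rho_pyrite * volume   # (g/cm^3) * cm^3 = g; the cm^3 cancel
print(f"mass = {mass:.2f} g")           # 135.54 g

# Reasonableness check: about 450 g per pound.
print(f"mass = {mass / 450:.2f} lb")    # roughly 0.3 lb, between 1/4 and 1/3 lb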
http://nagt.org/mathyouneed/equations/index.html
Using the Mean Value Theorem for Integrals. The Mean Value Theorem for Integrals guarantees that for every definite integral, a rectangle with the same area and width exists. Moreover, if you superimpose this rectangle on the definite integral, the top of the rectangle intersects the function. This rectangle, by the way, is called the mean-value rectangle for that definite integral. Its existence allows you to calculate the average value of the definite integral. Calculus boasts two Mean Value Theorems: one for derivatives and one for integrals. Here, you will look at the Mean Value Theorem for Integrals. You can find out about the Mean Value Theorem for Derivatives in Calculus For Dummies by Mark Ryan (Wiley). The best way to see how this theorem works is with a visual example: The first graph in the figure shows the region described by the definite integral. This region obviously has a width of 1, and you can evaluate it easily to find its area. The second graph in the figure shows a rectangle with a width of 1 and the same area. Because the width is 1, the rectangle's height equals that area, so the top of this rectangle intersects the original function. The fact that the top of the mean-value rectangle intersects the function is mostly a matter of common sense. After all, the height of this rectangle represents the average value that the function attains over a given interval. This value must fall someplace between the function's maximum and minimum values on that interval. Here's the formal statement of the Mean Value Theorem for Integrals: If f(x) is a continuous function on the closed interval [a, b], then there exists a number c in that interval such that:

∫[a,b] f(x) dx = f(c)(b - a)

This equation may look complicated, but it's basically a restatement of this familiar equation for the area of a rectangle: Area = Height · Width. In other words, start with a definite integral that expresses an area, and then draw a rectangle of equal area with the same width (b - a). The height of that rectangle, f(c), is such that its top edge intersects the function where x = c. The value f(c) is the average value of f(x) over the interval [a, b]. You can calculate it by rearranging the equation stated in the theorem:

f(c) = (1/(b - a)) ∫[a,b] f(x) dx

For example, here's a figure that illustrates the definite integral and its mean-value rectangle. Now, here's how you calculate the average value of the shaded area. Not surprisingly, the average value of this integral is 30, a value between the function's minimum of 8 and its maximum of 64.
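As a numeric check, the sketch below uses Python's sympy and assumes the second figure's integral is the integral of x^3 from 2 to 4; the integrand is our assumption (the figure is not reproduced here), chosen because it is consistent with the stated minimum of 8, maximum of 64, and average of 30:

import sympy as sp

x, c = sp.symbols('x c')
f = x**3                  # assumed integrand matching min 8, max 64, average 30
a, b = 2, 4

area = sp.integrate(f, (x, a, b))   # definite integral = 60
average = area / (b - a)            # f(c) = area / (b - a) = 30
print(average)

# The mean-value rectangle's top meets the curve where f(c) = average:
solutions = sp.solve(sp.Eq(c**3, average), c)
real_c = [s for s in solutions if s.is_real and a <= s <= b]
print(real_c)   # [30**(1/3)], about 3.107, inside [2, 4] as the theorem guarantees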
http://www.dummies.com/how-to/content/using-the-mean-value-theorem-for-integrals.html
Functions are the fundamental unit of program execution in any programming language. As in other languages, an F# function has a name, can have parameters and take arguments, and has a body. F# also supports functional programming constructs such as treating functions as values, using unnamed functions in expressions, composition of functions to form new functions, curried functions, and the implicit definition of functions by way of the partial application of function arguments. You define functions by using the let keyword, or, if the function is recursive, the let rec keyword combination.

// Non-recursive function definition.
let [inline] function-name parameter-list [ : return-type ] = function-body
// Recursive function definition.
let rec function-name parameter-list = recursive-function-body

The function-name is an identifier that represents the function. The parameter-list consists of successive parameters that are separated by spaces. You can specify an explicit type for each parameter, as described in the Parameters section. If you do not specify a specific argument type, the compiler attempts to infer the type from the function body. The function-body consists of an expression. The expression that makes up the function body is typically a compound expression consisting of a number of expressions that culminate in a final expression that is the return value. The return-type is a colon followed by a type and is optional. If you do not specify the type of the return value explicitly, the compiler determines the return type from the final expression. A simple function definition resembles the following:

let f x = x + 1

In the previous example, the function name is f, the argument is x, which has type int, the function body is x + 1, and the return value is of type int. The inline specifier is a hint to the compiler that the function is small and that the code for the function can be integrated into the body of the caller. At any level of scope other than module scope, it is not an error to reuse a value or function name. If you reuse a name, the name declared later shadows the name declared earlier. However, at the top-level scope in a module, names must be unique. For example, the following code produces an error when it appears at module scope, but not when it appears inside a function: But the following code is acceptable at any level of scope: Names of parameters are listed after the function name. You can specify a type for a parameter, as shown in the following example: If you specify a type, it follows the name of the parameter and is separated from the name by a colon. If you omit the type for the parameter, the parameter type is inferred by the compiler. For example, in the following function definition, the argument x is inferred to be of type int because 1 is of type int. However, the compiler will attempt to make the function as generic as possible. For example, note the following code: The function creates a tuple from one argument of any type. Because the type is not specified, the function can be used with any argument type. For more information, see Automatic Generalization (F#). A function body can contain definitions of local variables and functions. Such variables and functions are in scope in the body of the current function but not outside it.
When you have the lightweight syntax option enabled, you must use indentation to indicate that a definition is in a function body; the examples referenced throughout this passage are gathered in the sketch that follows it. The compiler uses the final expression in a function body to determine the return value and type. The compiler might infer the type of the final expression from previous expressions. In the function cylinderVolume, the type of the local value pi is determined from the type of the literal 3.14159 to be float. The compiler uses the type of pi to determine the type of the expression h * pi * r * r to be float. Therefore, the overall return type of the function is float.

You can also specify the return type explicitly. Written that way, the compiler applies float to the entire function; if you mean to apply it to the parameter types as well, annotate the parameters too.

If you supply fewer than the specified number of arguments, you create a new function that expects the remaining arguments. This method of handling arguments is referred to as currying and is a characteristic of functional programming languages like F#. For example, suppose you are working with two sizes of pipe: one has a radius of 2.0 and the other has a radius of 3.0. You could create functions that determine the volume of each size of pipe by partially applying the radius, and then supply the remaining argument as needed for various lengths of pipe of the two different sizes.

Recursive functions are functions that call themselves. They require that you specify the rec keyword following the let keyword. Invoke the recursive function from within the body of the function just as you would invoke any function call. The recursive function in the sketch computes the nth Fibonacci number. The Fibonacci number sequence has been known since antiquity and is a sequence in which each successive number is the sum of the previous two numbers in the sequence. Some recursive functions might overflow the program stack or perform inefficiently if you do not write them with care and with awareness of special techniques, such as the use of accumulators and continuations.

In F#, all functions are considered values; in fact, they are known as function values. Because functions are values, they can be used as arguments to other functions or in other contexts where values are used. The sketch includes a function, apply1, that takes a function value as an argument. You specify the type of a function value by using the -> token. On the left side of this token is the type of the argument, and on the right side is the return value. In the example, apply1 is a function that takes a function transform as an argument, where transform is a function that takes an integer and returns another integer. The value of result will be 101 after the code runs. Multiple arguments are separated by successive -> tokens; in the apply2 example, the result is 200.

A lambda expression is an unnamed function. In the previous examples, instead of defining named functions such as increment and mul, you could use lambda expressions. You define lambda expressions by using the fun keyword. A lambda expression resembles a function definition, except that instead of the = token, the -> token is used to separate the argument list from the function body.
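A sketch of the examples this passage refers to; the names (cylinderVolume, fib, apply1, increment, apply2, mul) follow the prose, while the exact literals are illustrative:

// Indentation marks the function body; pi is a local value, and the
// literal 3.14159 makes pi (and therefore the return type) float.
let cylinderVolume r h =
    let pi = 3.14159
    h * pi * r * r

// Explicit return type only; the parameter types are still inferred.
let cylinderVolume2 r h : float =
    let pi = 3.14159
    h * pi * r * r

// Explicit parameter types as well.
let cylinderVolume3 (r : float) (h : float) : float =
    let pi = 3.14159
    h * pi * r * r

// Currying: partial application fixes the radius, leaving a function of length.
let smallPipeVolume = cylinderVolume 2.0
let bigPipeVolume   = cylinderVolume 3.0
let volume1 = smallPipeVolume 30.0
let volume2 = bigPipeVolume 50.0

// A recursive function: the nth Fibonacci number.
let rec fib n =
    if n < 2 then 1
    else fib (n - 1) + fib (n - 2)

// A function value as an argument: transform has type int -> int.
let apply1 (transform : int -> int) = transform 100
let increment x = x + 1
let result = apply1 increment                    // 101

// Multiple arguments: f has type int -> int -> int.
let apply2 (f : int -> int -> int) x y = f x y
let mul x y = x * y
let result2 = apply2 mul 10 20                   // 200

// The same calls written with lambda expressions.
let result3 = apply1 (fun x -> x + 1)            // 101
let result4 = apply2 (fun x y -> x * y) 10 20    // 200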
As in a regular function definition, the argument types can be inferred or specified explicitly, and the return type of the lambda expression is inferred from the type of the last expression in the body. For more information, see Lambda Expressions: The fun Keyword (F#).

Functions in F# can be composed from other functions. The composition of two functions function1 and function2 is another function that represents the application of function1 followed by the application of function2; composing an add-one function with a doubling function and applying the result to 100 yields 202 (see the reconstruction after the listing below). Pipelining enables function calls to be chained together as successive operations; piping 100 through the same two steps again yields 202.

The composition operators take two functions and return a function; by contrast, the pipeline operators take a function and an argument and return a value. The following code example shows the difference between the pipeline and composition operators by showing the differences in the function signatures and usage.

// Function composition and pipeline operators compared.
let addOne x = x + 1
let timesTwo x = 2 * x

// Forward composition operator
// ( >> ) : ('T1 -> 'T2) -> ('T2 -> 'T3) -> 'T1 -> 'T3
let Compose2 = addOne >> timesTwo

// Backward composition operator
// ( << ) : ('T2 -> 'T3) -> ('T1 -> 'T2) -> 'T1 -> 'T3
let Compose1 = addOne << timesTwo

// Result is 5
let result1 = Compose1 2
// Result is 6
let result2 = Compose2 2

// Pipelining

// Backward pipeline operator
// ( <| ) : ('T -> 'U) -> 'T -> 'U
let Pipeline1 x = addOne <| timesTwo x

// Forward pipeline operator
// ( |> ) : 'T1 -> ('T1 -> 'U) -> 'U
let Pipeline2 x = addOne x |> timesTwo

// Result is 5
let result3 = Pipeline1 2
// Result is 6
let result4 = Pipeline2 2

You can overload methods of a type but not functions. For more information, see Methods (F#).
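The 202 results mentioned above, reconstructed under the assumption (consistent with those results) that function1 adds 1 and function2 doubles:

let function1 x = x + 1
let function2 x = x * 2

// Composition: apply function1, then function2.
let h = function1 >> function2
let result5 = h 100                          // 202

// Pipelining: feed 100 through the same two steps.
let result6 = 100 |> function1 |> function2  // 202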
http://msdn.microsoft.com/en-us/library/dd233229.aspx
13
73
A famous urban legend states that a penny dropped from the top of the Empire State Building will punch a hole in the sidewalk below. Given the height of the building and the hardness of the penny, that seems like a reasonable possibility. Whether it's true or not is a matter that can be determined scientifically. Before we do that, though, let's get some background.

Falling rocks can be dangerous and, the farther they fall, the more dangerous they become. Falling raindrops, snowflakes, and leaves, however, are harmless no matter how far they fall. The distinction between those two possibilities has nothing to do with gravity, which causes all falling objects to accelerate downward at the same rate. The difference is entirely due to air resistance. Air resistance—technically known as drag—is the downwind force an object experiences as air moves past it. Whenever an object moves through the air, the two invariably push on one another and they exchange momentum. The object acts to drag the air along with it and the air acts to drag the object along with it, action and reaction. Those two aerodynamic forces affect the motions of the object and air, and are what distinguish falling snowflakes from falling rocks.

Two types of drag force affect falling objects: viscous drag and pressure drag. Viscous drag is the friction-like effect of having the air rub across the surface of the object. Though important to smoke and dust particles in the air, viscous drag is too weak to affect larger objects significantly. In contrast, pressure drag strongly affects most large objects moving through the air. It occurs when airflow traveling around the object breaks away from the object's surface before reaching the back of the object. That separated airflow leaves a turbulent wake behind the object—a pocket of air that the object is effectively dragging along with it. The wider this turbulent wake, the more air the object is dragging and the more severe the pressure drag force.

The airflow separation occurs as the airflow is attempting to travel from the sides of the object to the back of the object. At the sides, the pressure in the airflow is especially low as the air bends to arc around the sides; Bernoulli's equation is frequently invoked to help explain the low air pressure near the sides of the object. As this low-pressure air continues toward the back of the object, where the pressure is much greater, the airflow is moving into rising pressure and is pushed backward. It is decelerating. Because of inertia, the airflow could be expected to reach the back of the object anyway. However, the air nearest the object's surface—boundary layer air—rubs on that surface and slows down. This boundary layer doesn't quite make it to the back of the object. Instead, it stops moving and consequently forms a wedge that shaves much of the airflow off of the back of the object. A turbulent wake forms and the object begins to drag that wake along with it. The airflow and object are then pushing on one another with the forces of pressure drag.

Those pressure drag forces depend on the amount of air in the wake and the speed at which the object is dragging the wake through the passing air. In general, the drag force on the object is proportional to the cross sectional area of its wake and the square of its speed through the air. The broader its wake and the faster it moves, the bigger the drag force it experiences. We're ready to drop the penny.
When we first release it at the top of the Empire State Building, it begins to accelerate downward at 9.8 meters-per-second²—the acceleration due to gravity—and starts to move downward. If no other force appeared, the penny would move according to the equations of motion for constant downward acceleration, taught in most introductory physics classes. It would continue to accelerate downward at 9.8 meters-per-second², meaning that its downward velocity would increase steadily until the moment it hit the sidewalk. At that point, it would be traveling downward at approximately 209 mph (336 km/h) and it would do some damage to the sidewalk.

That analysis, however, ignores pressure drag. Once the penny is moving downward through the air, it experiences an upward pressure drag force that affects its motion. Instead of accelerating downward in response to its weight alone, the penny now accelerates in response to the sum of two forces: its downward weight and the upward drag force. The faster the penny descends through the air, the stronger the drag force becomes and the more that upward force cancels the penny's downward weight. At a certain downward velocity, the upward drag force on the penny exactly cancels the penny's weight and the penny no longer accelerates. Instead, it descends steadily at a constant velocity, its terminal velocity, no matter how much farther it drops.

The penny's terminal velocity depends primarily on two things: its weight and the cross sectional area of its wake. A heavy object that leaves a narrow wake will have a large terminal velocity, while a light object that leaves a broad wake will have a small terminal velocity. Big rocks are in the first category; raindrops, snowflakes, and leaves are in the second. Where does a penny belong? It turns out that a penny is more like a leaf than a rock. The penny tumbles as it falls and produces a broad turbulent wake. For its weight, it drags an awful lot of air behind it. As a result, it reaches terminal velocity at only about 25 mph (40 km/h).

To prove that, I studied pennies fluttering about in a small vertical wind tunnel. Whether the penny descends through stationary air or the penny hovers in rising air, the physics is the same. Of course, it's much more convenient in the laboratory to observe the hovering penny interacting with rising air. Using a fan and plastic pipe, I created a rising stream of air and inserted a penny into that airflow. At low air speeds, the penny experienced too little upward drag force to cancel its weight. The penny therefore accelerated downward and dropped to the bottom of the wind tunnel. At high air speeds, the penny experienced such a strong upward drag force that it blew out of the wind tunnel. When the air speed was just right, the penny hovered in the wind tunnel. The air speed was then approximately 25 mph (40 km/h). That is the terminal velocity of a penny.

The penny tumbles in the rising air. It is aerodynamically unstable, meaning that it cannot maintain a fixed orientation in the passing airstream. Because the aerodynamic forces act mostly on the upstream side of the penny, they tend to twist that side of the penny downstream. Whichever side of the penny is upstream at one moment soon becomes the downstream side, and the penny tumbles. As a result of this tumbling, the penny disturbs a wide swath of air and leaves a broad turbulent wake. It experiences severe pressure drag and has a low terminal velocity.
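To put numbers to both speeds (the building height, penny parameters, and drag coefficient below are plausible assumptions of mine, not figures from the article): with no drag, a fall from the building's full height of roughly $h \approx 443$ m, antenna included, would reach

$$v = \sqrt{2gh} = \sqrt{2 \times 9.8\ \text{m/s}^2 \times 443\ \text{m}} \approx 93\ \text{m/s} \approx 209\ \text{mph},$$

while setting weight equal to pressure drag for a tumbling penny ($m \approx 2.5$ g, frontal area $A \approx 2.85\ \text{cm}^2$, air density $\rho \approx 1.2\ \text{kg/m}^3$, assumed drag coefficient $C_d \approx 1.5$) gives a terminal velocity of

$$v_t = \sqrt{\frac{2mg}{\rho C_d A}} = \sqrt{\frac{2 \times 0.0025 \times 9.8}{1.2 \times 1.5 \times 2.85 \times 10^{-4}}} \approx 10\ \text{m/s} \approx 22\ \text{mph},$$

in reasonable agreement with the measured 25 mph.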
The penny is an example of an aerodynamically blunt object—one in which the low-pressure air arcing around its sides runs into the rapidly increasing pressure behind it and separates catastrophically to form a vast wake. The opposite possibility is an aerodynamically streamlined object—one in which the increasing pressure beyond the object's sides is so gradual that the airflow never separates and no turbulent wake forms. A penny isn't streamlined, but a ballpoint pen could be.

Almost any ballpoint pen is less blunt than a penny and some pens are approximately streamlined. Moreover, pens weigh more than pennies and that fact alone favors a higher terminal velocity. With a larger downward force (weight) and a smaller upward force (drag), the pen accelerates to a much greater terminal velocity than the penny. If it is so streamlined that it leaves virtually no wake, like the aerofoil shapes typical of airplane components, it will have an extraordinarily large terminal velocity—perhaps several hundred miles per hour.

Some pens tumble, however, and that spoils their ability to slice through the air. To avoid tumbling, a pen must "weathervane"—it must experience most of its aerodynamic forces on its downstream side, behind its center of mass. Arrows and small rockets have fletching or fins to ensure that they travel point first through the air. A ballpoint pen can achieve that same point-first flight if its shape and center of mass are properly arranged.

Almost any ballpoint pen dropped into my wind tunnel plummeted to the bottom. I was unable to make the air rise fast enough to observe hovering behavior in those pens. Whether they would tend to tumble in the open air was difficult to determine because of the tunnel's narrowness. Nonetheless, it's clear that a heavy, streamlined, and properly weighted pen dropped from the Empire State Building would still be accelerating downward when it reached the sidewalk. Its speed would be close to 209 mph at that point and it would indeed damage the sidewalk.

As a final test of the penny's low terminal velocity, I built a radio-controlled penny dropper and floated it several hundred feet in the air with a helium-filled weather balloon. On command, the dropper released penny after penny and I tried to catch them as they fluttered to the ground. Alas, I never managed to catch one properly in my hands. It was a somewhat windy day and the ground at the local park was uneven, but that's hardly an excuse—I'm simply not good at catching things in my hands. Several of the pennies did bounce off my hands and one even bounced off my head. It was fun and I was more in danger of twisting my ankle than of getting pierced by a penny. The pennies descended so slowly that they didn't hurt at all. Tourists below the Empire State Building have nothing to fear from falling pennies. Watch out, however, for some of the more streamlined objects that might make that descent.

If by smart meters you mean the devices that monitor power usage and possibly adjust power consumption periodically, then I don't see how they can affect health. Their communications with the smart grid are of no consequence to human health and having the power adjusted on household devices is unlikely to be a health issue (unless they cut off your power during a blizzard or a deadly heat wave). The radiated power from all of these wireless communications devices is so small that we have yet to find mechanisms whereby they could cause significant or lasting injury to human tissue.
If there is any such mechanism, the effects are so weak that the risks associated with it are dwarfed by much more significant risks of wireless communication: the damage to traditional community, the decline of ordinary human interaction, and the surge in distracted driving.

The Japanese did stop the chain reactions in the Fukushima Daiichi reactors, even before the tsunami struck the plant. The problem that they're having now is not the continued fissioning of uranium, but rather the intense radioactivity of the uranium daughter nuclei that were created while the chain reactions were underway. Those radioactive fission fragments are spontaneously decaying now and nothing can stop that natural decay. All they can do is try to contain those radioactive nuclei, keep them from overheating, and wait for them to decay into stable pieces.

The uranium atom has the largest naturally occurring nucleus. It contains 92 protons, each of which is positively charged, and those 92 like charges repel one another ferociously. Although the nuclear force acts to bind protons together when they touch, the repulsion of 92 protons alone would be too much for the nuclear force—the protons would fly apart in almost no time. To dilute the electrostatic repulsion of those protons, each uranium nucleus contains a large number of uncharged neutrons. Like protons, neutrons experience the attractive nuclear force. But unlike protons, neutrons don't experience the repulsive electrostatic force. Two neutron-rich combinations of protons and neutrons form extremely long-lived uranium nuclei: uranium-235 (92 protons, 143 neutrons) and uranium-238 (92 protons, 146 neutrons). Each uranium nucleus attracts an entourage of 92 electrons to form a stable atom and, since the electrons are responsible for the chemistry of an atom, uranium-235 and uranium-238 are chemically indistinguishable.

When the thermal fission reactors of the Fukushima Daiichi plant were in operation, fission chain reactions were shattering the uranium-235 nuclei into fragments. Uranium-238 is more difficult to shatter and doesn't participate much in the reactor's operation. On occasion, however, a uranium-238 nucleus captures a neutron in the reactor and transforms sequentially into uranium-239, neptunium-239, and then plutonium-239. The presence of plutonium-239 in the used fuel rods is one of the problems following the accident.

The main problem, however, is that the shattered fission fragment nuclei in the used reactor fuel are overly neutron-rich, a feature inherited from the neutron-rich uranium-235 nuclei themselves. Midsize nuclei, such as iodine (with 53 protons), cesium (with 55 protons), and strontium (with 38 protons), don't need as many neutrons to dilute out the repulsions between their protons. While fission of uranium-235 can produce daughter nuclei with 53 protons, 55 protons, or 38 protons, those fission-fragment versions of iodine, cesium, and strontium nuclei have too many neutrons and are therefore unstable—they undergo radioactive decay. Their eventual decay has nothing to do with chain reactions and it cannot be prevented.

How quickly these radioactive fission fragment nuclei decay depends on exactly how many protons and neutrons they have. Three of the most common and dangerous nuclei present in the used fuel rods are iodine-131 (8-day half-life), cesium-137 (30-year half-life), and strontium-90 (29-year half-life). Plutonium-239 (24,200-year half-life) is also present in those rods.
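Those half-lives translate into the usual exponential decay law. As a worked instance (my numbers, not the author's): after 90 years, three cesium-137 half-lives have passed, so only an eighth of the original cesium-137 remains:

$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad N(90\ \text{yr}) = N_0 \left(\tfrac{1}{2}\right)^{90/30} = \tfrac{N_0}{8}.$$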
When these radioactive nuclei are absorbed into the body and then undergo spontaneous radioactive decay, they damage molecules and therefore pose a cancer risk. Our bodies can't distinguish the radioactive versions of these chemical elements from the nonradioactive ones, so all we can do to minimize our risk is to avoid exposure to them or to encourage our bodies to excrete them by saturating our bodies with stable versions.

By asking me to "neglect possible discharges," you're asking me to neglect what actually happens. There will be a discharge, specifically a phenomenon known as "field emission." If you neglect that discharge, then yes, the sphere can in principle store an unlimited amount of charge. But en route to infinity, I will have had to ignore several other exotic discharges and then the formation of a black hole.

What will really happen is a field emission discharge. The repulsion between like charges will eventually become so strong that those charges will push one another out of the metal and into the vacuum, so that charges will begin to stream outward from the metal sphere.

Another way to describe that growing repulsion between like charges involves fields. An electric charge is surrounded by a structure in space known as an electric field. An electric field exerts forces on electric charges, so one electric charge pushes on other electric charges by way of its electric field. As more and more like charges accumulate on the sphere, their electric fields overlap and add so that the overall electric field around the sphere becomes stronger and stronger. The charges on the sphere feel that electric field, but they are bound to the metal sphere by chemical forces and it takes energy to pluck one of them away from the metal. Eventually, the electric field becomes so strong that it can provide the energy needed to detach a charge from the metal surface. The work done by the field as it pushes the charge away from the sphere supplies the necessary energy and the charge leaves the sphere and heads out into the vacuum. The actual detachment process involves a quantum physics phenomenon known as tunneling, but that's another story.

The amount of charge the sphere can store before field emission begins depends on the radius of the sphere and on whether the charge is positive or negative. The smaller that radius, the faster the electric field increases and the sooner field emission starts. It's also easier to field-emit negative charges (as electrons) than it is to field-emit positive charges (as ions), so a given sphere will be able to hold more positive charge than negative charge.

Modern brushless DC motors are amazing devices that can handle torque reversals instantly. In fact, they can even generate electricity during those reversals! Instant reversals of direction, however, aren't physically possible (because of inertia) and aren't actually what your friend wants anyway. I'll say more about the distinction between torque reversals and direction reversals in a minute.

In general, a motor has a spinning component called the rotor that is surrounded by a stationary component called the stator. The simplest brushless DC motor has a rotor that contains permanent magnets and a stator that consists of electromagnets. The magnetic poles on the stator and rotor can attract or repel one another, depending on whether they are like or opposite poles—like poles repel; opposite poles attract.
Since the electronics powering the stator's electromagnets can choose which of the stator's poles are north and which are south, those electronics determine the forces acting on the rotor's poles and therefore the direction of torque on the rotor. To twist the rotor forward, the electronics make sure that the stator's poles are always acting to pull or push the rotor's poles in the forward direction so that the rotor experiences forward torque. To twist the rotor backward, the electronics reverse all those forces.

Just because you reverse the direction of torque on the rotor doesn't mean that the rotor will instantly reverse its direction of rotation. The rotor (along with the rider of the scooter) has inertia and it takes time for the rotor to slow to a stop and then pick up speed in the opposite direction. More specifically, a torque causes angular acceleration; it doesn't cause angular velocity. During that reversal process, the rotor is turning in one direction while it is being twisted in the other direction. The rotor is slowing down and it is losing energy, so where is that energy going? It's actually going into the electronics, which can use that electricity to recharge the batteries. The "motor" is acting as a "generator" during the slowing half of the reversal!

That brushless DC motors are actually motor/generators makes them fabulous for electric vehicles of all types. They consume electric power while they are making a vehicle speed up, but they generate electric power while they are slowing a vehicle down. That's the principle behind regenerative braking—the vehicle's kinetic energy is used to recharge the batteries during braking.

With suitable electronics, your friend's electric scooter can take advantage of the elegant interplay between electric power and mechanical power that brushless DC motors make possible. Those motors can handle torque reversals easily and they can even save energy in the process. There are limits, however, to the suddenness of some of the processes because huge flows of energy necessitate large voltages and powers in the motor/generators and their electronics. The peak power and voltage ratings of all the devices come into play during the most abrupt and strenuous changes in the motion of the scooter. If your friend wants to be able to go from 0 to 60 or from 60 to 0 in the blink of an eye, the motor/generators and their electronics will have to handle big voltages and powers.

Although that sounds like a simple question, it has a complicated answer. Gravity does affect light, but it doesn't affect light's speed. In empty space, light is always observed to travel at "The Speed of Light." But that remark hides a remarkable result: although two different observers will agree on how fast light is traveling, they may disagree in their perceptions of space and time.

When those observers are in motion relative to one another, they'll certainly disagree about the time and distance separating two events (say, two firecrackers exploding at separate locations). For modest relative velocities, their disagreement will be too small to notice. But as their relative motion increases, that disagreement will become substantial. That is one of the key insights of Einstein's special theory of relativity. But even when two observers are not moving relative to one another, gravity can cause them to disagree about the time and distance separating two events. When those observers are in different gravitational circumstances, they'll perceive space and time differently.
That effect is one of the key insights of Einstein's general theory of relativity. Here is a case in point: suppose two observers are in separate spacecraft, hovering motionless relative to the sun, and one observer is much closer to the sun than the other. The closer observer has a laser pointer that emits a green beam toward the farther observer. Both observers will see the light pass by and measure its speed. They'll agree that the light is traveling at "The Speed of Light". But they will not agree on the exact frequency of the light. The farther observer will see the light as slightly lower in frequency (redder) than the closer observer. Similarly, if the farther observer sends a laser pointer beam toward the closer observer, the closer observer will see the light as slightly higher in frequency (bluer) than the farther observer.

How can these two observers agree on the speed of the beams but disagree on their frequencies (and colors)? They perceive space and time differently! Time is actually passing more slowly for the closer observer than for the farther observer. If they look carefully at each other's watches, the farther observer will see the closer observer's watch running slow and the closer observer will see the farther observer's watch running fast. The closer observer is actually aging slightly more slowly than the farther observer. These effects are usually very subtle and difficult to measure, but they're real.

The global positioning system relies on ultra-precise clocks that are carried around the earth in satellites. Those satellites move very fast relative to us and they are farther from the earth's center and its gravity than we are. Both differences affect how time passes for those satellites, and the engineers who designed and operate the global positioning system have to make corrections for the time-space effects of special and general relativity.

Liquid water can evaporate to form gaseous water (i.e., steam) at any temperature, not just at its boiling temperature of 212 °F. The difference between normal evaporation and boiling is that, below water's boiling temperature, evaporation occurs primarily at the surface of the liquid water, whereas at or above water's boiling temperature, bubbles of pure steam become stable within the liquid and water can evaporate especially rapidly into those bubbles. So boiling is just a rapid form of evaporation.

What you are actually seeing when raindrops land on warm surfaces is tiny water droplets in the air, a mist of condensation. Those droplets form in a few steps. First, the surface warms a raindrop and speeds up its evaporation. Second, a small portion of warm, especially moist air rises upward from the evaporating raindrop. Third, that portion of warm moist air cools as it encounters air well above the warmed surface. The sudden drop in temperature causes the moist air to become supersaturated with moisture—it now contains more water vapor than it can retain at equilibrium. The excess moisture condenses to form tiny water droplets that you see as a mist.

This effect is particularly noticeable when it's raining because the humidity in the air is already very near 100%. The extra humidity added when the warmed raindrops evaporate is able to remain gaseous only in warmed air. Once that air cools back to the ambient temperature, the moisture must condense back out of it, producing the mist.

Solid ice is less dense than liquid water, meaning that a liter of ice has less mass (and weighs less) than a liter of water.
Any object that is less dense than water will float at the surface of water, so ice floats. That lower-density objects float on water is a consequence of Archimedes' principle: when an object displaces a fluid, it experiences an upward buoyant force equal in amount to the weight of the displaced fluid. If you submerge a piece of ice completely in water, that piece of ice will experience an upward buoyant force that exceeds the ice's weight, because the water it displaces weighs more than the ice itself. The ice then experiences two forces: its downward weight and the upward buoyant force from the water. Since the upward force is stronger than the downward force, the ice accelerates upward. It rises to the surface of the water, bobs up and down a couple of times, and then settles at equilibrium.

At that equilibrium, the ice is displacing a mixture of water and air. Amazingly enough, that mixture weighs exactly as much as the ice itself, so the ice now experiences zero net force. That's why it's at equilibrium and why it can remain stationary. It has settled at just the right height to displace its weight in water and air.

As for why ice is less dense than water, that has to do with the crystal structure of solid ice and the more complicated structure of liquid water. Ice's crystal structure is unusually spacious and it gives the ice crystals their surprisingly low density. Water's structure is more compact and dense. This arrangement, with solid water less dense than liquid water, is almost unique in nature. Most solids are denser than their liquids, so that they sink in their liquids.

The electric circuit that powers your lamp extends only as far as a nearby transformer. That transformer is located somewhere near your house, probably as a cylindrical object on a telephone pole down the street or as a green box on a side lawn a few houses away. A transformer conveys electric power from one electric circuit to another. It performs this feat using several electromagnetic effects associated with changing electric currents—changes present in the alternating current of our power grid. In this case, the transformer is moving power from a high-voltage neighborhood circuit to a low-voltage household circuit.

For safety, household electric power uses relatively low voltages, typically 120 volts in the US. But to deliver significant amounts of power at such low voltages, you need large currents. It's analogous to delivering hydraulic power at low pressures; low pressures are nice and safe, but you need large amounts of hydraulic fluid to carry much power. There is a problem, however, with sending low-voltage electric power long distances: it's inefficient, because wires waste power as heat in proportion to the square of the electric current they carry. Using our analogy again, sending hydraulic power long distances as a large flow of hydraulic fluid at low pressure is wasteful; the fluid will rub against the pipes and waste power as heat.

To send electric power long distances, you do better to use high voltages and small currents (think high pressure and small flows of hydraulic fluid). That requires being careful with the wires because high voltages are dangerous, but it is exactly how electric power travels cross-country in the power grid: very high voltages on transmission lines that are safely out of reach. Finally, to move power from the long-distance high-voltage transmission wires to the short-distance low-voltage household wires, they use transformers.
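That square-law waste is worth making concrete (a standard relation, not a figure from the original): for a fixed delivered power P = IV, raising the voltage lowers the current needed, and the resistive loss in a line of resistance R falls with the square of that current,

$$P_{\text{loss}} = I^2 R = \left(\frac{P}{V}\right)^2 R,$$

so raising the transmission voltage by a factor of ten cuts the loss in the same wires by a factor of one hundred.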
The long-distance circuit that carries power to your neighborhood closes on one side of the transformer and the short-distance circuit that carries power to your lamp closes on the other side of the transformer. No electric charges pass between those two circuits; they are electrically insulated from one another inside the transformer. The electric charges that are flowing through your lamp go round and round that little local circuit, shuttling from the transformer to your lamp and back again.

The f-number of a lens measures the brightness of the image that lens casts onto the camera's image sensor. Smaller f-numbers produce brighter images, but they also yield smaller depths of focus. The f-number is actually the ratio of the lens' focal length to its effective diameter (the diameter of the light beam it collects and uses for its image). Your zoom lens has a focal length that can vary from 70 to 300 mm and a minimum f-number of 5.6. That means that when it is acting as a 300 mm telephoto lens, its effective light-gathering surface is about 53 mm in diameter (300 mm divided by 5.6 gives a diameter of 53 mm). If you examine the lens, I think that you'll find that the front optical element is about 53 mm in diameter; the lens is using that entire surface to collect light when it is acting as a 300 mm lens at f-5.6.

But when you zoom to lower focal lengths (less extreme telephoto), the lens uses less of the light entering its front surface. Similarly, when you dial a higher f-number, you are closing a mechanical diaphragm that is strategically located inside the lens and causing the lens to use less light. It's easy for the lens to increase its f-number by throwing away light arriving near the edges of its front optical element, but the lens can't decrease its f-number below 5.6; it can't create additional light-gathering surface. Very low f-number lenses, particularly telephoto lenses with their long focal lengths, need very large diameter front optical elements. They tend to be big, expensive, and heavy.

Smaller f-numbers produce brighter images, but there is a cost to that brightness. With more light rays entering the lens and focusing onto the image sensor, the need for careful focusing becomes greater. The lower the f-number, the wider the spread of directions in which those rays travel and the harder it is to get them all to converge properly on the image sensor. At low f-numbers, only rays from a specific distance converge to sharp focus on the image sensor; rays from objects that are too close or too far from the lens don't form sharp images and appear blurry.

If you want to take a photograph in which everything, near and far, is essentially in perfect focus, you need to use a large f-number. The lens will form a dim image and you'll need to take a relatively long exposure, but you'll get a uniformly sharp picture. But if you're taking a portrait of a person and you want to blur the background so that it doesn't detract from the person's face, you'll want a small f-number. The preferred portrait lenses are moderately telephoto—they allow you to back up enough that the person's face doesn't bulge out at you in the photograph—and they have very low f-numbers—their large front optical elements gather lots of light and yield a very shallow depth of focus.
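For reference, the arithmetic above follows directly from the definition of the f-number N in terms of the focal length f and the effective aperture diameter D:

$$N = \frac{f}{D} \qquad\Longrightarrow\qquad D = \frac{f}{N} = \frac{300\ \text{mm}}{5.6} \approx 54\ \text{mm}.$$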
http://howeverythingworks.org/
13
57
Science Fair Project Encyclopedia

In mathematics, a logarithm of x with base b may be defined as follows: for the equation b^n = x, the logarithm is a function which gives n. This function is written as n = log_b(x). Logarithms tell how many times a number x must be divided by the base b to get 1. For example, log_3(81) = 4 because 3^4 = 81.

Logarithms are one of three functions that can be used to solve the equation b^n = x for any variable, given the other two. The others are the radical, which can be used to find b (b is the nth root of x), and the exponential function, which can be used to find x (x is the nth power of b). The logarithm functions are the inverses of the exponential functions.

Logarithms convert multiplication to addition, division to subtraction (making them isomorphisms between the field operations), exponentiation to multiplication, and roots to division (making logarithms crucial to slide rule construction). An antilogarithm is used to show the inverse of the logarithm. It is written antilog_b(n) and means the same as b^n.

A double logarithm is the inverse function of the double-exponential function. A super-logarithm or hyper-logarithm is the inverse function of the super-exponential function. The super-logarithm of x grows even more slowly than the double logarithm for large x.

In the theory of finite groups there is a related notion of discrete logarithm. For some finite groups, it is believed that the discrete logarithm is very hard to calculate, whereas discrete exponentials are quite easy. This asymmetry has applications in cryptography.

Logarithms are useful for solving equations in which the unknown appears in the exponent, and they often occur as the solution of differential equations because of their simple derivatives. Furthermore, various quantities in science are expressed by their logarithms; see logarithmic scale for an explanation and a list.

For integers b and x, the number log_b(x) is irrational (i.e., not a quotient of two integers) if one of b and x has a prime factor which the other does not (and in particular if they are coprime and both greater than 1).

When logarithms are used repeatedly in a work, one base (b in b^n = x) is usually fixed. This allows writing log(x) instead of repetitively writing the longer log_b(x). So, in a system of logarithms of which 8 is the base:

log(8) = 1        antilog(1) = 8
log(64) = 2       antilog(2) = 64
log(512) = 3      antilog(3) = 512
log(4096) = 4     antilog(4) = 4096

Change of base

One's choice of base with logarithms is not crucial, because a logarithm can be converted from one base to another quite easily. For example, to calculate the value of a logarithm of a base other than 10, given a table or calculator that can only handle base 10, the following formula changes the base to any chosen base k (assuming that b, x, and k are all positive real numbers and that b ≠ 1 and k ≠ 1):

$$\log_b(x) = \frac{\log_k(x)}{\log_k(b)}$$

Letting k = x gives

$$\log_b(x) = \frac{1}{\log_x(b)}$$

To see why this is the case, consider the following equations:

$$\begin{aligned} b^{\log_b(x)} &= x \\ \log_k\!\left(b^{\log_b(x)}\right) &= \log_k(x) && \text{take logs on both sides} \\ \log_b(x)\,\log_k(b) &= \log_k(x) && \text{simplify the left-hand side} \\ \log_b(x) &= \frac{\log_k(x)}{\log_k(b)} && \text{divide by } \log_k(b) \end{aligned}$$

Relationships between binary, natural and common logarithms

In particular we have:
- log2(e) ≈ 1.44269504
- log2(10) ≈ 3.32192809
- loge(10) ≈ 2.30258509
- loge(2) ≈ 0.693147181
- log10(2) ≈ 0.301029996
- log10(e) ≈ 0.434294482

A curious coincidence is the approximation log2(x) ≈ log10(x) + ln(x), accurate to about 99.4% or 2 significant digits; this is because 1/ln(2) − 1/ln(10) ≈ 1 (in fact 1.0084...).
The property can be checked against the conversion factors above: log2(10) ≈ log10(10) + loge(10) (3.3219 ≈ 1 + 2.3026) and log2(e) ≈ log10(e) + loge(e) (1.4427 ≈ 0.4343 + 1).

This comes on top of the reciprocal relations among those six factors: log2(e) = 1/loge(2), log2(10) = 1/log10(2), and loge(10) = 1/log10(e).

Another interesting coincidence is that log10(2) ≈ 0.3 (the actual value is about 0.301029995); this corresponds to the fact that, with an error of only 2.4%, 2^10 ≈ 10^3 (i.e. 1024 is about 1000; see also Binary prefix).

Applications in calculus

To calculate the derivative of a logarithmic function, the following formula is used:

$$\frac{d}{dx}\log_b(x) = \frac{1}{x \ln(b)}$$

where ln is the natural logarithm, i.e. the logarithm with base e. Letting b = e:

$$\frac{d}{dx}\ln(x) = \frac{1}{x}$$

One can then see that the following formula gives the integral of a logarithm:

$$\int \log_b(x)\,dx = x \log_b(x) - \frac{x}{\ln(b)} + C$$

To calculate log_b(x) if b and x are rational numbers and x ≥ b > 1: let n0 be the largest natural number such that b^n0 ≤ x (or, alternately, n0 = ⌊log_b(x)⌋). Then

$$\log_b(x) = n_0 + \log_b\!\left(\frac{x}{b^{n_0}}\right),$$

and since 1 ≤ x/b^n0 < b, the remaining logarithm can be computed by applying the same procedure with the roles of base and argument exchanged, using log_b(y) = 1/log_y(b). The logarithms produced are irrational for most inputs. This algorithm works because

$$b^{\,n_0 + \log_b(x/b^{n_0})} = b^{n_0}\cdot\frac{x}{b^{n_0}} = x.$$

To use irrational numbers as inputs, apply the algorithm to successively detailed rational approximations; the limit of the resulting sequence converges to the actual result.

Joost Bürgi, a Swiss clockmaker in the employ of the Duke of Hesse-Kassel, first conceived of logarithms, though his tables were not published until 1620. The method of natural logarithms was first propounded in 1614, in a book entitled Mirifici Logarithmorum Canonis Descriptio, by John Napier (c. 1550 - 1618; Latinized Neperus), Baron of Merchiston in Scotland. This method contributed to the advance of science, and especially of astronomy, by making some difficult calculations possible. Prior to the advent of calculators and computers, it was constantly used in surveying, navigation, and other branches of practical mathematics. Besides their usefulness in computation, logarithms also fill an important place in the higher theoretical mathematics.

At first, Napier called logarithms "artificial numbers" and antilogarithms "natural numbers". Later, Napier formed the word logarithm, a compound of Greek roots, to mean a number that indicates a ratio: λόγος (logos) meaning ratio, and ἀριθμός (arithmos) meaning number. Napier chose that because the difference of two logarithms determines the ratio of the numbers for which they stand, so that an arithmetic series of logarithms corresponds to a geometric series of numbers. The term antilogarithm was introduced in the 1800s and, while convenient, its use was never widespread.

Tables of logarithms

Prior to the advent of computers and calculators, using logarithms meant using tables of logarithms, which had to be created manually. Base-10 logarithms are useful in computations when electronic means are not available. See common logarithm for details, including the use of characteristics and mantissas of common (i.e., base-10) logarithms.

In 1617, Henry Briggs published the first installment of his own table of common logarithms, containing the logarithms of all integers below 1000 to eight decimal places. This he followed, in 1624, by his Arithmetica Logarithmica, containing the logarithms of all integers from 1 to 20,000 and from 90,000 to 100,000 to fourteen places of decimals, together with a learned introduction, in which the theory and use of logarithms are fully developed. The interval from 20,000 to 90,000 was filled up by Adriaan Vlacq, a Dutch computer; but in his table, which appeared in 1628, the logarithms were given to only ten places of decimals.
Vlacq's table was later found to contain 603 errors, but "this cannot be regarded as a great number, when it is considered that the table was the result of an original calculation, and that more than 2,100,000 printed figures are liable to error." (Athenaeum, 15 June 1872. See also the Monthly Notices of the Royal Astronomical Society for May 1872.) An edition of Vlacq's work, containing many corrections, was issued at Leipzig in 1794 under the title Thesaurus Logarithmorum Completus by Jurij Vega.

Callet's seven-place table (Paris, 1795), instead of stopping at 100,000, gave the eight-place logarithms of the numbers between 100,000 and 108,000, in order to diminish the errors of interpolation, which were greatest in the early part of the table; and this addition was generally included in seven-place tables. The only important published extension of Vlacq's table was made by Mr. Sang in 1871, whose table contained the seven-place logarithms of all numbers below 200,000. Briggs and Vlacq also published original tables of the logarithms of the trigonometric functions.

Besides the tables mentioned above, a great collection, called Tables du Cadastre, was constructed under the direction of Prony, by an original computation, under the auspices of the French republican government of the 1700s. This work, which contained the logarithms of all numbers up to 100,000 to nineteen places, and of the numbers between 100,000 and 200,000 to twenty-four places, exists only in manuscript, "in seventeen enormous folios," at the Observatory of Paris. It was begun in 1792; and "the whole of the calculations, which to secure greater accuracy were performed in duplicate, and the two manuscripts subsequently collated with care, were completed in the short space of two years." (English Cyclopaedia, Biography, Vol. IV., article "Prony.") Cubic interpolation could be used to find the logarithm of any number to a similar accuracy.

To the modern student who has the benefit of a calculator, the work put into the tables just mentioned is a small indication of the importance of logarithms. Much of the history of logarithms is derived from The Elements of Logarithms with an Explanation of the Three and Four Place Tables of Logarithmic and Trigonometric Functions, by James Mills Peirce, University Professor of Mathematics in Harvard University, 1873.

See also:
- Natural logarithm
- Logarithmic identities
- Discrete logarithm
- Common logarithm
- Zech's logarithms
- Jurij Vega

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
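Returning to the change-of-base algorithm described in the calculus section above, here is a minimal F# sketch of that integer-part-and-swap recursion (the function name logApprox, the mutable loop, and the stopping tolerance are my choices, not from the article):

// log_b(x) for x >= b > 1: strip off the integer part n0, then recurse on
// the remainder with the roles of base and argument swapped, using
// log_b(x) = n0 + log_b(x / b^n0)  and  log_b(y) = 1 / log_y(b).
let rec logApprox b x tol =
    // n0 = largest natural number with b^n0 <= x.
    let mutable n0 = 0
    let mutable p = 1.0
    while p * b <= x do
        p <- p * b
        n0 <- n0 + 1
    let r = x / p                                // 1.0 <= r < b
    if r - 1.0 < tol then float n0
    else float n0 + 1.0 / logApprox r b tol      // swap roles: log_b r = 1 / log_r b

// Check against the change-of-base formula via the standard library:
let approx = logApprox 3.0 81.0 1e-12            // 4.0
let exact  = System.Math.Log(81.0, 3.0)          // 4.0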
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Logarithmic
13
560
Quick Table of Contents
4 Area Model
4.2 Rectangular Areas
4.3 Spaces and Conditionality
4.7 Ordering Constraints
4.8 Keeps and Breaks
4.9 Rendering Model
4.10 Sample Area Tree
4.11 List of Traits on Areas

In XSL, one creates a tree of formatting objects that serve as inputs or specifications to a formatter. The formatter generates a hierarchical arrangement of areas which comprise the formatted result. This section defines the general model of areas and how they interact. The purpose is to present an abstract framework which is used in describing the semantics of formatting objects. It should be seen as describing a series of constraints for conforming implementations, and not as prescribing particular algorithms.

The formatter generates an ordered tree, the area tree, which describes a geometric structuring of the output medium. The terms child, sibling, parent, descendant, and ancestor refer to this tree structure. The tree has a root node. Each area tree node other than the root is called an area and is associated with a rectangular portion of the output medium. Areas are not formatting objects; rather, a formatting object generates zero or more rectangular areas, and normally each area is generated by a unique object in the formatting object tree. The only exceptions are when several leaf nodes of the formatting object tree are combined to generate a single area, for example when several characters in sequence generate a single ligature glyph. In all such cases, relevant properties such as font-family and font-size are the same for all the generating formatting objects (see section [4.7.2 Line-building]).

An area has a content-rectangle, the portion in which its child areas are assigned, and optional padding and border. The diagram shows how these portions are related to one another. The outer bound of the border is called the border-rectangle, and the outer bound of the padding is called the padding-rectangle.

Each area has a set of traits, a mapping of names to values, in the way elements have attributes and formatting objects have properties. Individual traits are used either for rendering the area or for defining constraints on the result of formatting, or both. Traits used strictly for formatting purposes or for defining constraints may be called formatting traits, and traits used for rendering may be called rendering traits. Traits whose values are copied or derived from a property of the same or a corresponding name are listed in [C Property Summary] and [5 Property Refinement / Resolution]; other traits are listed in [4.11 List of Traits on Areas].

NOTE: Traits are also associated with FOs during the process of refinement. Some traits are assigned during formatting, while others are already present after refinement.

The semantics of each type of formatting object that generates areas are given in terms of which areas it generates and their place in the area-tree hierarchy. This may be further modified by interactions between the various types of formatting objects. The properties of the formatting object determine what areas are generated and how the formatting object's content is distributed among them. (For example, a word that is not to be hyphenated may not have its glyphs distributed into areas on two separate line-areas.)

The traits of an area are either:

1. "directly-derived" -- The values of directly-derived traits are the computed value of a property of the same or a corresponding name on the generating formatting object, or

2.
"indirectly-derived" -- The values of indirectly-derived traits are the result of a computation involving the computed values of one or more properties on the generating formatting object, other traits on this area or other interacting areas (ancestors, parent, siblings, and/or children) and/or one or more values constructed by the formatter. The calculation formula may depend on the type of the formatting object. This description assumes that refined values have been computed for all properties of formatting objects in the result tree, i.e., all relative and corresponding values have been computed and the inheritable values have been propagated as described in [5 Property Refinement / Resolution]. This allows the process of inheritance to be described once and avoids a need to repeat information on computing values in this description. There are two types of areas: block-areas and inline-areas. These differ according to how they are typically stacked by the formatter. An area can have block-area children or inline-area children as determined by the generating formatting object, but a given area's children must all be of one type. Although block-areas and inline-areas are typically stacked, some areas can be explicitly positioned. A line-area is a special kind of block-area whose children are all inline-areas. A glyph-area is a special kind of inline-area which has no child areas, and has a single glyph image as its content. Typical examples of areas are: a paragraph rendered by using an fo:block formatting object, which generates block-areas, and a character rendered by using an fo:character formatting object, which generates an inline-area (in fact, a glyph-area). Associated with any area are two directions, which are derived from the generating formatting object's writing-mode and reference-orientation properties: the block-progression-direction is the direction for stacking block-area descendants of the area, and the inline-progression-direction is the direction for stacking inline-area descendants of the area. Another trait, the shift-direction, is present on inline-areas and refers to the direction in which baseline shifts are applied. Also the glyph-orientation defines the orientation of glyph-images in the rendered result. The Boolean trait is-reference-area determines whether or not an area establishes a coordinate system for specifying indents. An area for which this trait is is called a reference-area. Only a reference-area may have a block-progression-direction which is different from that of its parent. A reference-area may be either a block-area or an inline-area. The Boolean trait is-viewport-area determines whether or not an area establishes an opening through which its descendant areas can be viewed, and can be used to present clipped or scrolled material; for example, in printing applications where bleed and trim is desired. An area for which this trait is is called a viewport-area. A common construct is a viewport/reference pair. This is a block-area viewport-area V and a block-area reference-area R, where R is the sole child of V and where the start-edge and end-edge of the content-rectangle of R are parallel to the start-edge and end-edge of the content-rectangle of V. 
Each area has the traits top-position, bottom-position, left-position, and right-position, which represent the distance from the edges of its content-rectangle to the like-named edges of the nearest ancestor reference-area (or the page-viewport-area in the case of areas generated by descendants of formatting objects whose absolute-position is fixed); the left-offset and top-offset determine the amount by which a relatively-positioned area is shifted for rendering. These traits receive their values during the formatting process, or in the case of absolutely positioned areas, during refinement. The block-progression-dimension and inline-progression-dimension of an area represent the extent of the content-rectangle of that area in each of the two relative dimensions.

Other traits include:

- the is-first and is-last traits, which are Boolean traits indicating the order in which areas are generated and returned by a given formatting object: is-first is true for the first area (or only area) generated and returned by a formatting object, and is-last is true for the last area (or only area);

- the amount of space outside the border-rectangle: space-before, space-after, space-start, and space-end (though some of these may be required to be zero on certain classes of area);

- the thickness of each of the four sides of the padding: padding-before, padding-after, padding-start, and padding-end;

- the style, thickness, and color of each of the four sides of the border: border-before, etc.;

- the background rendering of the area: background-color, background-image, and other background traits ("before", "after", "start", and "end" refer to relative directions and are defined below); and

- a set of font traits (see [7.7 Common Font Properties]) which are used to request a font that is deemed to be used within that area. The nominal-font for an area is determined by the font traits and the character descendants of the area (see [5.5.7 Font Properties]).

Unless otherwise specified, the traits of a formatting object are present on each of its generated areas, and with the same value. (However, see sections [4.7.2 Line-building] and [4.9.4 Border, Padding, and Background].)

As described above, the content-rectangle is the rectangle bounding the inside of the padding and is used to describe the constraints on the positions of descendant areas. It is possible that marks from descendant glyphs or other areas may appear outside the content-rectangle. Related to this is the allocation-rectangle of an area, which is used to describe the constraints on the position of the area within its parent area. For an inline-area this is either the normal-allocation-rectangle or the expanded-allocation-rectangle. The normal-allocation-rectangle extends to the content-rectangle in the block-progression-direction and to the border-rectangle in the inline-progression-direction. The expanded-allocation-rectangle extends outside the border-rectangle by an amount equal to the space-after in the block-progression-direction, an amount equal to the space-before in the opposite direction, an amount equal to the space-end in the inline-progression-direction, and an amount equal to the space-start in the opposite direction. Unless otherwise specified, the allocation-rectangle for an area is the normal-allocation-rectangle.
Allocation- and content-rectangles of an inline-area

For a block-area, the allocation-rectangle extends to the border-rectangle in the block-progression-direction and outside the content-rectangle in the inline-progression-direction by an amount equal to the end-indent, and in the opposite direction by an amount equal to the start-indent. The inclusion of space outside the border-rectangle of a block-area in the inline-progression-direction does not affect placement constraints, and is intended to promote compatibility with the CSS box model.

Allocation- and content-rectangles of a block-area

The edges of a rectangle are designated as follows: the before-edge is the edge occurring first in the block-progression-direction and perpendicular to it; the after-edge is the edge opposite the before-edge; the start-edge is the edge occurring first in the inline-progression-direction and perpendicular to it; the end-edge is the edge opposite the start-edge. The following diagram shows the correspondence between the various edge names for a mixed writing-mode example.

For purposes of this definition, the content-rectangle of an area uses the inline-progression-direction and block-progression-direction of that area; but the border-rectangle, padding-rectangle, and allocation-rectangle use the directions of its parent area. Thus the edges designated for the content-rectangle may not correspond with the same-named edges on the padding-, border-, and allocation-rectangles. This is important in the case of nested block-areas with different writing-modes.

Each inline-area has an alignment-point determined by the formatter, on the start-edge of its allocation-rectangle; for a glyph-area, this is a point on the start-edge of the glyph on its alignment baseline (see below). This is script-dependent and does not necessarily correspond to the (0,0) coordinate point used for the data describing the glyph shape.

In the area tree, the set of areas with a given parent is ordered. The terms initial, final, preceding, and following refer to this ordering. In any ordered tree, this sibling order extends to an ordering of the entire tree in at least two ways. In the pre-order traversal order of a tree, the children of each node (their order unchanged relative to one another) follow the node, but precede any following siblings of the node or of its ancestors. In the post-order traversal order of a tree, the children of each node precede the node, but follow any preceding siblings of the node or of its ancestors. "Preceding" and "following", when applied to non-siblings, will depend on the extension order used, which must be specified. However, in either of these given orders, the leaves of the tree (nodes without children) are unambiguously ordered.

This section defines the notion of block-stacking constraints and inline-stacking constraints involving areas. These are defined as ordered relations, i.e., if A and B have a stacking constraint it does not necessarily mean that B and A have a stacking constraint. These definitions are recursive in nature and some cases may depend upon simpler cases of the same definition. This is not circularity but rather a consequence of recursion. The intention of the definitions is to identify areas at any level of the tree which have only space between them.

The area-class trait is an enumerated value which is xsl-normal for an area which is stacked with other areas in sequence.
A normal area is an area for which this trait is xsl-normal. A page-level-out-of-line area is an area with area-class xsl-fixed; placement of these areas is controlled by the fo:page-sequence ancestor of its generating formatting object. A reference-level-out-of-line area is an area with area-class xsl-absolute; placement of these areas is controlled by the formatting object generating the relevant reference-area. Areas with area-class equal to one of xsl-normal, xsl-footnote, or xsl-before-float are defined to be stackable, indicating that they are supposed to be properly stacked.

If P is a block-area, then there is a fence before P if P is a reference-area or if the border-before-width or padding-before-width of P is non-zero. Similarly, there is a fence after P if P is a reference-area or if the border-after-width or padding-after-width of P is non-zero.

If A and B are stackable areas, and S is a sequence of space-specifiers, it is defined that A and B have block-stacking constraint S if any of the following conditions holds:

1. B is a block-area which is the first normal child of A, and S is the sequence consisting of the space-before of B.
2. A is a block-area which is the last normal child of B, and S is the sequence consisting of the space-after of A.
3. A and B are both block-areas, and either
   a. B is the next stackable sibling area of A, and S is the sequence consisting of the space-after of A and the space-before of B;
   b. B is the first normal child of a block-area P, there is no fence before P, A and P have a block-stacking constraint S', and S consists of S' followed by the space-before of B; or
   c. A is the last normal child of a block-area P, there is no fence after P, P and B have a block-stacking constraint S'', and S consists of the space-after of A followed by S''.

The use of "stackable" in two places in the above definition allows block-stacking constraints to apply between areas whose area-class is xsl-footnote or xsl-before-float as well as between normal areas.

Adjacent Edges with Block-stacking. When A and B have a block-stacking constraint, the adjacent edges of A and B are an ordered pair recursively defined as:

- In case 1, the before-edge of the content-rectangle of A and the before-edge of the allocation-rectangle of B.
- In case 2, the after-edge of the content-rectangle of A and the after-edge of the allocation-rectangle of B.
- In case 3a, the after-edge of the allocation-rectangle of A and the before-edge of the allocation-rectangle of B.
- In case 3b, the first of the adjacent edges of A and P, and the before-edge of the allocation-rectangle of B.
- In case 3c, the after-edge of the allocation-rectangle of A and the second of the adjacent edges of P and B.

[Diagram: block-stacking constraint example]

Example. In this diagram each node represents a block-area. Assume that all padding and border widths are zero, and none of the areas are reference-areas. Then P and A have a block-stacking constraint, as do A and B, A and C, B and C, C and D, D and B, B and E, D and E, and E and P; these are the only pairs in the diagram having block-stacking constraints. If B had non-zero padding-after, then D and E would not have any block-stacking constraint (though B and E would continue to have a block-stacking constraint).
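The recursive cases translate fairly directly into code. Below is a minimal sketch, under a deliberately simplified area model (every area is treated as stackable, and fences are reduced to before/after padding and reference-area status); all class and function names are inventions of this sketch, not spec terminology. The pairs listed in the example above are consistent with P having children A, B, E and B having children C, D, which is the tree used here:

```python
class Block:
    """Toy block-area: just enough structure to evaluate cases 1, 2, 3a-3c."""
    def __init__(self, name, space_before=0, space_after=0,
                 padding_before=0, padding_after=0, is_reference=False):
        self.name = name
        self.space_before, self.space_after = space_before, space_after
        self.padding_before, self.padding_after = padding_before, padding_after
        self.is_reference = is_reference
        self.parent = None
        self.children = []   # normal children, in document order

    def add(self, *kids):
        for k in kids:
            k.parent = self
            self.children.append(k)
        return self

def fence_before(p):
    return p.is_reference or p.padding_before != 0

def fence_after(p):
    return p.is_reference or p.padding_after != 0

def block_stacking(a, b):
    """Return the space-specifier sequence S if a and b have a
    block-stacking constraint, else None (mirrors cases 1, 2, 3a-3c)."""
    if b.parent is a and a.children and a.children[0] is b:       # case 1
        return [("space-before", b.name, b.space_before)]
    if a.parent is b and b.children and b.children[-1] is a:      # case 2
        return [("space-after", a.name, a.space_after)]
    if a.parent is not None and a.parent is b.parent:             # case 3a
        sibs = a.parent.children
        if sibs.index(b) == sibs.index(a) + 1:
            return [("space-after", a.name, a.space_after),
                    ("space-before", b.name, b.space_before)]
    p = b.parent                                                  # case 3b
    if p is not None and p.children and p.children[0] is b and not fence_before(p):
        s1 = block_stacking(a, p)
        if s1 is not None:
            return s1 + [("space-before", b.name, b.space_before)]
    p = a.parent                                                  # case 3c
    if p is not None and p.children and p.children[-1] is a and not fence_after(p):
        s2 = block_stacking(p, b)
        if s2 is not None:
            return [("space-after", a.name, a.space_after)] + s2
    return None

P, A, B, C, D, E = (Block(n) for n in "PABCDE")
P.add(A, B, E); B.add(C, D)
print(block_stacking(D, E) is not None)   # True (case 3c through B and E)
B.padding_after = 1                       # now there is a fence after B
print(block_stacking(D, E) is not None)   # False, as in the example
print(block_stacking(B, E) is not None)   # True: B and E remain constrained
```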
Inline-stacking constraints. This section recursively defines the inline-stacking constraints between two areas (either two inline-areas or one inline-area and one line-area), together with the notions of fence before and fence after; these definitions are interwoven with one another. This parallels the definition of block-stacking constraints, but with the additional complication that we may have a stacking constraint between inline-areas which are stacked in opposite inline-progression-directions. (This is not an issue for block-stacking constraints, because a block-area which is not a reference-area may not have a block-progression-direction different from that of its parent.)

If P and Q have an inline-stacking constraint, then P has a fence before Q if P is a reference-area or has non-zero border-width or padding-width at the first adjacent edge of P and Q. Similarly, Q has a fence after P if Q is a reference-area or has non-zero border-width or padding-width at the second adjacent edge of P and Q.

If A and B are normal areas, and S is a sequence of space-specifiers, it is defined that A and B have inline-stacking constraint S if any of the following conditions holds:

1. A is an inline-area or line-area, B is an inline-area which is the first normal child of A, and S is the sequence consisting of the space-start of B.
2. B is an inline-area or line-area, A is an inline-area which is the last normal child of B, and S is the sequence consisting of the space-end of A.
3. A and B are each either an inline-area or a line-area, and either
   a. both A and B are inline-areas, B is the next normal sibling area of A, and S is the sequence consisting of the space-end of A and the space-start of B;
   b. B is an inline-area which is the first normal child of an inline-area P, P has no fence after A, A and P have an inline-stacking constraint S', the inline-progression-direction of P is the same as the inline-progression-direction of the nearest common ancestor area of A and P, and S consists of S' followed by the space-start of B;
   c. A is an inline-area which is the last normal child of an inline-area P, P has no fence before B, P and B have an inline-stacking constraint S'', the inline-progression-direction of P is the same as the inline-progression-direction of the nearest common ancestor area of P and B, and S consists of the space-end of A followed by S'';
   d. B is an inline-area which is the last normal child of an inline-area P, P has no fence after A, A and P have an inline-stacking constraint S', the inline-progression-direction of P is opposite to the inline-progression-direction of the nearest common ancestor area of A and P, and S consists of S' followed by the space-end of B; or
   e. A is an inline-area which is the first normal child of an inline-area P, P has no fence before B, P and B have an inline-stacking constraint S'', the inline-progression-direction of P is opposite to the inline-progression-direction of the nearest common ancestor area of P and B, and S consists of the space-start of A followed by S''.

[Diagrams: adjacent edges with inline-stacking, parts 1 and 2]

When A and B have an inline-stacking constraint, the adjacent edges of A and B are an ordered pair defined as:

- In case 1, the start-edge of the content-rectangle of A and the start-edge of the allocation-rectangle of B.
- In case 2, the end-edge of the content-rectangle of A and the end-edge of the allocation-rectangle of B.
- In case 3a, the end-edge of the allocation-rectangle of A and the start-edge of the allocation-rectangle of B.
- In case 3b, the first of the adjacent edges of A and P, and the start-edge of the allocation-rectangle of B.
- In case 3c, the end-edge of the allocation-rectangle of A and the second of the adjacent edges of P and B.
- In case 3d, the first of the adjacent edges of A and P, and the end-edge of the allocation-rectangle of B.
- In case 3e, the start-edge of the allocation-rectangle of A and the second of the adjacent edges of P and B.

Two areas are adjacent if they have a block-stacking constraint or an inline-stacking constraint. It follows from the definitions that areas of the same type (inline or block) can be adjacent only if all their non-common ancestors are also of the same type (up to but not including their nearest common ancestor). Thus, for example, two inline-areas which reside in different line-areas are never adjacent.

An area A begins an area P if A is a descendant of P and P and A have either a block-stacking constraint or an inline-stacking constraint. In this case the second of the adjacent edges of P and A is defined to be a leading edge in P, and a space-specifier which applies to the leading edge is also defined to begin P. Similarly, an area A ends an area P if A is a descendant of P and A and P have either a block-stacking constraint or an inline-stacking constraint. In this case the first of the adjacent edges of A and P is defined to be a trailing edge in P, and a space-specifier which applies to the trailing edge is also defined to end P.

Each script has its preferred "baseline" for aligning glyphs from that script. Western scripts typically use an "alphabetic" baseline that touches at or near the bottom of capital letters. Further, for each font there is a preferred way of aligning embedded characters from different scripts; e.g., for a Western font there is a separate baseline for aligning embedded ideographic or Indic characters. Each block-area and inline-area has a dominant-baseline-identifier trait whose value is a baseline identifier corresponding to the type of alignment expected for inline-area descendants of that area, and each inline-area has an alignment-baseline which specifies how the area is aligned to its parent. These traits are interpreted as described in section [7.7.1 Fonts and Font Data].

For each font, an actual-baseline-table maps these identifiers to points on the start-edge of the area. By abuse of terminology, the line in the inline-progression-direction through the point corresponding to the dominant-baseline-identifier is called the "dominant baseline."

The text-altitude of an area is defined in terms of the actual-baseline-table for the nominal-font of that area, and is normally a length equal to the distance between the dominant baseline and the text-before baseline; this is modified if the font-height-override-before has a value other than use-font-metrics. The text-depth is normally defined as a length equal to the distance between the dominant baseline and the text-after baseline; this is modified if the font-height-override-after has a value other than use-font-metrics.

A space-specifier is a compound datatype whose components are minimum, optimum, maximum, conditionality, and precedence. Minimum, optimum, and maximum are lengths and can be used to define a constraint on a distance, namely that the distance should preferably be the optimum, and in any case be no less than the minimum nor more than the maximum. Any of these values may be negative, which can (for example) cause areas to overlap; in any case the minimum should be less than or equal to the optimum value, and the optimum less than or equal to the maximum value. Conditionality is an enumerated value which controls whether a space-specifier has effect at the beginning or end of a reference-area or a line-area.
Possible values of conditionality are retain and discard; a conditional space-specifier is one for which this value is discard. Precedence has a value which is either an integer or the special token force; a forcing space-specifier is one for which this value is force.

Space-specifiers occurring in sequence may interact with each other. The constraint imposed by a sequence of space-specifiers is computed by calculating for each space-specifier its associated resolved space-specifier, in accordance with their conditionality and precedence, as shown below in the space-resolution rules. The constraint imposed on a distance by a sequence of resolved space-specifiers is additive; that is, the distance is constrained to be no less than the sum of the resolved minimum values and no larger than the sum of the resolved maximum values.

To compute the resolved space-specifier of a given space-specifier S, consider the maximal inline-stacking constraint or block-stacking constraint containing S. The resolved space-specifier of S is a non-conditional, forcing space-specifier computed in terms of this sequence. If any of the space-specifiers in the maximal sequence is conditional and begins a reference-area or line-area, then it is suppressed, which means that its resolved space-specifier is zero; further, any conditional space-specifiers which consecutively follow it in the sequence are also suppressed. If a conditional space-specifier ends a reference-area or line-area, then it is suppressed together with any other conditional space-specifiers which consecutively precede it in the sequence. If any of the remaining space-specifiers is forcing, all non-forcing space-specifiers are suppressed, and the value of each of the forcing space-specifiers is taken as its resolved value. Alternatively, if all of the remaining space-specifiers are non-forcing, then the resolved space-specifier is defined in terms of those space-specifiers whose precedence is numerically highest, and among these those whose optimum value is the greatest; all other space-specifiers are suppressed. If there is only one of these, then its value is taken as its resolved value. Otherwise, when there are two or more space-specifiers all of the same highest precedence and the same (largest) optimum, the resolved space-specifier of the last space-specifier in the sequence is derived from these spaces by taking their common optimum value as its optimum, the greatest of their minimum values as its minimum, and the least of their maximum values as its maximum; all other space-specifiers are suppressed.

Example. Suppose the sequence of space values occurring at the beginning of a reference-area is: first, a space with value 10 points (that is, minimum, optimum, and maximum all equal to 10 points) and conditionality discard; second, a space with value 4 points and conditionality retain; and third, a space with value 5 points and conditionality discard, all three spaces having precedence zero. Then the first (10 point) space is suppressed under the conditionality rule, and the second (4 point) space is suppressed under the greatest-optimum rule. The resolved value of the third space is a non-conditional 5 points, even though it originally came from a conditional space.

The padding of a block-area does not interact with any space-specifier (except that, by definition, the presence of padding at the before- or after-edge prevents areas on either side of it from having a stacking constraint).
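The resolution procedure is mechanical enough to sketch in code. The following minimal, illustrative implementation (names such as SpaceSpec and resolve are inventions of this sketch, not spec terminology) reproduces the worked example above; it assumes the whole sequence begins a reference-area, as in that example:

```python
from dataclasses import dataclass

@dataclass
class SpaceSpec:
    minimum: float
    optimum: float
    maximum: float
    conditionality: str = "discard"   # "discard" (conditional) or "retain"
    precedence: object = 0            # an integer, or the token "force"

def resolve(seq, begins_ref_area=False, ends_ref_area=False):
    """Resolved (min, opt, max) constraint for a maximal sequence of
    space-specifiers, following the suppression rules above."""
    live = list(seq)
    # Suppress leading conditionals at the start of a reference-area/line-area.
    if begins_ref_area:
        while live and live[0].conditionality == "discard":
            live.pop(0)
    # Symmetrically, suppress trailing conditionals at the end.
    if ends_ref_area:
        while live and live[-1].conditionality == "discard":
            live.pop()
    if not live:
        return (0.0, 0.0, 0.0)
    # Forcing specifiers suppress all non-forcing ones and are additive.
    forcing = [s for s in live if s.precedence == "force"]
    if forcing:
        return (sum(s.minimum for s in forcing),
                sum(s.optimum for s in forcing),
                sum(s.maximum for s in forcing))
    # Otherwise keep only the numerically highest precedence, then the
    # greatest optimum; merge any survivors into one resolved value.
    top = max(s.precedence for s in live)
    live = [s for s in live if s.precedence == top]
    best = max(s.optimum for s in live)
    live = [s for s in live if s.optimum == best]
    return (max(s.minimum for s in live), best, min(s.maximum for s in live))

pt = lambda v, c: SpaceSpec(v, v, v, conditionality=c)
seq = [pt(10, "discard"), pt(4, "retain"), pt(5, "discard")]
print(resolve(seq, begins_ref_area=True))   # (5, 5, 5)
```

Running it confirms the example: the 10-point space is discarded at the start, the 4-point space loses to the larger optimum, and the resolved constraint is a fixed 5 points.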
The border or padding at the before-edge or after-edge of a block-area may be specified as conditional. If so, then it is set to zero if its associated edge is a leading or trailing edge in a reference-area. In this case, the border or padding is taken to be zero for purposes of the stacking constraint definitions.

Block-areas have several traits which typically affect the placement of their children. The line-height is used in line placement calculations. The line-stacking-strategy trait controls what kind of allocation is used for descendant line-areas and has an enumerated value (one of font-height, max-height, or line-height). This is all rigorously described below. All areas have these traits, but they have relevance only for areas which have stacked line-area children. The space-before and space-after traits determine the distance between the block-area and surrounding block-areas.

A block-area which is not a line-area typically has its size in the inline-progression-direction determined by its start-indent and end-indent and by the size of its nearest ancestor reference-area. A block-area which is not a line-area typically varies its block-progression-dimension to accommodate its descendants; alternatively, the generating formatting object may specify a block-progression-dimension for the block-area.

Block-area children of an area are typically stacked in the block-progression-direction within their parent area, and this is the default method of positioning block-areas. However, formatting objects are free to specify other methods of positioning child areas of areas which they generate, for example list-items or tables. For a parent area P whose children are block-areas, P is defined to be properly stacked if all of the following conditions hold:

For each block-area B which is a descendant of P:
- the before-edge and after-edge of its allocation-rectangle are parallel to the before-edge and after-edge of the content-rectangle of P;
- the start-edge of its allocation-rectangle is parallel to the start-edge of the content-rectangle of R (where R is the closest ancestor reference-area of B), and offset from it inward by a distance equal to the block-area's start-indent plus its start-intrusion-adjustment (as defined below), minus its border-start, padding-start, and space-start values; and
- the end-edge of its allocation-rectangle is parallel to the end-edge of the content-rectangle of R, and offset from it inward by a distance equal to the block-area's end-indent plus its end-intrusion-adjustment (as defined below), minus its border-end, padding-end, and space-end values.

For each pair of normal areas B and B' in the subtree below P, if B and B' have a block-stacking constraint S, then the distance between the adjacent edges of B and B' is consistent with the constraint imposed by the resolved values of the space-specifiers in S.

The start-intrusion-adjustment and end-intrusion-adjustment are traits used to deal with intrusions from floats in the inline-progression-direction. The notion of indent is intended to apply to the content-rectangle, but the constraint is written in terms of the allocation-rectangle because, as noted earlier ([4.2.3 Geometric Definitions]), the edges of the content-rectangle may not correspond to like-named edges of the allocation-rectangle.

Example. In the diagram, if area A has a space-after value of 3 points, B a space-before of 1 point, and C a space-before of 2 points, all with precedence force, and with zero border and padding, then the constraints will place B's before-edge 4 points below the after-edge of A, and C's before-edge 6 points below the after-edge of A.
Thus the 4-point gap receives the background color from P, and the 2-point gap before C receives the background color from B.

Intrusion adjustments (both start- and end-) are defined to account for the indentation that occurs as the result of side floats. If A and B are areas which have the same nearest ancestor reference-area, then A and B are defined to be inline-overlapping if there is some line parallel to the inline-progression-direction which intersects both the allocation-rectangle of A and the allocation-rectangle of B. If the distance in the block-progression-direction from the after-edge of the allocation-rectangle of A to the before-edge of the allocation-rectangle of B is some nonnegative number y, then A and B are defined to have clearance of y. If B is a block-area, then its before-intrusion-allowance is defined to be the sum of the border-before-width and padding-before-width values of all areas which are ancestors of B and descendants of B's nearest ancestor reference-area.

If A is an area of class xsl-side-float and B is a block-area, and A and B have the same nearest ancestor reference-area, then A is defined to intrude on B if:
- A and B are inline-overlapping; or
- A and B have clearance of y, where y is some value greater than zero and less than the before-intrusion-allowance of B (this is to account for irregularly-drawn borders and padding); or
- A has float="start", B is a descendant of an area L generated by an fo:list-item-body, A intrudes on some line-area or reference-area descendant D of the sibling area of L, and D and B are inline-overlapping. (This is to ensure that intrusion persists long enough so that the list-item-body does not drift to the other side of the list-item-label.)

If A is a block-area with float="start", then the start-intrusion value of A is the distance from the start-edge of the content-rectangle of the parent of A to the end-edge of the allocation-rectangle of A. If A is a block-area with float="end", then the end-intrusion value of A is the distance from the start-edge of the allocation-rectangle of A to the end-edge of the content-rectangle of the parent of A.

If B is a block-area which is a reference-area or a line-area, then its start-intrusion-adjustment is defined to be the maximum of the start-intrusion values of the areas which intrude on B, and its end-intrusion-adjustment is defined to be the maximum of the end-intrusion values of the areas which intrude on B. If B is not a reference-area or line-area, then its start-intrusion-adjustment and end-intrusion-adjustment are defined to be zero.

A line-area is a special type of block-area, and is generated by the same formatting object which generated its parent. Line-areas do not have borders and padding, i.e., border-before-width, padding-before-width, etc. are all zero. Inline-areas are stacked within a line-area relative to a baseline-start-point, which is a point determined by the formatter on the start-edge of the line-area's content-rectangle.

The allocation-rectangle of a line is determined by the value of the line-stacking-strategy trait: if the value is font-height, the allocation-rectangle is the nominal-requested-line-rectangle, defined below; if the value is max-height, the allocation-rectangle is the maximum-line-rectangle, defined below; and if the value is line-height, the allocation-rectangle is the per-inline-height-rectangle, defined below.
The nominal-requested-line-rectangle for a line-area is the rectangle whose start-edge and end-edge are parallel to and coincident with the start-edge and end-edge of the content-rectangle of the parent block-area (as modified by typographic properties such as indents), whose before-edge is separated from the baseline-start-point by the text-altitude, and whose after-edge is separated from the baseline-start-point by the text-depth. It has the same block-progression-dimension for each line-area child of a block-area.

The maximum-line-rectangle for a line-area is the rectangle whose start-edge and end-edge are parallel to and coincident with the start-edge and end-edge of the nominal-requested-line-rectangle, and whose extent in the block-progression-direction is the minimum required to enclose both the nominal-requested-line-rectangle and the allocation-rectangles of all the inline-areas stacked within the line-area; this may vary depending on the descendants of the line-area.

[Diagram: nominal and maximum line rectangles]

The per-inline-height-rectangle for a line-area is the rectangle whose start-edge and end-edge are parallel to and coincident with the start-edge and end-edge of the nominal-requested-line-rectangle, and whose extent in the block-progression-dimension is determined as follows. For each inline-area, the half-leading is defined to be half the difference between its line-height and its block-progression-dimension. The expanded-rectangle of an inline-area is the rectangle with start-edge and end-edge coincident with those of its allocation-rectangle, and whose before-edge and after-edge are outside those of its allocation-rectangle by a distance equal to the half-leading. The extent of the per-inline-height-rectangle in the block-progression-direction is then defined to be the minimum required to enclose both the nominal-requested-line-rectangle and the expanded-rectangles of all the inline-areas stacked within the line-area; this may vary depending on the descendants of the line-area.

Using the nominal-requested-line-rectangle allows equal baseline-to-baseline spacing. Using the maximum-line-rectangle allows constant space between line-areas. Using the per-inline-height-rectangle and zero space-before and space-after allows CSS-style line box stacking.
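To make the three strategies concrete, here is a small illustrative computation (not spec pseudocode) of a line-area's allocation extent in the block-progression-direction under each strategy. It assumes, for simplicity, that every inline-area's baseline coincides with the line's baseline-start-point (no baseline shifts), and measures each inline's allocation-rectangle as extents above and below that baseline; these conventions are inventions of the sketch:

```python
def line_extent(strategy, text_altitude, text_depth, inlines):
    """inlines: list of dicts with keys 'above', 'below' (allocation extent
    above/below the baseline) and 'line_height'. Returns the extent of the
    line-area's allocation-rectangle in the block-progression-direction."""
    if strategy == "font-height":
        # nominal-requested-line-rectangle: same for every line of the block
        return text_altitude + text_depth
    if strategy == "max-height":
        # smallest rectangle enclosing the nominal rectangle and every
        # inline allocation-rectangle
        above = max([text_altitude] + [i["above"] for i in inlines])
        below = max([text_depth] + [i["below"] for i in inlines])
        return above + below
    if strategy == "line-height":
        # expand each inline by its half-leading before enclosing
        above, below = text_altitude, text_depth
        for i in inlines:
            half = (i["line_height"] - (i["above"] + i["below"])) / 2
            above = max(above, i["above"] + half)
            below = max(below, i["below"] + half)
        return above + below
    raise ValueError(strategy)

# 12pt text (altitude 10, depth 2) with one oversized 18pt inline glyph.
inls = [{"above": 14, "below": 4, "line_height": 21.6}]
for s in ("font-height", "max-height", "line-height"):
    print(s, line_extent(s, 10, 2, inls))
# font-height 12, max-height 18, line-height 21.6
```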
An inline-area has its own line-height trait, which may be different from the line-height of its containing block-area. This may affect the placement of its ancestor line-area when the line-stacking-strategy is line-height. An inline-area has an actual-baseline-table for its nominal-font, and a dominant-baseline-identifier trait which determines how its stacked inline-area descendants are to be aligned. An inline-area may or may not have child areas, and if so it may or may not be a reference-area. The dimensions of the content-rectangle for an inline-area without children are computed as specified by the generating formatting object, as are those of an inline-area with block-area children. An inline-area with inline-area children has a content-rectangle which extends from its dominant baseline (see [4.2.6 Font Baseline Tables]) by its after-baseline-height in the block-progression-direction, and in the opposite direction by its before-baseline-height; in the inline-progression-direction it extends from the start-edge of the allocation-rectangle of its first child to the end-edge of the allocation-rectangle of its last child. Examples of inline-areas with children might include portions of inline mathematical expressions or areas arising from mixed writing systems (left-to-right within right-to-left, for example).

Inline-area children of an area are typically stacked in the inline-progression-direction within their parent area, and this is the default method of positioning inline-areas. Inline-areas are stacked relative to the dominant baseline, as defined above ([4.2.6 Font Baseline Tables]). For a parent area P whose children are inline-areas, P is defined to be properly stacked if all of the following conditions hold:

- For each inline-area descendant I of P, the start-edge, end-edge, before-edge, and after-edge of the allocation-rectangle of I are parallel to the corresponding edges of the content-rectangle of the nearest ancestor reference-area of I.
- For each pair of normal areas I and I' in the subtree below P, if I and I' have an inline-stacking constraint S, then the distance between the adjacent edges of I and I' is consistent with the constraint imposed by the resolved values of the space-specifiers in S.
- For any inline-area descendant I of P, the distance in the shift-direction from the dominant baseline of P to the alignment-point of I equals the distance between the dominant baseline of P and the point corresponding to the alignment-baseline of I (as determined by the actual-baseline-table of P), plus the sum of the baseline-shifts for I and all of its ancestors which are descendants of P.

This alignment is done with respect to the line-area's dominant baseline, and not with respect to the dominant baseline of any intermediate area. The first summand is computed to compensate for mixed writing systems with different baseline types, and the other summands involve deliberate baseline shifts for things like superscripts and subscripts.

The most common inline-area is a glyph-area, which contains the representation for a character in a particular font. A glyph-area has an associated nominal-font, determined by the area's typographic traits, which apply to its character data, and a glyph-orientation, determined by its writing-mode and reference-orientation, which determines the orientation of the glyph when it is rendered. The alignment-point and dominant-baseline-identifier of a glyph-area are assigned according to the writing-system in use (e.g., the glyph baseline in Western languages), and are used to control placement of inline-area descendants of a line-area. The formatter may generate inline-areas with different inline-progression-directions from their parent to accommodate correct inline-area stacking in the case of mixed writing systems. A glyph-area has no children. Its block-progression-dimension and actual-baseline-table are the same for all glyphs in a font.

A subset S of the areas returned to a formatting object is called properly ordered if the areas in that subset have the same order as their generating formatting objects. Specifically, if A1 and A2 are areas in S, returned by child formatting objects F1 and F2 where F1 precedes F2, then A1 must precede A2 in the pre-order traversal order of the area tree. If F1 equals F2 and A1 is returned prior to A2, then A1 must precede A2 in the pre-order traversal order of the area tree. For each formatting object F and each area-class C, the subset consisting of the areas returned to F with area-class C must be properly ordered, except where otherwise specified.
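The "properly ordered" condition is a simple monotonicity test once each area carries enough bookkeeping. In the sketch below (the tuple representation is an assumption of this sketch), each area is tagged with the index of the child formatting object that returned it, its return sequence number within that object, and its position in a pre-order traversal of the area tree:

```python
def properly_ordered(areas):
    """areas: list of (fo_index, return_seq, preorder_pos) tuples for the
    areas of one area-class returned to a formatting object. The subset is
    properly ordered when ranking by generating formatting object (and,
    within one object, by return order) agrees with pre-order position."""
    ranked = sorted(areas, key=lambda a: (a[0], a[1]))
    positions = [a[2] for a in ranked]
    return all(p < q for p, q in zip(positions, positions[1:]))

ok  = [(0, 0, 10), (0, 1, 11), (1, 0, 15)]
bad = [(0, 0, 10), (1, 0, 9)]   # child FO 1's area precedes FO 0's area
print(properly_ordered(ok), properly_ordered(bad))  # True False
```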
This section describes the ordering constraints that apply to formatting an fo:block or similar block-level object. A block-level formatting object F which constructs lines does so by constructing block-areas which it returns to its parent formatting object, and placing normal areas returned to F by its child formatting objects as children of those block-areas or of line-areas which it constructs as children of those block-areas. For each such formatting object F, it must be possible to form an ordered partition P consisting of ordered subsets S1, S2, ..., Sn of the normal areas returned by the child formatting objects, such that the following are all satisfied:

1. Each subset consists of a sequence of inline-areas, or of a single block-area.
2. The ordering of the partition follows the ordering of the formatting object tree. Specifically, if A is in Si and B is in Sj with i < j, or if A and B are both in the same subset Si with A before B in the subset order, then either A is returned by a preceding sibling formatting object of B, or A and B are returned by the same formatting object with A being returned before B.
3. The partitioning occurs at legal line-breaks. Specifically, if A is the last area of Si and B is the first area of Si+1, then the rules of the language and script in effect must permit a line-break between A and B, within the context of all areas in Si and Si+1.
4. The partition follows the ordering of the area tree, except for certain glyph substitutions and deletions. Specifically, if B1, B2, ..., Bp are the normal child areas of the area or areas returned by F (ordered in the pre-order traversal order of the area tree), then there is a one-to-one correspondence between these child areas and the partition subsets (i.e., n = p), and for each i, if Si consists of a single block-area then Bi is that block-area, and if Si consists of inline-areas then Bi is a line-area whose child areas are the same as the inline-areas in Si, and in the same order, except that where the rules of the language and script in effect call for glyph-areas to be substituted, inserted, or deleted, the substituted or inserted glyph-areas appear in the area tree in the corresponding place, and the deleted glyph-areas do not appear in the area tree.

Deletions occur when a glyph-area which is last within a subset Si has a suppress-at-line-break value of suppress, provided that i < n and Bi+1 is a line-area. Deletions also occur when a glyph-area which is first within a subset Si has a suppress-at-line-break value of suppress, provided that i > 1 and Bi-1 is a line-area. Insertions and substitutions may occur because of addition of hyphens or spelling changes due to hyphenation, or glyph image construction from syllabification or ligature formation. Substitutions that replace a sequence of glyph-areas with a single glyph-area should occur only when the margin, border, and padding in the inline-progression-direction (start- and end-), baseline-shift, and letter-spacing values are zero, treat-as-word-space is false, and the values of all other relevant traits match (alignment-adjust, baseline-identifier, the color trait, background traits, dominant-baseline-identifier, font traits, glyph-orientation-vertical, line-height, line-height-shift-adjustment, and so on).

Line-areas do not receive the background traits or text-decoration of their generating formatting object, or any other trait that requires generation of a mark during rendering.

This section describes the ordering constraints that apply to formatting an fo:inline or similar inline-level object.
An inline-level formatting object F which constructs one or more inline-areas does so by placing normal inline-areas returned to F by its child formatting objects as children of inline-areas which it generates. For each such formatting object F, it must be possible to form an ordered partition P consisting of ordered subsets S1, S2, ..., Sn of the normal inline-areas returned by the child formatting objects, such that the following are all satisfied:

1. Each subset consists of a sequence of inline-areas, or of a single block-area.
2. The ordering of the partition follows the ordering of the formatting object tree, as defined above.
3. The partitioning occurs at legal line-breaks, as defined above.
4. The partition follows the ordering of the area tree, except for certain glyph substitutions and deletions, as defined above.

Keep and break conditions apply to a class of areas, which are typically page-reference-areas, column-areas, and line-areas. The appropriate class for a given condition is referred to as a context, and an area in this class is a context-area. As defined in section [6.4.1 Introduction], page-reference-areas are areas generated by an fo:page-sequence using the specifications in a fo:page-master, and column-areas are normal-flow-reference-areas generated from a region-body, or region-reference-areas generated from other types of region-master.

A keep or break condition is an open statement about a formatting object and the tree relationships of the areas it generates with the relevant context-areas. These tree relationships are defined mainly in terms of leading or trailing areas. If A is a descendant of P, then A is defined to be leading in P if A has no preceding sibling which is a normal area, nor does any of its ancestor areas up to but not including P. Similarly, A is defined to be trailing in P if A has no following sibling which is a normal area, nor does any of its ancestor areas up to but not including P. For any given formatting object, the next formatting object in the flow is the first formatting object following it (in the pre-order traversal order) which generates and returns normal areas.

Break conditions are either break-before or break-after conditions. A break-before condition is satisfied if the first area generated and returned by the formatting object is leading within a context-area. A break-after condition depends on the next formatting object in the flow; it is satisfied if either there is no such next formatting object, or if the first normal area generated and returned by that formatting object is leading in a context-area.

Break conditions are imposed by the break-before and break-after properties. A refined value of page for these traits imposes a break condition with a context consisting of the page-reference-areas; a value of odd-page or even-page imposes a break condition with a context of odd-numbered or even-numbered page-reference-areas, respectively; a value of column imposes a break condition with a context of column-areas. A value of auto in a break-before or break-after trait imposes no break condition.

Keep conditions are either keep-with-previous, keep-with-next, or keep-together conditions. A keep-with-previous condition on an object is satisfied if the first area generated and returned by the formatting object is not leading within a context-area, or if there are no preceding areas in a post-order traversal of the area tree.
A keep-with-next condition is satisfied if the last area generated and returned by the formatting object is not trailing within a context-area, or if there are no following areas in a pre-order traversal of the area tree. A keep-together condition is satisfied if all areas generated and returned by the formatting object are descendants of a single context-area.

Keep conditions are imposed by the "within-page", "within-column", and "within-line" components of the "keep-with-previous", "keep-with-next", and "keep-together" properties. The refined value of each component specifies the strength of the keep condition imposed, with higher numbers being stronger than lower numbers and the value always being stronger than all numeric values. A component with value auto does not impose a keep condition. A "within-page" component imposes a keep condition with context consisting of the page-reference-areas; "within-column", with context consisting of the column-areas; and "within-line", with context consisting of the line-areas.

The area tree is constrained to satisfy all break conditions imposed. Each keep condition must also be satisfied, except when this would cause a break condition or a stronger keep condition to fail to be satisfied. If not all of a set of keep conditions of equal strength can be satisfied, then some maximal satisfiable subset of conditions of that strength must be satisfied (together with all break conditions and maximal subsets of stronger keep conditions, if any).
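The strength ordering on keep components (integers, with always stronger than every integer, and auto imposing nothing) is easy to capture. The sketch below, with invented names, shows one way to rank keep conditions before searching for a maximal satisfiable subset:

```python
import math

def keep_strength(value):
    """Map a keep component's refined value to a comparable strength.
    'auto' imposes no condition; 'always' outranks every integer."""
    if value == "auto":
        return None               # no keep condition at all
    if value == "always":
        return math.inf
    return int(value)

conditions = ["always", "3", "7", "auto", "2"]
imposed = [v for v in conditions if keep_strength(v) is not None]
# Try to satisfy stronger conditions first.
print(sorted(imposed, key=keep_strength, reverse=True))
# ['always', '7', '3', '2']
```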
This section makes explicit the relationship between the area tree and visually rendered output. Areas generate three types of marks: (1) the area background, if any; (2) the marks intrinsic to the area (a glyph, image, or decoration), if any; and (3) the area border, if any. An area tree is rendered by causing marks to appear on an output medium in accordance with the areas in the area tree. This section describes the geometric location of such marks, and how conflicts between marks are to be resolved.

Each area is rendered in a particular location. Formatting object semantics describe the location of intrinsic marks relative to the object's location, i.e., the left, right, top, and bottom edges of its content-rectangle. This section describes how the area's location is determined, which determines the location of its intrinsic marks.

For each page, the page-viewport-area corresponds isometrically to the output medium. The page-reference-area is offset from the page-viewport-area as described below in section [4.9.2 Viewport Geometry]. All areas in the tree with an area-class of xsl-fixed are positioned such that the left-, right-, top-, and bottom-edges of their content-rectangles are offset inward from the content-rectangle of their ancestor page-viewport-area by distances specified by the left-position, right-position, top-position, and bottom-position traits, respectively. Any area in the tree which is the child of a viewport-area is rendered as described in section [4.9.2 Viewport Geometry]. All other areas in the tree are positioned such that the left-, right-, top-, and bottom-edges of their content-rectangles are offset inward from the content-rectangle of their nearest ancestor reference-area by distances specified by the left-position, right-position, top-position, and bottom-position traits, respectively. These are shifted down and to the right by the values of the top-offset and left-offset traits, respectively, if the area has a relative-position of relative.

A reference-area which is the child of a viewport-area is positioned such that the start-edge and end-edge of its content-rectangle are parallel to the start-edge and end-edge of the content-rectangle of its parent viewport-area. The start-edge of its content-rectangle is offset from the start-edge of the content-rectangle of its parent viewport-area by an inline-scroll-amount, and the before-edge of its content-rectangle is offset from the before-edge of the content-rectangle of its parent viewport-area by a block-scroll-amount. If the block-progression-dimension of the reference-area is larger than that of the viewport-area and the overflow trait for the reference-area is scroll, then the inline-scroll-amount and block-scroll-amount are determined by a scrolling mechanism, if any, provided by the user agent. Otherwise, both are zero.

The visibility of marks depends upon the location of the marks, the visibility of the area, and the overflow of any ancestor viewport-areas. If an area has visibility hidden, it generates no marks. If an area has an overflow of hidden, or when the environment is non-dynamic and the overflow is scroll, then the area determines a clipping rectangle, which is defined to be the rectangle determined by the value of the clip trait of the area, and for any mark generated by one of its descendant areas, portions of the mark outside the clipping rectangle do not appear.

The border- and padding-rectangles are determined relative to the content-rectangle by the values of the common padding and border-width traits (border-before-width, etc.). For any area which is not a child of a viewport-area, the border is rendered between the border-rectangle and the padding-rectangle in accordance with the common border color and style traits. For a child of a viewport-area, the border is not rendered.

For an area which is not part of a viewport/reference pair, the background is rendered. For an area that is either a viewport-area or a reference-area in a viewport/reference pair, if the refined value of the overflow trait is scroll and the block-progression-dimension of the reference-area is larger than that of the viewport-area, then the background is rendered on the reference-area and not the viewport-area; otherwise it is rendered on the viewport-area and not the reference-area. The background, if any, is rendered in the padding-rectangle, in accordance with the background-image, background-color, background-repeat, background-position-vertical, and background-position-horizontal traits.

For each class of formatting objects, the marks intrinsic to its generated areas are specified in the formatting object description. For example, an fo:character object generates a glyph-area, and this is rendered by drawing a glyph within that area's content-rectangle in accordance with the area's font traits and its glyph-orientation and blink traits. In addition, other traits (for example the various score and score-color traits) specify other intrinsic marks. In the case of score traits (underline-score, overline-score, and through-score), the score thickness and position are specified by the nominal-font in effect; where the font fails to specify these quantities, they are implementation-dependent.

Marks are layered as described below, which defines a partial ordering of which marks are beneath which other marks. Two marks are defined to conflict if they apply to the same point in the output medium.
When two marks conflict, the one which is beneath the other does not affect points in the output medium where they both apply. Marks generated by the same area are layered as follows: the area background is beneath the area's intrinsic marks, and the intrinsic marks are beneath the border. Layering among the area's intrinsic marks is defined by the semantics of the area's generating formatting object and its properties. For example, a glyph-area's glyph drawing comes beneath the marks generated for text-decoration.

The stacking layer of an area is defined by its stacking context and its z-index value. The stacking layer of an area A is defined to be less than that of an area B if some ancestor-or-self A' of A and B' of B have the same stacking context and the z-index of A' is less than the z-index of B'. If neither stacking layer is less than the other, then they are defined to have the same stacking layer.

If A and B are areas, and the stacking layer of A is less than the stacking layer of B, then all marks generated by A are beneath all marks generated by B. If A and B are areas with the same stacking layer, the backgrounds of A and B come beneath all other marks generated by A and B. Further, if A is an ancestor of B (still with the same stacking layer), then the background of A is beneath all the marks of B, and all the marks of B are beneath the intrinsic marks (and border) of A. If A and B have the same stacking layer and neither is an ancestor of the other, then it is an error if either their backgrounds conflict or if a non-background mark of A conflicts with a non-background mark of B. An implementation may recover by proceeding as if the marks from the first area in the pre-order traversal order are beneath those of the other area.
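The stacking-layer comparison lends itself to a small sketch. Below, each area records its parent and, optionally, the stacking context it participates in together with its z-index; the comparison walks both ancestor-or-self chains looking for a pair that shares a stacking context. The representation is invented for illustration:

```python
class RArea:
    def __init__(self, parent=None, context=None, z_index=0):
        self.parent = parent
        self.context = context  # id of the stacking context, or None
        self.z_index = z_index

    def chain(self):
        """Yield this area and all of its ancestors."""
        node = self
        while node is not None:
            yield node
            node = node.parent

def stacking_layer_less(a, b):
    """True if some ancestor-or-self A' of a and B' of b share a stacking
    context and the z-index of A' is less than the z-index of B'."""
    for ap in a.chain():
        for bp in b.chain():
            if ap.context is not None and ap.context == bp.context:
                if ap.z_index < bp.z_index:
                    return True
    return False

root  = RArea(context="page", z_index=0)
low   = RArea(parent=root, context="page", z_index=-1)
high  = RArea(parent=root, context="page", z_index=2)
glyph = RArea(parent=high)   # takes high's layer through its ancestor chain
print(stacking_layer_less(low, glyph))   # True: -1 < 2 in context "page"
print(stacking_layer_less(glyph, low))   # False
```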
The Fields of Gravity, Pressure, and Mass

Level Surfaces. Coordinate surfaces of equal geometric depth below the ideal sea surface are useful when considering geometrical features, but in problems of statics or dynamics that involve consideration of the acting forces they are not always satisfactory. Because the gravitational force represents one of the most important of the acting forces, it is convenient to use as coordinate surfaces the level surfaces, defined as surfaces that are everywhere normal to the force of gravity. It will presently be shown that these surfaces do not coincide with surfaces of equal geometric depth.

It follows from the definition of level surfaces that, if no forces other than gravitational are acting, a mass can be moved along a level surface without expenditure of work, and that the amount of work expended or gained by moving a unit mass from one surface to another is independent of the path taken. The amount of work, W, required for moving a unit mass a distance, h, along the plumb line is

W = gh.

In the following, the sea surface will be considered a level surface. The work required or gained in moving a unit mass from sea level to a point above or below sea level is called the gravity potential, and in the m.t.s. system the unit of gravity potential is thus one dynamic decimeter. The practical unit of the gravity potential is the dynamic meter, for which the symbol D is used. When dealing with the sea, the vertical axis is taken as positive downward. The geopotential of a level surface at the geometrical depth, z, is therefore, in dynamic meters,

D = (1/10) ∫₀ᶻ g dz.

The acceleration of gravity varies with latitude and depth, and the geometrical distance between standard level surfaces therefore varies with the coordinates. At the North Pole the geometrical depth of the 1000-dynamic-meter surface is 1017.0 m, but at the Equator the depth is 1022.3 m, because g is greater at the Poles than at the Equator. Thus, level surfaces and surfaces of equal geometric depth do not coincide. Level surfaces slope relative to the surfaces of equal geometric depth, and therefore a component of the acceleration of gravity acts along surfaces of equal geometrical depth.

The topography of the sea bottom is represented by means of isobaths—that is, lines of equal geometrical depth—but it could be presented equally well by means of lines of equal geopotential. The contour lines would then represent the lines of intersection between the level surfaces and the irregular surface of the bottom. These contours would no longer be at equal geometric distances, and hence would differ from the usual topographic chart, but their characteristic would be that the amount of work needed for moving a given mass from one contour to another would be constant. They would also represent the new coast lines if the sea level were lowered without alteration of the topographic features of the bottom, provided the new sea level assumed perfect hydrostatic equilibrium and adjusted itself normal to the gravitational force. Any scalar field can similarly be represented by means of a series of topographic charts of equiscalar surfaces in which the contour lines represent the intersections of the equiscalar surfaces with the level surfaces.

The Field of Gravity. The fact that gravity is the resultant of two forces, the attraction of the earth and the centrifugal force due to the earth's rotation, need not be considered, and it is sufficient to define gravity as the force that is derived empirically by pendulum observations.
Furthermore, it is not necessary to take into account the minor irregular variations of gravity that detailed surveys reveal; it is enough to make use of the "normal" value, in meters per second per second, which at sea level can be represented as a function of the latitude, ϕ, by Helmert's formula:

g = 9.80616 (1 − 0.0026373 cos 2ϕ + 0.0000059 cos² 2ϕ).

The field of gravity can be completely described by means of a set of equipotential surfaces corresponding to standard intervals of the gravity potential. These are at equal distances if the geopotential is used as the vertical coordinate.

The Field of Pressure. The distribution of pressure in the sea can be determined by means of the equation of static equilibrium:

dp = gρ dz.

The hydrostatic equation will be discussed further in connection with the equations of motion (p. 440). At this time it is enough to emphasize that, as far as conditions in the ocean are concerned, the equation, for all practical purposes, is exact. Introducing the geopotential expressed in dynamic meters as the vertical coordinate, one has 10 dD = g dz. When the pressure is measured in decibars (defined by 1 bar = 10⁶ dynes per square centimeter), the factor k becomes equal to 1/10, and equation (XII, 3) is reduced to

dD = α dp.

Because ρs,ϕ,p and αs,ϕ,p differ little from unity, a difference in pressure is expressed in decibars by nearly the same number that expresses the difference in geopotential in dynamic meters, or the difference in geometric depth in meters. Approximately,

Δp (decibars) ≈ ΔD (dynamic meters) ≈ Δz (meters).

The pressure field can be completely described by means of a system of isobaric surfaces. Using the geopotential as the vertical coordinate, one can present the pressure distribution by a series of charts showing isobars at standard level surfaces or by a series of charts showing the geopotential topography of standard isobaric surfaces. In meteorology, the former manner of representation is generally used on weather maps, in which the pressure distribution at sea level is represented by isobars. In oceanography, on the other hand, it has been found practical to represent the geopotential topography of isobaric surfaces. The pressure gradient is defined by

∇p = (∂p/∂x, ∂p/∂y, ∂p/∂z).

The pressure gradient has two principal components: the vertical, directed normal to the level surfaces, and the horizontal, directed parallel to the level surfaces. When static equilibrium exists, the vertical component, expressed as force per unit mass, is balanced by the acceleration of gravity. This is the statement which is expressed mathematically by means of the equation of hydrostatic equilibrium. In a resting system the horizontal component of the pressure gradient is not balanced by any other force, and therefore the existence of a horizontal pressure gradient indicates that the system is not at rest or cannot remain at rest. The horizontal pressure gradients, therefore, although extremely small, are all-important to the state of motion, whereas the vertical ones are insignificant in this respect.

It is evident that no motion due to pressure distribution exists or can develop if the isobaric surfaces coincide with level surfaces. In such a state of perfect hydrostatic equilibrium the horizontal pressure gradient vanishes. Such a state would be present if the atmospheric pressure, acting on the sea surface, were constant, if the sea surface coincided with the ideal sea level, and if the density of the water depended on pressure only. None of these conditions is fulfilled. The isobaric surfaces are generally inclined relative to the level surfaces, and horizontal pressure gradients are present, forming a field of internal force.
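As a quick numerical check on the figures quoted above, the snippet below evaluates Helmert's formula at the Equator and the Pole and converts the 1000-dynamic-meter surface to geometric depth using the surface value of g. It ignores the small increase of g with depth, so the results differ slightly from the quoted 1017.0 m and 1022.3 m:

```python
import math

def g_helmert(lat_deg):
    """Normal gravity at sea level (m/s^2), Helmert's formula."""
    c2 = math.cos(2 * math.radians(lat_deg))
    return 9.80616 * (1 - 0.0026373 * c2 + 0.0000059 * c2 * c2)

# Geometric depth of the D = 1000 dynamic-meter level surface:
# from 10 dD = g dz, z = 10 * D / g  (D in dynamic meters, z in meters).
for lat in (0, 90):
    g = g_helmert(lat)
    print(f"lat {lat:2d}: g = {g:.5f} m/s^2, z(1000 dyn m) = {10000/g:.1f} m")
# lat  0: g = 9.78036 m/s^2, z(1000 dyn m) = 1022.5 m
# lat 90: g = 9.83208 m/s^2, z(1000 dyn m) = 1017.1 m
```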
This field of force can also be defined by considering the slopes of isobaric surfaces instead of the horizontal pressure gradients. By definition the pressure gradient along an isobaric surface is zero, but, if this surface does not coincide with a level surface, a component of the acceleration of gravity acts along the isobaric surface and will tend to set the water in motion, or must be balanced by other forces if a steady state of motion is reached. The internal field of force can therefore be represented also by means of the component of the acceleration of gravity along isobaric surfaces (p. 440).

Regardless of the definition of the field of force that is associated with the pressure distribution, for a complete description of this field one must know the absolute isobars at level surfaces or the absolute geopotential contour lines of isobaric surfaces. These demands cannot possibly be met. One reason is that measurements of geopotential distances of isobaric surfaces must be made from the actual sea surface, the topography of which is unknown. It will be shown that all one can do is to determine the relative field of pressure, that is, the pressure field that would be present if the sea surface coincided with an ideal level surface.

In order to illustrate this point, a fresh-water lake will be considered which is so small that horizontal differences in atmospheric pressure can be disregarded and the acceleration of gravity can be considered constant. Let it first be assumed that the water is homogeneous, meaning that the density is independent of the coordinates. In this case, the distance between any two isobaric surfaces is expressed by the equation

z₂ − z₁ = (p₂ − p₁)/(gρ).     (XII, 4)

This equation simply states that the geometrical distance between isobaric surfaces is constant, and it defines completely the internal field of pressure. The total field of pressure depends, however, upon the configuration of the free surface of the lake. If no wind blows and if no stress is thus exerted on the free surface of the lake, perfect hydrostatic equilibrium exists; the free surface is a level surface, and, similarly, all other isobaric surfaces coincide with level surfaces. On the other hand, if a wind blows across the lake, the equilibrium will be disturbed, the water level will be lowered at one end of the lake, and water will be piled up against the other end. The free surface will still be an isobaric surface, but it will now be inclined relative to a level surface. The relative field of pressure, however, will remain unaltered, as represented by equation (XII, 4), meaning that all other isobaric surfaces will have the same geometric shape as that of the free surface.

One might continue and introduce a number of layers of different density, and one would find that the same reasoning would be applicable. The method is therefore also applicable when one deals with a liquid within which the density changes continually with depth. By means of observations of the density at different depths, one can derive the relative field of pressure and can represent this by means of the topography of the isobaric surfaces relative to some arbitrarily or purposely selected isobaric surface. The relative field of force can be derived from the slopes of the isobaric surfaces relative to the selected reference surface, but, in order to find the absolute field of pressure and the corresponding absolute field of force, it is necessary to determine the absolute shape of one isobaric surface.
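A tiny numerical illustration of the lake argument, with made-up numbers: in homogeneous water the depth of an isobaric surface below the free surface is fixed by equation (XII, 4), so tilting the free surface tilts every isobaric surface by the same amount and leaves the relative field unchanged.

```python
RHO = 1000.0   # homogeneous fresh water, kg/m^3
G = 9.81       # m/s^2, treated as constant over the small lake

def isobar_depth(p_pascal):
    """Depth below the free surface at which pressure p is reached."""
    return p_pascal / (RHO * G)

# Free-surface elevation after a wind sets up a 0.5 m tilt across the
# lake (x runs from 0 to 1 across the lake; numbers are illustrative).
eta_wind = lambda x: 0.5 * (x - 0.5)

for p in (50_000.0, 100_000.0):          # two isobaric surfaces (Pa)
    d = isobar_depth(p)
    z0 = eta_wind(0.0) - d               # absolute height at one shore
    z1 = eta_wind(1.0) - d               # absolute height at the other
    print(f"p={p:>8.0f} Pa: {d:6.2f} m below the surface everywhere; "
          f"heights {z0:+.2f} m and {z1:+.2f} m")
# Each isobaric surface shares the free surface's tilt (z1 - z0 = 0.5 m),
# so their separation, the relative field, is unchanged by the wind.
```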
These considerations have been set forth in great detail because it is essential to be fully aware of the difference between the absolute field of pressure and the relative field of pressure, and to know what types of data are needed in order to determine each of these fields.

The Field of Mass. The field of mass in the ocean is generally described by means of the specific volume as expressed by (p. 57)

αs,ϕ,p = α35,0,p + δ.

The former field, α35,0,p, is of a simple character. The surfaces of α35,0,p coincide with the isobaric surfaces, the deviations of which from level surfaces are so small that for practical purposes the surfaces of α35,0,p can be considered as coinciding with level surfaces or with surfaces of equal geometric depth. The field of α35,0,p can therefore be fully described by means of tables giving α35,0,p as a function of pressure and giving the average relationships between pressure, geopotential, and geometric depth. Since this field can be considered a constant one, the field of mass is completely described by means of the anomaly of the specific volume, δ, the determination of which was discussed on p. 58.

The field of mass can be represented by means of the topography of anomaly surfaces or by means of horizontal charts or vertical sections in which curves of δ = constant are entered. The latter method is the most common. It should always be borne in mind, however, that the specific volume in situ is equal to the sum of the standard specific volume, α35,0,p, at the pressure in situ and the anomaly, δ.

The Relative Field of Pressure. It is impossible to determine the relative field of pressure in the sea by direct observations, using some type of pressure gauge, because an error of only 0.1 m in the depth of a pressure gauge below the sea surface would introduce errors greater than the horizontal differences that should be established. If the field of mass is known, however, the internal field of pressure can be determined from the equation of static equilibrium in one of the forms

dp = gρ dz,   dD = α dp = α35,0,p dp + δ dp.

Integration of the latter form gives

D₂ − D₁ = ∫(p₁ to p₂) α35,0,p dp + ∫(p₁ to p₂) δ dp,     (XII, 6)

where the second term, the anomaly of the geopotential distance between the two isobaric surfaces, is

ΔD = ∫(p₁ to p₂) δ dp.     (XII, 7)

Equation (XII, 6) can be interpreted as expressing that the relative field of pressure is composed of two fields: the standard field and the field of anomalies. The standard field can be determined once and for all, because the standard geopotential distance between isobaric surfaces represents the distance if the salinity of the sea water is constant at 35 ‰ and the temperature is constant at 0°C. The standard geopotential distance decreases with increasing pressure, because the specific volume decreases (density increases) with pressure, as is evident from table 7H in Bjerknes (1910), according to which the standard geopotential distance between the isobaric surfaces 0 and 100 decibars is 97.242 dynamic meters, whereas the corresponding distance between the 5000- and 5100-decibar surfaces is 95.153 dynamic meters. The standard geopotential distance between any two standard isobaric surfaces is, on the other hand, independent of latitude, but the geometric distance between isobaric surfaces varies with latitude because g varies.

Because in the standard field all isobaric surfaces are parallel relative to each other, this standard field lacks a relative field of horizontal force. The relative field of force, which is associated with the distribution of mass, is completely described by the field of the geopotential anomalies.
It follows that a chart showing the topography of one isobaric surface relative to another by means of the geopotential anomalies is equivalent to a chart showing the actual geopotential topography of one isobaric surface relative to another. The practical determination of the relative field of pressure is therefore reduced to computation and representation of the geopotential anomalies, but the absolute pressure field can be found only if one can determine independently the absolute topography of one isobaric surface.

In order to evaluate equation (XII, 7), it is necessary to know the anomaly, δ, as a function of absolute pressure. The anomaly is computed from observations of temperature and salinity, but oceanographic observations give information about the temperature and the salinity at known geometrical depths below the actual sea surface, and not at known pressures. This difficulty can fortunately be overcome by means of an artificial substitution, because at any given depth the numerical value of the absolute pressure expressed in decibars is nearly the same as the numerical value of the depth expressed in meters, as is evident from the following corresponding values:

| Standard sea pressure (decibars) | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 |
| Approximate geometric depth (m)  |  990 | 1975 | 2956 | 3933 | 4906 | 5875 |

Thus, the numerical values of geometric depth deviate only 1 or 2 percent from the numerical values of the standard pressure at that depth. This agreement is not accidental, but has been brought about by the selection of the practical unit of pressure, the decibar. It follows that the temperature at a pressure of 1000 decibars is nearly equal to the temperature at a geometric depth of 990 m, or the temperature at the pressure of 6000 decibars is nearly equal to the temperature at a depth of 5875 m.

The vertical temperature gradients in the ocean are small, especially at great depths, and therefore no serious error is introduced if, instead of using the temperature at 990 m when computing δ, one makes use of the temperature at 1000 m, and so on. The difference between anomalies for neighboring stations will be even less affected by this procedure, because within a limited area the vertical temperature gradients will be similar. The introduced error will be nearly the same at both stations, and the difference will be an error of absolutely negligible amount. In practice one can therefore consider the numbers that represent the geometric depth in meters as representing absolute pressure in decibars.

If the depth in meters at which either directly observed or interpolated values of temperature and salinity are available is interpreted as representing pressure in decibars, one can compute, by means of the tables in the appendix, the anomaly of specific volume at the given pressure. By multiplying the average anomaly of specific volume between two pressures by the difference in pressure in decibars (which is considered equal to the difference in depth in meters), one obtains the geopotential anomaly of the isobaric sheet in question expressed in dynamic meters. By adding these geopotential anomalies, one can find the corresponding anomaly between any two given pressures. An example of a complete computation is given in table 61.

[Table 61, not reproduced here, carries the column headings: Meters or decibars; Temp. (°C); Salinity (‰); σt; 10⁵Δs,ϑ; 10⁵δs,p; 10⁵δϑ,p; 10⁵δ; ΔD; ΔD (dynamic meters).]

Certain simple relationships between the field of pressure and the field of mass can be derived by means of the equations for equiscalar surfaces (p. 155) and the hydrostatic equation.
In a vertical profile the isobars and the isopycnals are defined by the slopes

ip = −(∂p/∂x)/(∂p/∂z) and iρ = −(∂ρ/∂x)/(∂ρ/∂z),

where, by the hydrostatic equation, ∂p/∂z = gρ.

Profiles of isobaric surfaces based on the data from a series of stations in a section must evidently be in agreement with the inclination of the δ curves, as shown in a section and based on the same data, but this obvious rule often receives little or no attention.

Relative Geopotential Topography of Isobaric Surfaces. If simultaneous observations of the vertical distribution of temperature and salinity were available from a number of oceanographic stations within a given area, the relative pressure distribution at the time of the observations could be represented by a series of charts showing the geopotential topography of standard isobaric surfaces relative to one arbitrarily or purposely selected reference surface. From the preceding it is evident that these topographies are completely represented by means of the geopotential anomalies. In practice, simultaneous observations are not available, but in many instances it is permissible to assume that the time changes of the pressure distribution are so small that observations taken within a given period may be considered simultaneous. The smaller the area, the shorter must be the time interval within which the observations are made. Figs. 110, p. 454, and 204, p. 726, represent examples of geopotential topographies. The conclusions as to currents which can be based on such charts will be considered later.

Charts of geopotential topographies can be prepared in two different ways. By the common method, the anomalies of a given surface relative to the selected reference surface are plotted on a chart and isolines are drawn, following the general rules for presenting scalar quantities. In this manner, relative topographies of a series of isobaric surfaces can be prepared, but the method has the disadvantage that each topography is prepared separately. By the other method a series of charts of relative topographies is prepared stepwise, taking advantage of the fact that the anomaly of geopotential thickness of an isobaric sheet is proportional to the average anomaly of specific volume within that sheet.

This method is widely used in meteorology, but is not commonly employed in oceanography because, for the most part, the different systems of curves are so nearly parallel to each other that graphical addition is cumbersome. The method is occasionally useful, however, and has the advantage of showing clearly the relationship between the distribution of mass and the distribution of pressure. It especially brings out the geometrical feature that the isohypses of the isobaric surfaces retain their form when passing from one isobaric surface to another only if the anomaly curves are of the same form as the isohypses. This characteristic of the field is of great importance to the dynamics of the system.

Character of the Total Field of Pressure. From the above discussion it is evident that, in the absence of a relative field of pressure, isosteric and isobaric surfaces must coincide. Therefore, if for some reason one isobaric surface, say the free surface, deviates from a level surface, then all isobaric and isosteric surfaces must deviate in a similar manner. Assume that one isobaric surface in the disturbed condition lies at a distance Δh cm below the position in undisturbed conditions. Then all other isobaric surfaces along the same vertical are also displaced the distance Δh from their undisturbed position.
The distance Δh is positive downward because the positive z axis points downward. Call the pressure at a given depth at undisturbed conditions p0. Then the pressure at disturbed conditions is pt = p0 − Δp, where Δp = gρΔh and where the displacement Δh can be considered as being due to a deficit or an excess of mass in the water column under consideration. The above considerations are equally valid if a relative field of pressure exists. The absolute distribution of pressure can always be completely determined from the equation of static equilibrium, provided that the distribution of mass and the absolute topography of one isobaric surface are known.
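The stepwise computation described above (multiply the mean anomaly of each isobaric sheet by its thickness in decibars, then sum) is easily put in algorithmic form. In this sketch the station profile is hypothetical, and depths in meters are treated as pressures in decibars, as the text recommends:

```python
# Geopotential anomaly Delta D between two isobaric surfaces from a station
# profile, table-61 style. The delta values are hypothetical specific-volume
# anomalies in m^3/kg; 1 decibar = 1e4 Pa, 1 dynamic meter = 10 J/kg.
pressures_db = [0, 100, 200, 400, 600, 1000]          # observation levels
delta = [120e-8, 95e-8, 80e-8, 60e-8, 45e-8, 30e-8]   # anomaly at each level

total_dyn_m = 0.0
for i in range(len(pressures_db) - 1):
    dp_pa = (pressures_db[i + 1] - pressures_db[i]) * 1.0e4  # sheet thickness, Pa
    mean_delta = 0.5 * (delta[i] + delta[i + 1])             # mean anomaly of sheet
    total_dyn_m += mean_delta * dp_pa / 10.0                 # accumulate, dyn m
print(round(total_dyn_m, 3), "dynamic meters")  # anomaly of geopotential distance
```

Differences of such sums between neighboring stations give the relative topography of one isobaric surface with respect to another.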
The brain is the center of the nervous system in all vertebrate and most invertebrate animals; only a few invertebrates, such as sponges, jellyfish, adult sea squirts, and starfish, do not have one, even though diffuse neural tissue may be present. It is located in the head, usually close to the primary sensory organs for such senses as vision, hearing, balance, taste, and smell. The brain of a vertebrate is the most complex organ of its body. In a typical human the cerebral cortex (the largest part) is estimated to contain 15–33 billion neurons, each connected by synapses to several thousand other neurons. These neurons communicate with one another by means of long protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to distant parts of the brain or body, targeting specific recipient cells.

Physiologically, the function of the brain is to exert centralized control over the other organs of the body. The brain acts on the rest of the body both by generating patterns of muscle activity and by driving secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information-integrating capabilities of a centralized brain.

From a philosophical point of view, what makes the brain special in comparison to other organs is that it forms the physical structure that generates the mind. As Hippocrates put it: "Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations." Through much of history, the mind was thought to be separate from the brain. Even for present-day neuroscience, the mechanisms by which brain activity gives rise to consciousness and thought remain very challenging to understand: despite rapid scientific progress, much about how the brain works remains a mystery. The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions has been very difficult to decipher. The most promising approaches treat the brain as a biological computer, very different in mechanism from electronic computers, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.

This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. The most important is brain disease and the effects of brain damage, covered in the human brain article because the most common diseases of the human brain either do not show up in other species, or else manifest themselves in different ways.

The shape and size of the brains of different species vary greatly, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species.
Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates.

The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another.

The brains of all species are composed primarily of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain. The property that makes neurons unique is their ability to send signals to specific target cells over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer (a rough numerical check follows below). These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials.

Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell.

Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (excite the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways.
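The scale comparison above can be checked with rough arithmetic. The dimensions in this sketch are assumed textbook values (a soma about 20 µm across, an axon about 0.5 µm thick and a few centimeters long), not figures given in this article:

```python
# Rough check of the magnification comparison: scale a cortical pyramidal cell
# so that its soma matches a human body. All dimensions are assumed typical
# values for illustration.
soma_diameter = 20e-6   # m, assumed soma size
axon_diameter = 0.5e-6  # m, assumed axon thickness
axon_length = 0.04      # m, assumed length of a long cortical axon
body_height = 1.8       # m, height the soma is scaled up to

scale = body_height / soma_diameter        # magnification factor
print(scale)                               # 90000.0
print(axon_diameter * scale * 100, "cm")   # ~4.5 cm: a cable a few cm thick
print(axon_length * scale / 1000, "km")    # ~3.6 km: well over a kilometer
```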
A large fraction of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory.

Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. Many axons are wrapped in thick sheaths of a fatty substance called myelin, which serves to greatly increase the speed of signal propagation. Myelin is white, so parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies.

The generic bilaterian nervous system

Except for a few primitive types such as sponges (which have no nervous system) and cnidarians (which have a nervous system consisting of a diffuse nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body shape (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared before the start of the Cambrian period, 550–600 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, including vertebrates, it is the most complex organ in the body. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".

There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms, tunicates, and a group of primitive flatworms called Acoelomorpha. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure.

The invertebrates include arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures. Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.
There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work: - Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have turned out to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates turned up a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. - The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model system for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite morph contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed every section under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body. Nothing approaching this level of detail is available for any other organism, and the information has been used to enable a multitude of studies that would not have been possible without it. - The sea slug Aplysia was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments. The first vertebrates appeared over 500 million years ago (Mya), during the Cambrian period, and may have resembled the modern hagfish in form. Sharks appeared about 450 Mya, amphibians about 400 Mya, reptiles about 350 Mya, and mammals about 200 Mya. No modern species should be described as more "primitive" than others, strictly speaking, since each has an equally long evolutionary history—but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is greatly elaborated and expanded. Brains are most simply compared in terms of their size. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have larger brains, measured as a fraction of body size: the animal with the largest brain-size-to-body-size ratio is the hummingbird. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. 
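A short sketch of how such a power law is applied in practice; the constant k and the sample masses below are assumed illustrative values, not figures from this article:

```python
# Brain-body allometry: expected brain mass E = k * M**0.75. The exponent is
# the one quoted in the text; k and the sample masses are assumed values.
def expected_brain_mass_g(body_mass_g, k=0.06, exponent=0.75):
    return k * body_mass_g ** exponent

def encephalization_quotient(brain_mass_g, body_mass_g):
    """Ratio of actual to expected brain mass; > 1 means brainier than typical."""
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# A human-sized example: ~1350 g brain, ~65 kg body (approximate figures).
print(round(encephalization_quotient(1350, 65_000), 1))
# ~5.5 with these assumed constants; calibrated EQ scales, quoted later in
# the article, place humans in the 7-to-8 range.
```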
This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains than their prey, relative to body size.

All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.

The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined tightly to one another, forming the so-called blood–brain barrier, which protects the brain from toxins that might enter through the bloodstream.

Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and cerebellum, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity.

Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species.

Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood:

- The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and motor functions.
- The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture.
- The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belie its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus regulates sleep and wake cycles, eating and drinking, hormone release, and many other critical biological functions.
- The thalamus is another collection of nuclei with diverse functions. Some are involved in relaying information to and from the cerebral hemispheres. Others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors, including eating, drinking, defecation, and copulation.
- The cerebellum modulates the outputs of other brain systems to make them precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in, but learned by trial and error. Learning how to ride a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum.
- The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain.
- The pallium is a layer of gray matter that lies on the surface of the forebrain. In reptiles and mammals, it is called the cerebral cortex. The pallium is involved in multiple functions, including olfaction and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be processed.
- The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in spatial memory and navigation in fishes, birds, reptiles, and mammals.
- The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia.
- The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in primates.

The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size. Size, however, is not the only difference: there are also substantial differences in shape.
The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.

The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.

The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The most widely accepted way of comparing brain sizes across species is the so-called encephalization quotient (EQ), which takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower.

Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain.

The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.

Neurotransmitters and receptors

Neurotransmitters are chemicals that are released at synapses when an action potential activates them. Neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell, and thereby alter the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases.
The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as marijuana, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others. The two neurotransmitters that are used most widely in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA. There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the Raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain, but are not as ubiquitously distributed as glutamate and GABA. As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology. All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. Glial cells play a major role in brain metabolism, by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients. Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the fraction is much higher—in humans it rises to 20–25%. 
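The quoted fractions translate into only a few tens of watts. A back-of-the-envelope conversion, assuming a typical human basal metabolic rate of about 1600 kcal per day (an assumed figure, not one given in the article):

```python
# Convert a basal metabolic rate to watts and take the brain's share.
kcal_per_day = 1600                        # assumed human basal metabolism
watts_total = kcal_per_day * 4184 / 86400  # kcal/day -> joules/s (~77 W)
for fraction in (0.20, 0.25):              # brain's share, quoted in the text
    print(round(watts_total * fraction, 1), "W")  # ~15 to 19 W
```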
The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (octanoic and hexanoic acids), lactate, acetate, and possibly amino acids.

From an evolutionary-biological perspective, the function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other. To generate purposeful and unified action, the brain first brings information from sense organs together at a central location. It then processes this raw data to extract information about the structure of the environment. Next it combines the processed sensory information with information about the current needs of an animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns that are suited to maximize the welfare of the animal. These signal-processing tasks require intricate interplay between a variety of functional subsystems.

The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.

The essence of the information processing approach is to try to understand brain function in terms of information flow and implementation of algorithms. One of the most influential early contributions was a 1959 paper titled "What the frog's eye tells the frog's brain": the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view, a discovery that eventually brought them a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.
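Response patterns like these are commonly explored with simplified spiking-neuron models. As a minimal sketch of the kind of model discussed next, a textbook leaky integrate-and-fire unit (a standard abstraction, not a model specified in this article) reproduces the basic integrate, fire, and reset behavior:

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# is driven up by input, and emits a spike and resets on crossing threshold.
# All parameters are assumed illustrative values.
dt = 0.1e-3        # time step, s
tau = 20e-3        # membrane time constant, s
v_rest = -70e-3    # resting potential, V
v_thresh = -54e-3  # spike threshold, V
v_reset = -70e-3   # post-spike reset potential, V
drive = 20e-3      # constant input (I*R), V

v, spike_times = v_rest, []
for step in range(10_000):                # simulate 1 second
    v += dt / tau * (v_rest - v + drive)  # leaky integration of the input
    if v >= v_thresh:                     # threshold crossing: action potential
        spike_times.append(step * dt)
        v = v_reset
print(len(spike_times), "spikes in 1 s")  # ~30 Hz regular firing with these values
```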
Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time. One of the primary functions of a brain is to extract biologically relevant information from sensory inputs. The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, head orientation, limb position, the chemical composition of the bloodstream, and more. In other animals additional senses may be present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense of some types of fish. Moreover, other animals may develop existing sensory systems in new ways, such as the adaptation by bats of the auditory sense into a form of sonar. One way or another, all of these sensory modalities are initially detected by specialized sensors that project signals into the brain. Each sensory system begins with specialized receptor cells, such as light-receptive neurons in the retina of the eye, vibration-sensitive neurons in the cochlea of the ear, or pressure-sensitive neurons in the skin. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract biologically relevant features, and integrated with signals coming from other sensory systems. Motor systems are areas of the brain that are directly or indirectly involved in producing body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control. The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. 
The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, basal ganglia, and cerebellum. The major areas involved in controlling movement are summarized below:

| Area | Location | Function |
| Ventral horn | Spinal cord | Contains motor neurons that directly activate muscles |
| Oculomotor nuclei | Midbrain | Contains motor neurons that directly activate the eye muscles |
| Cerebellum | Hindbrain | Calibrates precision and timing of movements |
| Basal ganglia | Forebrain | Action selection on the basis of motivation |
| Motor cortex | Frontal lobe | Direct cortical activation of spinal motor circuits |
| Premotor cortex | Frontal lobe | Groups elementary movements into coordinated patterns |
| Supplementary motor area | Frontal lobe | Sequences movements into temporal patterns |
| Prefrontal cortex | Frontal lobe | Planning and other executive functions |

In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system, which works by secreting hormones and by modulating the "smooth" muscles of the gut. The autonomic nervous system affects heart rate, digestion, respiration rate, salivation, perspiration, urination, sexual arousal, and several other processes. Most of its functions are not under direct voluntary control.

Perhaps the most obvious aspect of the behavior of any animal is the daily cycle between sleeping and waking. Arousal and alertness are also modulated on a finer time scale, though, by an extensive network of brain areas. A key component of the arousal system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours (circadian rhythms); these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock.

The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma.

Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM, and deep NREM.
During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern. For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.) In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity. According to evolutionary theory, all species are genetically programmed to act as though they have a goal of surviving and propagating offspring. At the level of an individual animal, this overarching goal of genetic fitness translates into a set of specific survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future. Every type of animal brain that has been studied uses a reward–punishment mechanism: even worms and insects can alter their behavior to seek food sources or to avoid dangers. 
In vertebrates, the reward–punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. There is substantial evidence that the basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced.

Learning and memory

Almost all animals are capable of modifying their behavior as a result of experience, even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Theorists dating back to Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until the early 1970s, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1973 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum.

Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways:

- Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another.
- Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories.
- Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information.
- Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia.
- Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition.
A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement. The brain does not simply grow, but rather develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away. For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the forebrain, midbrain, and hindbrain. At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions. Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding. The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. 
In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form. Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development. In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan. There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing—this is the nature versus nurture controversy. Although many details remain to be settled, neuroscience research has clearly shown that both factors are important. Genes determine the general form of the brain, and genes determine how the brain reacts to experience. Experience, however, is required to refine the matrix of synaptic connections, which in its developed form contains far more information than the genome does. In some respects, all that matters is the presence or absence of experience during critical periods of development. In other respects, the quantity and quality of experience are important; for example, there is substantial evidence that animals raised in enriched environments have thicker cerebral cortices, indicating a higher density of synaptic connections, than animals whose levels of stimulation are restricted. The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. 
The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy. The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior. Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients suffering from intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as functional magnetic resonance imaging are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive. Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior. Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. 
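As a concrete illustration of the first of these approaches, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest equation-based models of a neuron's electrical behaviour. The model form is textbook-standard; all constants and names below are illustrative, not taken from any particular study.

```python
# A minimal leaky integrate-and-fire neuron: the membrane voltage V decays
# toward a resting value and is pushed up by input current; when V crosses
# a threshold the neuron "fires" and V is reset.  Constants are illustrative,
# not fitted to any real cell type.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_threshold=-55.0, v_reset=-75.0):
    """Return a list of spike times (ms) for a given input current trace."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + I) / tau  (input resistance folded into I)
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif([20.0] * 1000)   # 100 ms of constant drive
print(len(spikes), "spikes; first few at", spikes[:3], "ms")
```

Larger simulations of this kind connect many such units with model synapses; more biologically realistic versions replace the single voltage equation with the coupled equations of the Hodgkin-Huxley type.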
On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists. Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times. Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. Hippocrates, the "father of medicine", came down unequivocally in favor of the brain. In his treatise on epilepsy he wrote: Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations. ... And by the same organ we become mad and delirious, and fears and terrors assail us, some by night, and some by day, and dreams and untimely wanderings, and cares that are not suitable, and ignorance of present circumstances, desuetude, and unskillfulness. All these things we endure from the brain, when it is not healthy... The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically. The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani, who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. 
Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity. In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep:

The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.

—Sherrington, 1942, Man on his Nature

In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research. In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; and genomics, which allows variations in brain structure to be correlated with variations in DNA properties.
"Neuronal circuits of the neocortex". Annual Review of Neuroscience 27: 419–451. doi:10.1146/annurev.neuro.27.070203.144152. PMID 15217339. - Barnett, MW; Larkman, PM (2007). "The action potential". Practical Neurology 7 (3): 192–197. PMID 17515599. - Principles of Neural Science, Ch.10, p. 175 - Principles of Neural Science, Ch. 10 - Shepherd, GM (2004). "Ch. 1: Introduction to synaptic circuits". The Synaptic Organization of the Brain. Oxford University Press US. ISBN 978-0-19-515956-1. - Williams, RW; Herrup, K (1988). "The control of neuron number". Annual Review of Neuroscience 11: 423–453. doi:10.1146/annurev.ne.11.030188.002231. PMID 3284447. - Heisenberg, M (2003). "Mushroom body memoir: from maps to models". Nature Reviews Neuroscience 4 (4): 266–275. doi:10.1038/nrn1074. PMID 12671643. - Principles of Neural Science, Ch. 2 - Jacobs, DK, Nakanishi N, Yuan D et al. (2007). "Evolution of sensory structures in basal metazoa". Integrative & Comparative Biology 47 (5): 712–723. doi:10.1093/icb/icm094. PMID 21669752. - Balavoine, G (2003). "The segmented Urbilateria: A testable scenario". Integrative & Comparative Biology 43 (1): 137–147. doi:10.1093/icb/43.1.137. - Schmidt-Rhaesa, A (2007). The Evolution of Organ Systems. Oxford University Press. p. 110. ISBN 978-0-19-856669-4. - Kristan Jr, WB; Calabrese, RL; Friesen, WO (2005). "Neuronal control of leech behavior". Prog Neurobiology 76 (5): 279–327. doi:10.1016/j.pneurobio.2005.09.004. PMID 16260077. - Mwinyi, A; Bailly, X; Bourlat, SJ; Jondelius, U; Littlewood, DT; Podsiadlowski, L (2010). "The phylogenetic position of Acoela as revealed by the complete mitochondrial genome of Symsagittifera roscoffensis". BMC Evolutionary Biology 10: 309. doi:10.1186/1471-2148-10-309. PMC 2973942. PMID 20942955. - Barnes, RD (1987). Invertebrate Zoology (5th ed.). Saunders College Pub. p. 1. ISBN 978-0-03-008914-5. - Butler, AB (2000). "Chordate Evolution and the Origin of Craniates: An Old Brain in a New Head". Anatomical Record 261 (3): 111–125. doi:10.1002/1097-0185(20000615)261:3<111::AID-AR6>3.0.CO;2-F. PMID 10867629. - Bulloch, TH; Kutch, W (1995). "Are the main grades of brains different principally in numbers of connections or also in quality?". In Breidbach O. The nervous systems of invertebrates: an evolutionary and comparative approach. Birkhäuser. p. 439. ISBN 978-3-7643-5076-5. - "Flybrain: An online atlas and database of the drosophila nervous system". Retrieved 2011-10-14. - Konopka, RJ; Benzer, S (1971). "Clock Mutants of Drosophila melanogaster". Proc Nat Acad Sci U.S.A. 68 (9): 2112–6. doi:10.1073/pnas.68.9.2112. PMC 389363. PMID 5002428. - Shin HS et a. (1985). "An unusual coding sequence from a Drosophila clock gene is conserved in vertebrates". Nature 317 (6036): 445–8. doi:10.1038/317445a0. PMID 2413365. - "WormBook: The online review of C. elegans biology". Retrieved 2011-10-14. - Hobert, O (2005). Specification of the nervous system. In The C. elegans Research Community. "Wormbook". WormBook: 1–19. doi:10.1895/wormbook.1.12.1. PMID 18050401. - White, JG; Southgate, E; Thomson, JN; Brenner, S (1986). "The Structure of the Nervous System of the Nematode Caenorhabditis elegans". Phil. Trans. Roy. Soc. London (Biology) 314 (1165): 1–340. doi:10.1098/rstb.1986.0056. - Hodgkin, J (2001). "Caenorhabditis elegans". In Brenner S, Miller JH. Encyclopedia of Genetics. Elsevier. pp. 251–256. ISBN 978-0-12-227080-2. - Kandel, ER (2007). In Search of Memory: The Emergence of a New Science of Mind. WW Norton. pp. 145–150. 
ISBN 978-0-393-32937-7. - Shu, DG; Morris, SC; Han, J; Zhang, Z-F; Yasui, K.; Janvier, P.; Chen, L.; Zhang, X.-L. et al. (2003). "Head and backbone of the Early Cambrian vertebrate Haikouichthys". Nature 421 (6922): 526–529. doi:10.1038/nature01264. PMID 12556891. - Striedter, GF (2005). "Ch. 3: Conservation in vertebrate brains". Principles of Brain Evolution. Sinauer Associates. ISBN 978-0-87893-820-9. - Armstrong, E (1983). "Relative brain size and metabolism in mammals". Science 220 (4603): 1302–1304. doi:10.1126/science.6407108. PMID 6407108. - Jerison, HJ (1973). Evolution of the Brain and Intelligence. Academic Press. pp. 55–74. ISBN 978-0-12-385250-2. - Principles of Neural Science, p. 1019 - Principles of Neural Science, Ch. 17 - Parent, A; Carpenter, MB (1995). "Ch. 1". Carpenter's Human Neuroanatomy. Williams & Wilkins. ISBN 978-0-683-06752-1. - Northcutt, RG (2008). "Forebrain evolution in bony fishes". Brain Research Bulletin 75 (2–4): 191–205. doi:10.1016/j.brainresbull.2007.10.058. PMID 18331871. - Reiner, A; Yamamoto, K; Karten, HJ (2005). "Organization and evolution of the avian forebrain". The Anatomical Record Part A 287 (1): 1080–1102. doi:10.1002/ar.a.20253. PMID 16206213. - Principles of Neural Science, Chs. 44, 45 - Siegel, A; Sapru, HN (2010). Essential Neuroscience. Lippincott Williams & Wilkins. pp. 184–189. ISBN 978-0-7817-8383-5. - Swaab, DF; Boller, F; Aminoff, MJ (2003). The Human Hypothalamus. Elsevier. ISBN 978-0-444-51357-1. - Jones, EG (1985). The Thalamus. Plenum Press. ISBN 978-0-306-41856-3. - Principles of Neural Science, Ch. 42 - Saitoh, K; Ménard, A; Grillner, S (2007). "Tectal control of locomotion, steering, and eye movements in lamprey". Journal of Neurophysiology 97 (4): 3093–3108. doi:10.1152/jn.00639.2006. PMID 17303814. - Puelles, L (2001). "Thoughts on the development, structure and evolution of the mammalian and avian telencephalic pallium". Phil. Trans. Roy. Soc. London B (Biological Sciences) 356 (1414): 1583–1598. doi:10.1098/rstb.2001.0973. PMC 1088538. PMID 11604125. - Salas, C; Broglio, C; Rodríguez, F (2003). "Evolution of forebrain and spatial cognition in vertebrates: conservation across diversity". Brain, Behavior and Evolution 62 (2): 72–82. doi:10.1159/000072438. PMID 12937346. - Grillner, S et al. (2005). "Mechanisms for selection of basic motor programs—roles for the striatum and pallidum". Trends in Neurosciences 28 (7): 364–370. doi:10.1016/j.tins.2005.05.004. PMID 15935487. - Northcutt, RG (1981). "Evolution of the telencephalon in nonmammals". Annual Review of Neuroscience 4: 301–350. doi:10.1146/annurev.ne.04.030181.001505. PMID 7013637. - Northcutt, RG (2002). "Understanding vertebrate brain evolution". Integrative & Comparative Biology 42 (4): 743–756. doi:10.1093/icb/42.4.743. PMID 21708771. - Barton, RA; Harvey, PH (2000). "Mosaic evolution of brain structure in mammals". Nature 405 (6790): 1055–1058. doi:10.1038/35016580. PMID 10890446. - Aboitiz, F; Morales, D; Montiel, J (2003). "The evolutionary origin of the mammalian isocortex: Towards an integrated developmental and functional approach". Behavioral and Brain Sciences 26 (5): 535–552. doi:10.1017/S0140525X03000128. PMID 15179935. - Romer, AS; Parsons, TS (1977). The Vertebrate Body. Holt-Saunders International. p. 531. ISBN 0-03-910284-X. - Roth, G; Dicke, U (2005). "Evolution of the brain and Intelligence". Trends in Cognitive Sciences 9 (5): 250–257. doi:10.1016/j.tics.2005.03.005. PMID 15866152. - Marino, Lori (2004). 
"Cetacean Brain Evolution: Multiplication Generates Complexity" (PDF). International Society for Comparative Psychology (17): 1–16. Retrieved 2010-08-29. - Shoshani, J; Kupsky, WJ; Marchant, GH (2006). "Elephant brain Part I: Gross morphology, functions, comparative anatomy, and evolution". Brain Research Bulletin 70 (2): 124–157. doi:10.1016/j.brainresbull.2006.03.016. PMID 16782503. - Finlay, BL; Darlington, RB; Nicastro, N (2001). "Developmental structure in brain evolution". Behavioral and Brain Sciences 24 (2): 263–308. doi:10.1017/S0140525X01003958. PMID 11530543. - Calvin, WH (1996). How Brains Think. Basic Books. ISBN 978-0-465-07278-1. - Sereno, MI; Dale, AM; Reppas, AM; Kwong, KK; Belliveau, JW; Brady, TJ; Rosen, BR; Tootell, RBH (1995). "Borders of multiple visual areas in human revealed by functional magnetic resonance imaging". Science (AAAS) 268 (5212): 889–893. doi:10.1126/science.7754376. PMID 7754376. - Fuster, JM (2008). The Prefrontal Cortex. Elsevier. pp. 1–7. ISBN 978-0-12-373644-4. - Principles of Neural Science, Ch. 15 - Cooper, JR; Bloom, FE; Roth, RH (2003). The Biochemical Basis of Neuropharmacology. Oxford University Press US. ISBN 978-0-19-514008-8. - McGeer, PL; McGeer, EG (1989). "Chapter 15, Amino acid neurotransmitters". In G. Siegel et al. Basic Neurochemistry. Raven Press. pp. 311–332. ISBN 978-0-88167-343-2. - Foster, AC; Kemp, JA (2006). "Glutamate- and GABA-based CNS therapeutics". Current Opinion in Pharmacology 6 (1): 7–17. doi:10.1016/j.coph.2005.11.005. PMID 16377242. - Frazer, A; Hensler, JG (1999). "Understanding the neuroanatomical organization of serotonergic cells in the brain provides insight into the functions of this neurotransmitter". In Siegel, GJ. Basic Neurochemistry (Sixth ed.). Lippincott Williams & Wilkins. ISBN 0-397-51820-X. - Mehler, MF; Purpura, DP (2009). "Autism, fever, epigenetics and the locus coeruleus". Brain Research Reviews 59 (2): 388–392. doi:10.1016/j.brainresrev.2008.11.001. PMC 2668953. PMID 19059284. - Rang, HP (2003). Pharmacology. Churchill Livingstone. pp. 476–483. ISBN 0-443-07145-4. - Speckmann, E-J; Elger, CE (2004). "Introduction to the neurophysiological basis of the EEG and DC potentials". In Niedermeyer E, Lopes da Silva FH. Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins. pp. 17–31. ISBN 0-7817-5126-8. - Buzsáki, G (2006). Rhythms of the Brain. Oxford University Press. ISBN 978-0-19-530106-9. OCLC 63279497. - Nieuwenhuys, R; Donkelaar, HJ; Nicholson, C (1998). The Central Nervous System of Vertebrates, Volume 1. Springer. pp. 11–14. ISBN 978-3-540-56013-5. - Safi, K; Seid, MA; Dechmann, DK (2005). "Bigger is not always better: when brains get smaller". Biology Letters 1 (3): 283–286. doi:10.1098/rsbl.2005.0333. PMC 1617168. PMID 17148188. - Mink, JW; Blumenschine, RJ; Adams, DB (1981). "Ratio of central nervous system to body metabolism in vertebrates: its constancy and functional basis". American Journal of Physiology 241 (3): R203–212. PMID 7282965. - Raichle, M; Gusnard, DA (2002). "Appraising the brain's energy budget". Proc. Nat. Acad. Sci. U.S.A. 99 (16): 10237–10239. doi:10.1073/pnas.172399499. PMC 124895. PMID 12149485. - Mehagnoul-Schipper, DJ; van der Kallen, BF; Colier, WNJM; van der Sluijs, MC; van Erning, LJ; Thijssen, HO; Oeseburg, B; Hoefnagels, WH et al. (2002). 
"Simultaneous measurements of cerebral oxygenation changes during brain activation by near-infrared spectroscopy and functional magnetic resonance imaging in healthy young and elderly subjects.". Hum Brain Mapp 16 (1): 14–23. doi:10.1002/hbm.10026. - Soengas, JL; Aldegunde, M (2002). "Energy metabolism of fish brain". Comparative Biochemistry and Physiology Part B: Biochemistry and Molecular Biology 131 (3): 271–296. doi:10.1016/S1096-4959(02)00022-2. PMID 11959012. - Carew, TJ (2000). "Ch. 1". Behavioral Neurobiology: the Cellular Organization of Natural Behavior. Sinauer Associates. ISBN 978-0-87893-092-0. - Churchland, PS; Koch, C; Sejnowski, TJ (1993). "What is computational neuroscience?". In Schwartz EL. Computational Neuroscience. MIT Press. pp. 46–55. ISBN 978-0-262-69164-2. - von Neumann, J; Churchland, PM; Churchland, PS (2000). The Computer and the Brain. Yale University Press. pp. xi–xxii. ISBN 978-0-300-08473-3. - Lettvin, JY; Maturana, HR; McCulloch, WS; Pitts, WH (1959). "What the frog's eye tells the frog's brain" (pdf). Proceedings of the Institute of Radio Engineering 47: 1940–1951. - Hubel, DH; Wiesel, TN (2005). Brain and visual perception: the story of a 25-year collaboration. Oxford University Press US. pp. 657–704. ISBN 978-0-19-517618-6. - Farah, MJ (2000). The Cognitive Neuroscience of Vision. Wiley-Blackwell. pp. 1–29. ISBN 978-0-631-21403-8. - Engel, AK; Singer, W (2001). "Temporal binding and the neural correlates of sensory awareness". Tends in Cognitive Sciences 5 (1): 16–25. doi:10.1016/S1364-6613(00)01568-0. PMID 11164732. - Dayan, P; Abbott, LF (2005). "Ch.7: Network models". Theoretical Neuroscience. MIT Press. ISBN 978-0-262-54185-5. - Averbeck, BB; Lee, D (2004). "Coding and transmission of information by neural ensembles". Trends in Neurosciences 27 (4): 225–230. doi:10.1016/j.tins.2004.02.006. PMID 15046882. - Principles of Neural Science, Ch. 21 - Principles of Neural Science, Ch. 34 - Principles of Neural Science, Chs. 36, 37 - Principles of Neural Science, Ch. 33 - Dafny, N. "Anatomy of the spinal cord". Neuroscience Online. Retrieved 2011-10-10. - Dragoi, V. "Ocular motor system". Neuroscience Online. Retrieved 2011-10-10. - Gurney, K; Prescott, TJ; Wickens, JR; Redgrave, P (2004). "Computational models of the basal ganglia: from robots to membranes". Trends in Neurosciences 27 (8): 453–459. doi:10.1016/j.tins.2004.06.003. PMID 15271492. - Principles of Neural Science, Ch. 38 - Shima, K; Tanji, J (1998). "Both supplementary and presupplementary motor areas are crucial for the temporal organization of multiple movements". Journal of Neurophysiology 80 (6): 3247–3260. PMID 9862919. - Miller, EK; Cohen, JD (2001). "An integrative theory of prefrontal cortex function". Annual Review of Neuroscience 24 (1): 167–202. doi:10.1146/annurev.neuro.24.1.167. PMID 11283309. - Principles of Neural Science, Ch. 49 - Principles of Neural Science, Ch. 45 - Antle, MC; Silver, R (2005). "Orchestrating time: arrangements of the brain circadian clock" (PDF). Trends in Neurosciences 28 (3): 145–151. doi:10.1016/j.tins.2005.01.003. PMID 15749168. - Principles of Neural Science, Ch. 47 - Kleitman, N (1938, revised 1963, reprinted 1987). Sleep and Wakefulness. The University of Chicago Press, Midway Reprints series. ISBN 0-226-44073-7. - Dougherty, P. "Hypothalamus: structural organization". Neuroscience Online. Retrieved 2011-10-11. - Gross, CG (1998). "Claude Bernard and the constancy of the internal environment" (PDF). The Neuroscientist 4 (5): 380–385. 
doi:10.1177/107385849800400520. - Dougherty, P. "Hypothalamic control of pituitary hormone". Neuroscience Online. Retrieved 2011-10-11. - Chiel, HJ; Beer, RD (1997). "The brain has a body: adaptive behavior emerges from interactions of nervous system, body, and environment". Trends in Neurosciences 20 (12): 553–557. doi:10.1016/S0166-2236(97)01149-1. PMID 9416664. - Berridge, KC (2004). "Motivation concepts in behavioral neuroscience". Physiology & Behavior 8 (2): 179–209. doi:10.1016/j.physbeh.2004.02.004. PMID 15159167. - Ardiel, EL; Rankin, CH (2010). "An elegant mind: learning and memory in Caenorhabditis elegans". Learning and Memory 17 (4): 191–201. doi:10.1101/lm.960510. PMID 20335372. - Hyman, SE; Malenka, RC (2001). "Addiction and the brain: the neurobiology of compulsion and its persistence". Nature Reviews Neuroscience 2 (10): 695–703. doi:10.1038/35094560. PMID 11584307. - Ramón y Cajal, S (1894). "The Croonian Lecture: La Fine Structure des Centres Nerveux". Proceedings of the Royal Society of London 55 (331–335): 444–468. doi:10.1098/rspl.1894.0063. - Lømo, T (2003). "The discovery of long-term potentiation". Phil. Trans. Roy. Soc. London B (Biological Sciences) 358 (1432): 617–620. doi:10.1098/rstb.2002.1226. PMC 1693150. PMID 12740104. - Malenka, R; Bear, M (2004). "LTP and LTD: an embarrassment of riches". Neuron 44 (1): 5–21. doi:10.1016/j.neuron.2004.09.012. PMID 15450156. - Curtis, CE; D'Esposito, M (2003). "Persistent activity in the prefrontal cortex during working memory". Trends in Cognitive Sciences 7 (9): 415–423. doi:10.1016/S1364-6613(03)00197-9. PMID 12963473. - Tulving, E; Markowitsch, HJ (1998). "Episodic and declarative memory: role of the hippocampus". Hippocampus 8 (3): 198–204. doi:10.1002/(SICI)1098-1063(1998)8:3<198::AID-HIPO2>3.0.CO;2-G. PMID 9662134. - Martin, A; Chao, LL (2001). "Semantic memory and the brain: structures and processes". Current Opinion in Neurobiology 11 (2): 194–201. doi:10.1016/S0959-4388(00)00196-3. PMID 11301239. - Balleine, BW; Liljeholm, Mimi; Ostlund, SB (2009). "The integrative function of the basal ganglia in instrumental learning". Behavioral Brain Research 199 (1): 43–52. doi:10.1016/j.bbr.2008.10.034. PMID 19027797. - Doya, K (2000). "Complementary roles of basal ganglia and cerebellum in learning and motor control". Current Opinion in Neurobiology 10 (6): 732–739. doi:10.1016/S0959-4388(00)00153-7. PMID 11240282. - Principles of Neural Development, Ch. 1 - Principles of Neural Development, Ch. 4 - Principles of Neural Development, Chs. 5, 7 - Principles of Neural Development, Ch. 12 - Wong, R (1999). "Retinal waves and visual system development". Annual Review of Neuroscience 22: 29–47. doi:10.1146/annurev.neuro.22.1.29. PMID 10202531. - Principles of Neural Development, Ch. 6 - Rakic, P (2002). "Adult neurogenesis in mammals: an identity crisis". J. Neuroscience 22 (3): 614–618. PMID 11826088. - Ridley, M (2003). Nature via Nurture: Genes, Experience, and What Makes Us Human. Forth Estate. pp. 1–6. ISBN 978-0-06-000678-5. - Wiesel, T (1982). "Postnatal development of the visual cortex and the influence of environment" (PDF). Nature 299 (5884): 583–591. doi:10.1038/299583a0. PMID 6811951. - van Praag, H; Kempermann, G; Gage, FH (2000). "Neural consequences of environmental enrichment". Nature Reviews Neuroscience 1 (3): 191–198. doi:10.1038/35044558. PMID 11257907. - Principles of Neural Science, Ch. 1 - Storrow, HA (1969). Outline of Clinical Psychiatry. Appleton-Century-Crofts. pp. 27–30. - Thagard, P (2008). 
Zalta, EN, ed. "Cognitive Science". The Stanford Encyclopedia of Philosophy. Retrieved 2011-10-14. - Bear, MF; Connors, BW; Paradiso, MA (2007). "Ch. 2". Neuroscience: Exploring the Brain. Lippincott Williams & Wilkins. ISBN 978-0-7817-6003-4. - Dowling, JE (2001). Neurons and Networks. Harvard University Press. pp. 15–24. ISBN 978-0-674-00462-7. - Wyllie, E; Gupta, A; Lachhwani, DK (2005). "Ch. 77". The Treatment of Epilepsy: Principles and Practice. Lippincott Williams & Wilkins. ISBN 978-0-7817-4995-4. - Laureys, S; Boly, M; Tononi, G (2009). "Functional neuroimaging". In Laureys S, Tononi G. The Neurology of Consciousness: Cognitive Neuroscience and Neuropathology. Academic Press. pp. 31–42. ISBN 978-0-12-374168-4. - Carmena, JM et al. (2003). "Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates". PLoS Biology 1 (2): 193–208. doi:10.1371/journal.pbio.0000042. PMC 261882. PMID 14624244. - Kolb, B; Whishaw, I (2008). "Ch. 1". Fundamentals of Human Neuropsychology. Macmillan. ISBN 978-0-7167-9586-5. - Abbott, LF; Dayan, P (2001). "Preface". Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press. ISBN 978-0-262-54185-5. - Tonegawa, S; Nakazawa, K; Wilson, MA (2003). "Genetic neuroscience of mammalian learning and memory". Phil. Trans. Roy. Soc. London B (Biological Sciences) 358 (1432): 787–795. doi:10.1098/rstb.2002.1243. PMC 1693163. PMID 12740125. - Finger, S (2001). Origins of Neuroscience. Oxford University Press. pp. 14–15. ISBN 978-0-19-514694-3. - Finger, S (2001). Origins of Neuroscience. Oxford University Press. pp. 193–195. ISBN 978-0-19-514694-3. - Bloom, FE (1975). Schmidt FO, Worden FG, Swazey JP, Adelman G, ed. The Neurosciences, Paths of Discovery. MIT Press. p. 211. ISBN 978-0-262-23072-8. - Shepherd, GM (1991). "Ch.1 : Introduction and Overview". Foundations of the Neuron Doctrine. Oxford University Press. ISBN 978-0-19-506491-9. - Piccolino, M (2002). "Fifty years of the Hodgkin-Huxley era". Trends in Neurosciences 25 (11): 552–553. doi:10.1016/S0166-2236(02)02276-2. PMID 12392928. - Sherrington, CS (1942). Man on his nature. Cambridge University Press. p. 178. ISBN 978-0-8385-7701-1. - Jones, EG; Mendell, LM (1999). "Assessing the Decade of the Brain". Science 284 (5415): 739. doi:10.1126/science.284.5415.739. PMID 10336393. - Buzsáki, G (2004). "Large-scale recording of neuronal ensembles". Nature Neuroscience 7 (5): 446–451. doi:10.1038/nn1233. PMID 15114356. - Geschwind, DH; Konopka, G (2009). "Neuroscience in the era of functional genomics and systems biology". Nature 461 (7266): 908–915. doi:10.1038/nature08537. PMID 19829370. |Wikiquote has a collection of quotations related to: Brain| |Wikimedia Commons has media related to: Brain| - Brain Museum, comparative mammalian brain collection - BrainInfo, neuroanatomy database - Neuroscience for Kids - BrainMaps.org, interactive high-resolution digital brain atlas of primate and non-primate brains - The Brain from Top to Bottom, at McGill University - The HOPES Brain Tutorial, at Stanford University
http://en.wikipedia.org/wiki/Brain
13
112
In cryptography, block ciphers are one of the two main types of symmetric cipher; they operate on fixed-size blocks of plaintext, giving a block of ciphertext for each. The other main type is the stream cipher, which generates a continuous stream of keying material to be mixed with messages.

The basic function of block ciphers is to keep messages or stored data secret; the intent is that an unauthorised person be completely unable to read the enciphered material. Block ciphers therefore use a key and are designed to be hard to read without that key. Of course an attacker's intent is exactly the opposite; he wants to read the material without authorisation, and often without the key. See cryptanalysis for his methods.

Among the best-known and most widely used block ciphers are two US government standards. The Data Encryption Standard (DES) from the 1970s is now considered obsolete; the Advanced Encryption Standard (AES) replaced it in 2002. To choose the new standard, the National Institute of Standards and Technology ran an AES competition. Fifteen ciphers were entered, five finalists selected, and eventually AES chosen. The text below gives an overview; for details of the process and the criteria, and descriptions of all fifteen candidates, see the AES competition article.

These standards greatly influenced the design of other block ciphers, and the latter part of this article is divided into sections on that basis. DES and alternatives describes 20th century block ciphers, all with the 64-bit block size of DES. The AES generation describes the next generation, the first 21st century ciphers, all with the 128-bit block size of AES. Large-block ciphers covers a few special cases that do not fit in the other sections.

Block ciphers are essential components in many security systems. However, just having a good block cipher does not give you security, much as just having good tires does not give you transportation. It may not even help; good tires are of little use if you need a boat. Even in systems where block ciphers are needed, they are never the whole story. This section gives an overview of the rest of the story; it aims to provide a context for the rest of the article by mentioning some issues that, while outside the study of the ciphers themselves, are crucially important in understanding and using these ciphers.

Any cipher is worthless without a good key. Keys must be kept secure; they should be large enough and sufficiently random that searching for the key (a brute force attack) is effectively impossible; and in any application which encrypts large volumes of data, the key must be changed from time to time. See the cryptography article for discussion.

It is hard to design any system that must withstand adversaries; see cryptography is difficult. In particular, block ciphers must withstand cryptanalysis; it is impossible to design a good block cipher, or to evaluate the security of one, without a thorough understanding of the available attack methods. Also, Kerckhoffs' Principle applies to block ciphers; no cipher can be considered secure unless it can resist an attacker who knows all its details except the key in use.
Analysis of security claims cannot even begin until all internal details of a cipher are published, so anyone making security claims without publishing those details will be either ignored or mocked by most experts.

A block cipher defines how a single block is encrypted; a mode of operation defines how multiple block encryptions are combined to achieve some larger goal. Using a mode that is inappropriate for the application at hand may lead to insecurity, even if the cipher itself is secure.

A block cipher can be used to build another cryptographic function such as a random number generator, a stream cipher, or a cryptographic hash. These are primarily a matter of choosing the correct mode, but there are more general design issues as well; see the linked articles for details.

Block ciphers are often used as components in hybrid cryptosystems; these combine public key (asymmetric) cryptography with secret key (symmetric) techniques such as block ciphers or stream ciphers. Typically, the symmetric cipher is the workhorse that encrypts large amounts of data; the public key mechanism manages keys for the symmetric cipher and provides authentication. Generally other components such as cryptographic hashes and a cryptographically strong random number generator are required as well. Such a system can only be as strong as its weakest link, and it may not even be that strong. Using secure components including good block ciphers is certainly necessary, but just having good components does not guarantee that the system will be secure. See hybrid cryptosystem for how the components fit together, and information security for broader issues.

That said, we turn to the block ciphers themselves. One could say there are only three things to worry about in designing a block cipher:

- make the block size large enough that an enemy cannot create a code book, collecting so many known plaintext/ciphertext pairs that the cipher is broken
- make the key size large enough that he cannot use a brute force attack, trying all possible keys
- then design the cipher well enough that no other attack is effective

Getting adequate block size and key size is the easy part; just choose large enough numbers. This section describes how those choices are made. Making ciphers that resist attacks that are cleverer than brute force (see cryptanalysis) is far more difficult. The following section, Principles and techniques, covers ideas and methods for that.

Later on, we describe two generations of actual ciphers. The 20th century ciphers use 64-bit blocks and key sizes from 56 bits up. The 21st century ciphers use 128-bit blocks and 128-bit or larger keys.

If two or more ciphers use the same block and key sizes, they are effectively interchangeable. One can replace another in almost any application without requiring any other change to the application. This might be done to comply with a particular government's standards, to replace a cipher against which some new attack had been discovered, to provide efficiency in a particular environment, or simply to suit a preference. Nearly all cryptographic libraries give a developer a choice of components, and some protocols such as IPsec allow a network administrator to select ciphers. This may be a good idea if all the available ciphers are strong, but if some are weak it just gives the developer or administrator, neither of whom is likely to be an expert on ciphers, an opportunity to get it wrong. There is an argument that supporting multiple ciphers is an unnecessary complication.
On the other hand, being able to change ciphers easily if one is broken provides a valuable safety mechanism. Striking some sort of balance with a few strong ciphers is probably the best policy.

The block size of a cipher is chosen partly for implementation convenience; using a multiple of 32 bits makes software implementations simpler. However, it must also be large enough to guard against code book attacks. DES and the generation of ciphers that followed it all used a 64-bit block size. To weaken such a cipher significantly the attacker must build up a code book with 2^32 blocks, 32 gigabytes of data, all encrypted with the same key. As long as the cipher user changes keys reasonably often, a code book attack is not a threat. Procedures and protocols for block cipher usage therefore always include a re-keying policy. However, with Moore's Law making larger code books more practical, NIST chose to play it safe in their AES specifications; they used a 128-bit block size. This was a somewhat controversial innovation at the time (1998), since it meant changes to a number of applications and it was not absolutely clear that the larger size was necessary. However, it has since become common practice; later ciphers such as Camellia, SEED and ARIA also use 128 bits. There are also a few ciphers which either support variable block size or have a large fixed block size. See the section on large-block ciphers for details.

In theory, any cipher except a one-time pad can be broken by a brute force attack; the enemy just has to try keys until he finds the right one. However, the attack is practical only if the cipher's key size is inadequate. If the key uses n bits, there are 2^n possible keys and on average the attacker must test half of them, so the average cost of the attack is 2^(n-1) encryptions. Current block ciphers all use at least 128-bit keys, which makes brute force attacks utterly impractical. Suppose an attacker has a billion processors in a monster parallel machine (several orders of magnitude more than any current machine) and each processor can test a billion keys a second (also a generous estimate; if the clock is k GHz, the processor must do an encryption in k cycles to achieve this). This amazingly powerful attacker can test about 2^60 keys a second, so he needs 2^67 seconds against a 128-bit key. There are about 2^25 seconds in a year, so that is about 2^42 years. This is over 4,000,000,000,000 (four trillion) years, so the cipher is clearly secure against brute force. Many ciphers support larger keys as well; the reasons are discussed in the brute force attack article.

Principles and techniques

This section introduces the main principles of block cipher design, defines standard terms, and describes common techniques.

Iterated block ciphers

Nearly all block ciphers are iterated block ciphers; they have multiple rounds, each applying the same transformation to the output of the previous round. At setup time, a number of round keys or subkeys are computed from the primary key; the method used is called the cipher's key schedule. In the actual encryption or decryption, each round uses its own round key. This allows the designer to define some relatively simple transformation, called a round function, and apply it repeatedly to create a cipher with enough overall complexity to thwart attacks. A sketch of this structure in code follows.
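The sketch below is a deliberately toy design, not any published cipher: the 64-bit block, the key schedule and the round function are all invented for illustration, and this particular round function is not invertible, so only encryption is shown.

```python
BLOCK_BITS = 64
MASK = (1 << BLOCK_BITS) - 1

def key_schedule(primary_key, rounds):
    # Toy key schedule: derive one 64-bit round key per round by mixing the
    # primary key with the round number.  Real key schedules are far more
    # careful; this only illustrates where round keys come from.
    return [((primary_key * (2 * r + 1)) ^ (r * 0x9E3779B97F4A7C15)) & MASK
            for r in range(rounds)]

def round_function(state, round_key):
    # Toy round: key mixing, a crude nonlinear step, then a rotation for
    # diffusion.  This round is not invertible, so a real cipher would need
    # either a reversible round or the Feistel trick described below.
    state = (state + round_key) & MASK
    state ^= (state * state | 1) & MASK
    return ((state << 7) | (state >> (BLOCK_BITS - 7))) & MASK

def encrypt_block(plaintext, primary_key, rounds=16):
    # The iterated-cipher skeleton: the same transformation, applied
    # repeatedly, each time with its own round key.
    state = plaintext
    for rk in key_schedule(primary_key, rounds):
        state = round_function(state, rk)
    return state

print(hex(encrypt_block(0x0123456789ABCDEF, 0xDEADBEEF)))
```

Everything security-relevant lives in the round function and the key schedule; the surrounding loop looks much the same in cipher after cipher.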
Three common ways to design iterated block ciphers — SP networks, Feistel structures and the Lai-Massey construction — and two important ways to look at the complexity requirements — avalanche and nonlinearity — are covered in following sections.

Any iterated cipher can be made more secure by increasing the number of rounds or made faster by reducing the number. In choosing the number of rounds, the cipher designer tries to strike a balance that achieves both security and efficiency. Often a safety margin is applied; if the cipher appears to be secure after a certain number of rounds, the designer specifies a somewhat larger number for actual use.

There is a trade-off that can be made in the design. With a simple fast round function, many rounds may be required to achieve adequate security; for example, GOST and TEA both use 32 rounds. A more complex round function might allow fewer rounds; for example, IDEA uses only 8 rounds. Since the ciphers with fast round functions generally need more rounds and the ones with few rounds generally need slower round functions, neither strategy is clearly better. Secure and reasonably efficient ciphers can be designed either way, and compromises are common.

In cryptanalysis it is common to attack reduced round versions of a cipher. For example, in attacking a 16-round cipher, the analyst might start by trying to break a two-round or four-round version. Such attacks are much easier. Success against the reduced round version may lead to insights that are useful in work against the full cipher, or even to an attack that can be extended to break the full cipher.

Whitening and tweaking

Nearly all block ciphers use the same basic design, an iterated block cipher with multiple rounds. However, some have additional things outside that basic structure.

Whitening involves mixing additional material derived from the key into the plaintext before the first round, or into the ciphertext after the last round, or both. The technique was introduced by Ron Rivest in DES-X and has since been used in other ciphers such as RC6, Blowfish and Twofish. If the whitening material uses additional key bits, as in DES-X, then this greatly increases resistance to brute force attacks because of the larger key. If the whitening material is derived from the primary key during key scheduling, then resistance to brute force is not increased since the primary key remains the same size. However, using whitening is generally much cheaper than adding a round, and it does increase resistance to other attacks; see papers cited for DES-X. A code sketch of whitening appears at the end of this section.

A recent development is the tweakable block cipher. Where a normal block cipher has only two inputs, plaintext and key, a tweakable block cipher has a third input called the tweak. The tweak, along with the key, controls the operation of the cipher. Whitening can be seen as one form of tweaking, but many others are possible. If changing tweaks is sufficiently lightweight, compared to the key scheduling operation which is often fairly expensive, then some new modes of operation become possible. Unlike the key, the tweak need not always be secret, though it should be somewhat random and in some applications it should change from block to block. Tweakable ciphers and the associated modes are an active area of current research. The Hasty Pudding Cipher was one of the first tweakable ciphers, pre-dating the Tweakable Block Ciphers paper and referring to what would now be called the tweak as "spice".
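The whitening sketch promised above. The DES-X formula shown, C = k3 ⊕ E_k1(P ⊕ k2), is the real construction; the core cipher below is a hash-based stand-in invented for the example (a hash is not decryptable), so only the structure, where the whitening keys enter, is meaningful.

```python
import hashlib

def toy_core_cipher(block, key):
    # Stand-in for a real 64-bit block cipher such as DES: hash key and
    # block together and truncate to 64 bits.  Not decryptable; it exists
    # only so the whitening structure can be shown.
    data = key.to_bytes(8, "big") + block.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def encrypt_whitened(plaintext, k1, k2, k3):
    # DES-X-style whitening: XOR key material into the plaintext before
    # the core cipher and into the ciphertext after it.
    return k3 ^ toy_core_cipher(plaintext ^ k2, k1)

print(hex(encrypt_whitened(0x0123456789ABCDEF, 0x1111, 0x2222, 0x3333)))
```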
Avalanche

The designer wants changes to quickly propagate through the cipher. This was named the avalanche effect in a paper by Horst Feistel. The idea is that changes should build up like an avalanche, so that a tiny initial change (consider a snowball tossed onto a mountain) quickly creates large effects. The term and its exact application were new, but the basic concept was not; avalanche is a variant of Claude Shannon's diffusion, and that in turn is a formalisation of ideas that were already in use.

If a single bit of input or of the round key is changed at round r, that should affect all bits of the ciphertext by round r + k for some reasonably small k. Ideally, k would be 1, but this is not generally achieved in practice. Certainly k must be much less than the total number of rounds; if k is large, then the cipher will need more rounds to be secure. (An empirical test of avalanche is sketched in code at the end of this section.)

The strict avalanche criterion is a strong version of the requirement for good avalanche properties. Complementing any single bit of the input or the key should give exactly a 50% chance of a change in any given bit of output.

In Claude Shannon's terms, a cipher needs both confusion and diffusion, and a general design principle is that of the product cipher which combines several operations to achieve both goals. This goes back to the combination of substitution and transposition in various classical ciphers from before the advent of computers. All modern block ciphers are product ciphers.

Two structures are very commonly used in building block ciphers — SP networks and the Feistel structure. The Lai-Massey construction is a third alternative, less common than the other two. In Shannon's terms, all of these are product ciphers. Any of these structures is a known quantity for a cipher designer, part of the toolkit. He or she gets big chunks of a design — an overall cipher structure with a well-defined hole for the round function to fit into — from the structure. This leaves him or her free to concentrate on the hard part, designing the actual round function. None of these structures gives ideal avalanche in a single round but, with any reasonable round function, all give excellent avalanche after a few rounds. Not all block ciphers use one of these structures, but most do. This section describes these common structures.

A substitution-permutation network or SP network or SPN is Shannon's own design for a product cipher. It uses two layers in each round: a substitution layer provides confusion, then a permutation layer provides diffusion. The S-layer typically uses look-up tables called substitution boxes or S-boxes, though other mechanisms are also possible. The input is XOR-ed with a round key, split into parts and each part used as an index into an S-box. The S-box output then replaces that part so the combined S-box outputs become the S-layer output. S-boxes are discussed in more detail in their own section below. The P-layer permutes the resulting bits, providing diffusion or, in Feistel's terms, helping to ensure avalanche.

A single round of an SP network does not provide ideal avalanche; output bits are affected only by inputs to their S-box, not by all input bits. However, the P-layer ensures that the output of one S-box in one round will affect several S-boxes in the next round so, after a few rounds, overall avalanche properties can be very good.

Another way to build an iterated block cipher is to use the Feistel structure. This technique was devised by Horst Feistel of IBM and used in DES. Such ciphers are known as Feistel ciphers or Feistel networks.
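The empirical avalanche test promised above: flip one input bit and count how many output bits change. The sketch uses truncated SHA-256 as a stand-in for a cipher under a fixed key, since a hash function is built for avalanche and is available in Python's standard library; to test a real cipher, substitute its encryption function.

```python
import hashlib
import random

def f(block64):
    # Stand-in 64-bit-to-64-bit function: SHA-256, truncated.  Replace
    # with a real cipher's encryption under a fixed key to test it.
    return int.from_bytes(
        hashlib.sha256(block64.to_bytes(8, "big")).digest()[:8], "big")

def avalanche(trials=200):
    rng = random.Random(1)
    total = 0
    for _ in range(trials):
        x = rng.getrandbits(64)
        flipped = x ^ (1 << rng.randrange(64))
        # Flip one input bit; count how many of the 64 output bits change.
        total += bin(f(x) ^ f(flipped)).count("1")
    return total / trials

print(avalanche())  # the ideal is about 32, i.e. a 50% chance per output bit
```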
In Shannon's terms, Feistel ciphers are another class of product cipher. Feistel ciphers are sometimes referred to as Luby-Rackoff ciphers after the authors of a theoretical paper analyzing some of their properties. Later work based on that shows that a Feistel cipher with seven rounds can be secure.

In a Feistel cipher, each round uses an operation called the F-function whose input is half a block and a round key; the output is a half-block of scrambled data which is XOR-ed into the other half-block of text. The rounds alternate direction — in one round, data from the left half-block is input and the right half-block is changed, and in the next round that is reversed. Showing the half-blocks as L and R, bitwise XOR as ⊕ (each bit of the output word is the XOR of the corresponding bits of the two input words) and the round key for round n as k_n, even-numbered rounds are then:

L_n = L_(n-1)
R_n = R_(n-1) ⊕ F(L_(n-1), k_n)

and odd-numbered rounds are:

L_n = L_(n-1) ⊕ F(R_(n-1), k_n)
R_n = R_(n-1)

Since XOR is its own inverse (a ⊕ b ⊕ b = a for any a, b) and the half-block that is used as input to the F-function is unchanged in each round, reversing a Feistel round is straightforward. Just calculate the F-function again with the same inputs and XOR the result into the ciphertext to cancel out the previous XOR. For example, the decryption step matching the first example above is:

R_(n-1) = R_n ⊕ F(L_n, k_n)

(A runnable toy version is sketched at the end of this section.) In some ciphers, including those based on SP networks, all operations must be reversible so that decryption can work. The main advantage of a Feistel cipher over an SP network is that the F-function itself need not be reversible, only repeatable. This gives the designer extra flexibility; almost any operation he can think up can be used in the F-function. On the other hand, in the Feistel construction, only half the output changes in each round while an SP network changes all of it in a single round.

A single round in a Feistel cipher has less than ideal avalanche properties; only half the output is changed. However, the other half is changed in the next round so, with a good F-function, a Feistel cipher can have excellent overall avalanche properties within a few rounds. It is possible to design a Feistel cipher so that the F-function itself has ideal avalanche properties — every output bit depends nonlinearly on every input bit and every key bit — details are in a later section.

There is a variant called an unbalanced Feistel cipher in which the block is split into two unequal-sized pieces rather than two equal halves. Skipjack was a well-known example. There are also variations which treat the text as four quarter-blocks rather than just two halves; MARS and CAST-256 are examples.

The hard part of Feistel cipher design is of course the F-function. Design goals include efficiency, easy implementation, and good avalanche properties. Also, it is critically important that the F-function be highly nonlinear. All other operations in a Feistel cipher are linear and a cipher without enough nonlinearity is weak; see below.

The Lai-Massey construction

This structure was introduced in a thesis by Xuejia Lai, supervised by James Massey, in a cipher which later became the International Data Encryption Algorithm, IDEA. It has since been used in other ciphers such as FOX, later renamed IDEA NXT. Perhaps the best-known analysis is by Serge Vaudenay, one of the designers of FOX. One paper proposes a general class of "quasi-Feistel networks", with the Lai-Massey scheme as one instance, and shows that several of the well-known results on Feistel networks (such as the Luby-Rackoff and Patarin papers referenced above) can be generalised to the whole class.
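The toy Feistel cipher promised above: four rounds, alternating direction as in the equations, with a deliberately non-invertible F-function, demonstrating that decryption nevertheless works by recomputing each F output and XOR-ing it away. The F-function and keys are invented for illustration; no real cipher is this simple.

```python
def feistel_f(half, round_key):
    # Deliberately non-invertible F-function (squaring discards
    # information); the Feistel structure does not require F to be
    # reversible, only repeatable.
    return (half * half + round_key) & 0xFFFFFFFF

def feistel_encrypt(block, round_keys):
    left, right = block >> 32, block & 0xFFFFFFFF
    for i, rk in enumerate(round_keys):
        if i % 2 == 0:            # even round: left feeds F, right changes
            right ^= feistel_f(left, rk)
        else:                     # odd round: roles reversed
            left ^= feistel_f(right, rk)
    return (left << 32) | right

def feistel_decrypt(block, round_keys):
    left, right = block >> 32, block & 0xFFFFFFFF
    # Undo the rounds in reverse order; XOR-ing the same F output again
    # cancels it, because the F input half was unchanged in that round.
    for i in reversed(range(len(round_keys))):
        if i % 2 == 0:
            right ^= feistel_f(left, round_keys[i])
        else:
            left ^= feistel_f(right, round_keys[i])
    return (left << 32) | right

keys = [0x11111111, 0x22222222, 0x33333333, 0x44444444]
c = feistel_encrypt(0x0123456789ABCDEF, keys)
assert feistel_decrypt(c, keys) == 0x0123456789ABCDEF
print(hex(c))
```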
A further paper gives some specific results for the Lai-Massey scheme itself.

Nonlinearity

To be secure, every cipher must contain nonlinear operations. If all operations in a cipher were linear then the cipher could be reduced to a system of linear equations and be broken by an algebraic attack. The attacker can choose which algebraic system to use; for example, against one cipher he might treat the text as a vector of bits and use Boolean algebra, while for another he might choose to treat it as a vector of bytes and use arithmetic modulo 2^8.

The attacker can also try linear cryptanalysis. If he can find a good enough linear approximation for the round function and has enough known plaintext/ciphertext pairs, then this will break the cipher. Defining "enough" in the two places where it occurs in the previous sentence is tricky; see linear cryptanalysis.

What makes these attacks impractical is a combination of the sheer size of the system of equations used (large block size, whitening, and more rounds all increase this) and nonlinearity in the relations involved. In any algebra, solving a system of linear equations is more-or-less straightforward provided there are more equations than variables. However, solving nonlinear systems of equations is far harder, so the cipher designer strives to introduce nonlinearity into the system, preferably to have at least some components that are not even close to linear. Combined with good avalanche properties and enough rounds, this makes both direct algebraic analysis and linear cryptanalysis prohibitively difficult.

There are several ways to add nonlinearity; some ciphers rely on only one while others use several.

One method is mixing operations from different algebras. If the cipher relies only on Boolean operations, the cryptanalyst can try to attack using Boolean algebra; if it uses only arithmetic operations, he can try normal algebra. If it uses both, he has a problem. Of course arithmetic operations can be expressed in Boolean algebra or vice versa, but the expressions are inconveniently (for the cryptanalyst!) complex and nonlinear whichever way he tries it. For example, in the Blowfish F-function, it is necessary to combine four 32-bit words into one. This is not done with just addition, x = a + b + c + d, or just Boolean operations, x = a ⊕ b ⊕ c ⊕ d, but instead with a mixture, x = ((a + b) ⊕ c) + d. On most computers this costs no more, but it makes the analyst's job harder.

Rotations, also called circular shifts, on words or registers are nonlinear in normal algebra, though they are easily described in Boolean algebra. GOST uses rotations by a constant amount, CAST-128 and CAST-256 use a key-dependent rotation in the F-function, and RC5, RC6 and MARS all use data-dependent rotations. A general operation for introducing nonlinearity is the substitution box or S-box; see the following section. A short sketch of the mixing and rotation operations in code appears at the end of this section.

Nonlinearity is also an important consideration in the design of stream ciphers and cryptographic hash algorithms. For hashes, much of the mathematics and many of the techniques used are similar to those for block ciphers. For stream ciphers, rather different mathematics and methods apply (see Berlekamp-Massey algorithm for example), but the basic principle is the same.

S-boxes

S-boxes or substitution boxes are look-up tables. The basic operation involved is a = sbox[b] which, at least for reasonable sizes of a and b, is easily done on any computer. S-boxes are described as m by n, with m representing the number of input bits and n the number of output bits. For example, DES uses 6 by 4 S-boxes.
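The sketch promised above, before the S-box discussion continues: the two cheap nonlinear ingredients just described, the Blowfish-style mixture of addition and XOR, and a 32-bit rotation helper. The input values are arbitrary.

```python
M32 = 0xFFFFFFFF

def mix(a, b, c, d):
    # Blowfish-style combination of four 32-bit words: addition mod 2^32
    # interleaved with XOR.  Pure addition or pure XOR would each be linear
    # in its own algebra; alternating them is cheap and linear in neither.
    return ((((a + b) & M32) ^ c) + d) & M32

def rotate_left(x, n):
    # Rotation (circular shift) of a 32-bit word.  With n taken from the
    # data or the key, this is the nonlinear ingredient used by RC5, RC6,
    # MARS and the CAST ciphers.
    n &= 31
    return ((x << n) | (x >> (32 - n))) & M32

print(hex(mix(0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0x0BADF00D)))
print(hex(rotate_left(0x80000001, 3)))
```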
The storage requirement for an m by n S-box is n × 2^m bits, so large values of m (many input bits) are problematic. Values of m up to eight are common and MARS has a 9 by 32 S-box; going much beyond that would be expensive. Large values of n (many output bits) are not a problem; 32 is common and at least one system, the Tiger hash algorithm, uses 64.

S-boxes are often used in the S-layer of an SP network. In this application, the S-box must have an inverse to be used in decryption. It must therefore have the same number of bits for input and output; only n by n S-boxes can be used. For example, AES is an SP network with a single 8 by 8 S-box and Serpent is one with eight 4 by 4 S-boxes. Another common application is in the F-function of a Feistel cipher. Since the F-function need not be reversible, there is no need to construct an inverse S-box for decryption and S-boxes of any size may be used. With either an SP network or a Feistel construction, nonlinear S-boxes and enough rounds give a highly nonlinear cipher.

The first generation of Feistel ciphers used relatively small S-boxes, 6 by 4 for DES and 4 by 4 for GOST. In these ciphers the F-function is essentially one round of an SP network. The eight S-boxes give 32 bits of S-box output. Those bits, reordered by a simple transformation, become the 32-bit output of the F-function. Avalanche properties are less than ideal since each output bit depends only on the inputs to one S-box. The output transformation (a bit permutation in DES, a rotation in GOST) compensates for this, ensuring that the output from one S-box in one round affects several S-boxes in the next round so that good avalanche is achieved after a few rounds.

Later Feistel ciphers use larger S-boxes; CAST-128, CAST-256 and Blowfish all use four 8 by 32 S-boxes. They do not use S-box bits directly as F-function output. Instead, they take a 32-bit word from each S-box, then combine them to form a 32-bit output. This gives an F-function with ideal avalanche properties: every output bit depends on all S-box output words, and therefore on all input bits and all key bits. With the Feistel structure and such an F-function, complete avalanche — all 64 output bits depend on all 64 input bits — is achieved in three rounds. No output transformation is required in such an F-function, and Blowfish has none. However, one may be used anyway; the CAST ciphers add a key-dependent rotation. A sketch of this construction in code appears at the end of this section.

These ciphers are primarily designed for software implementation, rather than the 1970s hardware DES was designed for, so looking up a full computer word at a time makes sense. An 8 by 32 S-box takes one kilobyte of storage; several can be used on a modern machine without difficulty. They need only four S-box lookups, rather than the eight in DES or GOST, so the F-function and therefore the whole cipher can be reasonably efficient.

There is an extensive literature on the design of good S-boxes, much of it emphasizing achieving high nonlinearity though other criteria are also used. See external links. The CAST S-boxes use bent functions (the most highly nonlinear Boolean functions) as their columns. That is, the mapping from all the input bits to any single output bit is a bent function. Such S-boxes meet the strict avalanche criterion: not only does every bit of round input and every bit of round key affect every bit of round output, but complementing any input bit has exactly a 50% chance of changing any given output bit. A paper on generating the S-boxes is Mister & Adams, "Practical S-box Design".
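The sketch promised above, of the large-S-box F-function design. The combining formula is Blowfish's actual one; the S-box contents here are merely random words for illustration (Blowfish really derives its S-box entries from the key through a lengthy setup procedure, and CAST constructs its boxes from bent functions).

```python
import random

# Four 8 by 32 S-boxes filled with random 32-bit words, for illustration
# only; see the lead-in for how real ciphers fill them.
rng = random.Random(42)
S = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]

M32 = 0xFFFFFFFF

def blowfish_style_f(x):
    # Split the 32-bit input into four bytes, look each up in its own
    # 8 by 32 S-box, then combine with Blowfish's mix of addition and XOR:
    #   F(x) = ((S1[a] + S2[b]) XOR S3[c]) + S4[d]   (mod 2^32)
    # Every output bit depends on all four S-box words, hence on all
    # input bits: ideal avalanche within the F-function.
    a, b, c, d = x >> 24, (x >> 16) & 0xFF, (x >> 8) & 0xFF, x & 0xFF
    return ((((S[0][a] + S[1][b]) & M32) ^ S[2][c]) + S[3][d]) & M32

print(hex(blowfish_style_f(0x01234567)))
```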
Bent functions are combined to get additional desirable traits: a balanced S-box (equal probability of 0 and 1 output), minimum correlation among output bits, and high overall S-box nonlinearity.

Blowfish uses a different approach, generating random S-boxes as part of the key scheduling operation at cipher setup time. Such S-boxes are not as nonlinear as the carefully constructed CAST ones, but they are nonlinear enough and, unlike the CAST S-boxes, they are unknown to an attacker.

In perfectly nonlinear S-boxes, not only are all columns bent functions (the most nonlinear possible Boolean functions), but all linear combinations of columns are bent functions as well. This is possible only if m ≥ 2n, that is, if there are at least twice as many input bits as output bits. Such S-boxes are therefore not much used.

S-boxes in analysis

S-boxes are sometimes used as an analytic tool even for operations that are not actually implemented as S-boxes. Any operation whose output is fully determined by its inputs can be described by an S-box; concatenate all inputs into an index, look that index up, get the output. For example, the IDEA cipher uses a multiplication operation with two 16-bit inputs and one 16-bit output; it can be modeled as a 32 by 16 S-box. In an academic paper, one might use such a model in order to apply standard tools for measuring S-box nonlinearity. A well-funded cryptanalyst might actually build the S-box (8 gigabytes of memory) either to use in his analysis or to speed up an attack.

Resisting linear & differential attacks

Two very powerful cryptanalytic methods of attacking block ciphers are linear cryptanalysis and differential cryptanalysis. The former works by finding linear approximations for the nonlinear components of a cipher, then combining them using the piling-up lemma to attack the whole cipher. The latter looks at how small changes in the input affect the output, and how such changes propagate through multiple rounds. These are the only known attacks that break DES with less effort than brute force, and they are completely general attacks that apply to any block cipher.

Both these attacks, however, require large numbers of known or chosen plaintexts, so a simple defense against them is to re-key often enough that the enemy cannot collect sufficient texts.

Techniques introduced for CAST go further, building a cipher that is provably immune to linear or differential analysis with any number of texts. The method, taking linear cryptanalysis as our example and abbreviating it LC, is as follows (a numeric sketch follows below):

- start from properties of the round function (for CAST, from bent functions in the S-boxes)
- derive a limit q, the maximum possible quality of any linear approximation to a single round
- consider the number of rounds, r, as a variable
- derive an expression for E, the effort required to break the cipher by LC, in terms of q and r
- find the minimum r such that E exceeds the effort required for brute force, making LC impractical
- derive an expression for N, the number of chosen plaintexts required for LC, also in terms of q and r (LC with only known plaintext requires more texts, so it can be ignored)
- find the minimum r such that N exceeds the number of possible plaintexts, 2^blocksize, making LC impossible

A similar approach applied to differentials gives values of r that make differential cryptanalysis impractical or impossible. Choose the actual number of rounds so that, at a minimum, both attacks are impractical. Ideally, make both impossible, then add a safety factor.
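The flavor of the round-count calculation can be shown numerically. The sketch below invents all of its concrete numbers: it assumes a hypothetical best single-round bias of 2^-4, chains one approximation per round using the piling-up lemma (the combined bias of r independent approximations is 2^(r-1) times the product of the individual biases), and uses the standard rule of thumb that linear cryptanalysis needs on the order of bias^-2 texts.

    # Sketch: estimate rounds needed so linear cryptanalysis (LC) becomes
    # impossible, under invented assumptions.
    eps = 2.0 ** -4      # assumed best single-round bias (hypothetical)
    block_bits = 64      # block size, so 2**64 possible plaintexts

    def combined_bias(r):
        # Piling-up lemma for r chained single-round approximations.
        return 2 ** (r - 1) * eps ** r

    r = 1
    while combined_bias(r) ** -2 <= 2 ** block_bits:
        r += 1
    print(f"LC needs more texts than exist once r = {r} rounds")

With these made-up numbers the loop stops at r = 11; a real proof of the CAST type derives the per-round limit from the S-box construction instead of assuming it.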
This type of analysis is now a standard part of the cryptographer's toolkit. Many of the AES candidates, for example, included proofs along these lines in their design documentation, and AES itself uses such a calculation to determine the number of rounds required for various key sizes.

DES and alternatives

The Data Encryption Standard, DES, is among the best known and most thoroughly analysed block ciphers. It was invented by IBM and was made a US government standard, for non-classified government data and for regulated industries such as banking, in the late 70s. From then until about the turn of the century, it was very widely used. It is now considered obsolete because its 56-bit key is too short to resist brute force attacks if the opponents have recent technology.

The DES standard marked the beginning of an era in cryptography. Of course, much work continued to be done in secret by military and intelligence organisations of various nations, but from the time of DES cryptography also developed as an open academic discipline complete with journals, conferences, courses and textbooks. In particular, there was a lot of work related to block ciphers. For an entire generation, every student of cryptanalysis tried to find a way to break DES and every student of cryptography tried to devise a cipher that was demonstrably better than DES. Very few succeeded.

Every new cryptanalytic technique invented since DES became a standard has been tested against DES. None of them have broken it completely, but two, differential cryptanalysis and linear cryptanalysis, give attacks theoretically significantly better than brute force. This does not appear to have much practical importance since both require enormous numbers of known or chosen plaintexts, all encrypted with the same key, so reasonably frequent key changes provide an effective defense. All the older publicly known cryptanalytic techniques have also been tried, or at least considered, for use against DES; none of them work.

DES served as a sort of baseline for cipher design through the 80s and 90s; the design goal for almost any 20th century block cipher was to replace DES in some of its many applications with something faster, more secure, or both. All these ciphers used 64-bit blocks, like DES, but most used 128-bit or longer keys for better resistance to brute force attacks. Ciphers of this generation include:

- The Data Encryption Standard itself, the first well-known Feistel cipher, using 16 rounds and eight 6 by 4 S-boxes.
- The GOST cipher, a Soviet standard similar in design to DES, a 32-round Feistel cipher using eight 4 by 4 S-boxes.
- IDEA, the International Data Encryption Algorithm, a European standard, not a Feistel cipher, with only 8 rounds and no S-boxes.
- RC2, a Feistel cipher from RSA Security which was approved for easy export from the US (provided it was used with only a 40-bit key), so widely deployed.
- RC5, a Feistel cipher from RSA Security. This was fairly widely deployed, often replacing RC2 in applications.
- CAST-128, a widely used 16-round Feistel cipher, with 8 by 32 S-boxes.
- Blowfish, another widely used 16-round Feistel cipher with 8 by 32 S-boxes.
- The Tiny Encryption Algorithm, or TEA, designed to be very small and fast but still secure, a 32-round Feistel cipher without S-boxes.
- Skipjack, an algorithm designed by the NSA for use in the Clipper chip, a 32-round unbalanced Feistel cipher.
- SAFER and LOKI, two families of ciphers which each included an original version against which Lars Knudsen found an attack and a revised version to block that attack. Each had a descendant which was an AES candidate.
- Triple DES, applying DES three times with different keys.

Many of the techniques used in these ciphers came from DES and many of the design principles came from analysis of DES. However, there were also new design ideas. The CAST ciphers were the first to use large S-boxes, which allow the F-function of a Feistel cipher to have ideal avalanche properties, and to use bent functions in the S-box columns. Blowfish introduced key-dependent S-boxes. Several introduced new ways to achieve nonlinearity: data-dependent rotations in RC5, key-dependent rotations in CAST-128, a clever variant on multiplication in IDEA, and the pseudo-Hadamard transform in SAFER.

The era effectively ended when the US government began working on a new cipher standard to replace their Data Encryption Standard, the Advanced Encryption Standard or AES. A whole new generation of ciphers arose, the first 21st century block ciphers. Of course these designs still drew on the experience gained in the post-DES generation, but overall these ciphers are quite different. In particular, they all use 128-bit blocks and most support key sizes up to 256 bits.

The AES generation

By the 90s, the Data Encryption Standard was clearly obsolete; its small key size made it more and more vulnerable to brute force attacks as computers became faster. The US National Institute of Standards and Technology (NIST) therefore began work on an Advanced Encryption Standard, AES, a block cipher to replace DES in government applications and in regulated industries. To do this, they ran a very open international AES competition, starting in 1998. Their requirements specified a block cipher with 128-bit block size and support for 128, 192 or 256-bit key sizes. Evaluation criteria included security, performance on a range of platforms from 8-bit CPUs (e.g. in smart cards) up, and ease of implementation in both software and hardware.

Fifteen submissions meeting basic criteria were received. All were iterated block ciphers; in Shannon's terms all were product ciphers. Most used an SP network or Feistel structure, or variations of those. Several had proofs of resistance to various attacks. The AES competition article covers all candidates and many have their own articles as well. Here we give only a summary.

After much analysis and testing, and two conferences, the field was narrowed to five finalists:

- Twofish, a cipher with key-dependent S-boxes, from a team at Bruce Schneier's company Counterpane
- MARS, a variant of the Feistel cipher using data-dependent rotations, from IBM
- Serpent, an SP network, from an international group of well-known players
- RC6, a cipher using data-dependent rotations, from a team led by Ron Rivest
- Rijndael, an SP network, from two Belgian designers

An entire generation of block ciphers used the 64-bit block size of DES, but since AES many new designs use a 128-bit block size. As discussed under size parameters, if two or more ciphers have the same block and key sizes, then they are effectively interchangeable; replacing one cipher with another requires no other changes in an application. When asked to implement AES, the implementer might include the other finalists (Twofish, Serpent, RC6 and MARS) as well.
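Interchangeability is easy to picture in code. The sketch below defines a trivial common interface; the "ciphers" are toy stand-ins invented for the example, not real AES or Twofish, but it shows why swapping between algorithms with identical block and key sizes needs no other application changes.

    # Hypothetical pluggable-cipher registry. Each entry maps a name to an
    # encrypt(key, block) function over 16-byte keys and 16-byte blocks.
    # The bodies are toy stand-ins, NOT real AES or Twofish.

    def toy_xor_cipher(key: bytes, block: bytes) -> bytes:
        return bytes(k ^ b for k, b in zip(key, block))

    def toy_addmod_cipher(key: bytes, block: bytes) -> bytes:
        return bytes((k + b) & 0xFF for k, b in zip(key, block))

    CIPHERS = {"aes-stand-in": toy_xor_cipher, "twofish-stand-in": toy_addmod_cipher}

    def encrypt(name: str, key: bytes, block: bytes) -> bytes:
        assert len(key) == 16 and len(block) == 16
        return CIPHERS[name](key, block)

    # The application only ever calls encrypt(); switching algorithms is a
    # one-string change.
    ct = encrypt("aes-stand-in", b"K" * 16, b"sixteen byte msg")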
This provides useful insurance against the (presumably unlikely) risk of someone finding a good attack on AES. Little extra effort is required since open source implementations of all these ciphers are readily available; see external links. All except RC6 have completely open licenses.

There are also many other ciphers that might be used. There were ten AES candidates that did not make it into the finals:

- CAST-256, based on CAST-128 and with the same theoretical advantages
- DFC, based on another theoretical analysis proving resistance to various attacks
- Hasty Pudding, a variable block size tweakable cipher
- DEAL, a Feistel cipher using DES as the round function
- FROG, an innovative cipher; interesting but weak
- E2, from Japan
- CRYPTON, a Korean cipher with some design similarities to AES
- MAGENTA, Deutsche Telekom's candidate, quickly broken
- LOKI97, one of the LOKI family of ciphers, from Australia
- SAFER+, one of the SAFER family of ciphers, from Cylink Corporation

Some should not be considered. MAGENTA and FROG have been broken, DEAL is slow, and E2 has been replaced by its descendant Camellia.

There are also some newer 128-bit ciphers that are widely used in certain countries:

- Camellia, an 18-round Feistel cipher widely used in Japan and one of the standard ciphers for the NESSIE (New European Schemes for Signatures, Integrity and Encryption) project.
- SEED, developed by the Korean Information Security Agency (KISA) and widely used in Korea.

For most applications a 64-bit or 128-bit block size is a fine choice; nearly all common block ciphers use one or the other. Such ciphers can be used to encrypt objects larger than their block size; just choose an appropriate mode of operation.

For nearly all ciphers, the block size is a power of two. Joan Daemen's PhD thesis, though, had two exceptions: 3-Way uses 96 bits (three 32-bit words) and BaseKing 192 (three 64-bit words). Neither cipher was widely used, but they did influence later designs. Daemen was one of the designers of Square and of Rijndael, which became the Advanced Encryption Standard.

A few ciphers supporting larger block sizes do exist; this section discusses them. A block cipher with larger blocks may be more efficient; it takes fewer block operations to encrypt a given amount of data. It may also be more secure in some ways; diffusion takes place across a larger block size, so data is more thoroughly mixed, and large blocks make a code book attack more difficult. On the other hand, great care must be taken to ensure adequate diffusion within a block, so a large-block cipher may need more rounds, larger blocks require more padding, and there is not a great deal of literature on designing and attacking such ciphers so it is hard to know if one is secure. Large-block ciphers are inconvenient for some applications and simply do not fit in some protocols.

Some block ciphers, such as Block TEA and Hasty Pudding, support variable block sizes. They may therefore be both efficient and convenient in applications that need to encrypt many items of a fixed size, for example disk blocks or database records. However, just using the cipher in ECB mode to encrypt each block under the same key is unwise, especially if encrypting many objects. With ECB mode, identical blocks will encrypt to the same ciphertext and give the enemy some information. One solution is to use a tweakable cipher such as Hasty Pudding with the block number or other object identifier as the tweak. Another is to use CBC mode with an initialisation vector derived from an object identifier.
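Both the ECB problem and the tweak fix can be seen with a toy cipher. The sketch below uses a secret byte permutation as a stand-in block cipher (1-byte blocks, wildly insecure, purely illustrative): identical plaintext blocks leak under plain ECB, while folding a per-block tweak into the input hides the repetition and remains decryptable.

    import random

    rng = random.Random(42)                      # the "key": seeds a secret permutation
    perm = list(range(256))
    rng.shuffle(perm)                            # toy 1-byte block cipher

    data = b"AAAABBBBAAAA"                       # repeated plaintext blocks

    ecb = [perm[p] for p in data]
    # Identical plaintext bytes give identical ciphertext bytes: the pattern leaks.
    print(ecb)

    # Tweakable variant: fold the block number i into the input first.
    # Decryption inverts perm, then subtracts i mod 256.
    tweaked = [perm[(p + i) % 256] for i, p in enumerate(data)]
    print(tweaked)                               # repetition no longer visible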
Cryptographic hash algorithms can be built using a block cipher as a component. There are general-purpose methods for this that can use existing block ciphers; Applied Cryptography gives a long list and describes weaknesses in many of them. However, some hashes include a specific-purpose block cipher as part of the hash design. One example is Whirlpool, a 512-bit hash using a block cipher similar in design to AES but with 512-bit blocks and a 512-bit key. Another is the Advanced Hash Standard candidate Skein, which uses a tweakable block cipher called Threefish. Threefish has 256-bit, 512-bit and 1024-bit versions; in each version block size and key size are both that number of bits.

It is possible to go the other way and use any cryptographic hash to build a block cipher; again Applied Cryptography has a list of techniques and describes weaknesses. The simplest method is to make a Feistel cipher with double the hash's block size; the F-function is then just to hash text and round key together (see the sketch after the references below). This technique is rarely used, partly because a hash makes a rather expensive round function and partly because the block cipher block size would have to be inconveniently large; for example, using a 160-bit hash such as SHA-1 would give a 320-bit block cipher.

The hash-to-cipher technique was, however, important in one legal proceeding, the Bernstein case. At the time, US law strictly controlled export of cryptography because of its possible military uses, but hash functions were allowed because they are designed to provide authentication rather than secrecy. Bernstein's code built a block cipher from a hash, effectively circumventing those regulations. Moreover, he sued the government over his right to publish his work, claiming the export regulations were an unconstitutional restriction on freedom of speech. The courts agreed, effectively striking down the export controls.

It is also possible to use a public key operation as a block cipher. For example, one might use the RSA algorithm with 1024-bit keys as a block cipher with 1024-bit blocks. Since the round function is itself cryptographically secure, only one round is needed. However, this is rarely done; public key techniques are expensive, so this would give a very slow block cipher. A much more common practice is to use public key methods, block ciphers, and cryptographic hashes together in a hybrid cryptosystem.

- ↑ M. Liskov, R. Rivest, and D. Wagner (2002), "Tweakable Block Ciphers", LNCS, Crypto 2002
- ↑ Horst Feistel (1973), "Cryptography and Computer Privacy", Scientific American
- ↑ A. F. Webster and Stafford E. Tavares (1985), "On the design of S-boxes", Advances in Cryptology - Crypto '85 (Lecture Notes in Computer Science)
- ↑ C. E. Shannon (1949), "Communication Theory of Secrecy Systems", Bell Systems Technical Journal 28: pp. 656-715
- ↑ M. Luby and C. Rackoff, "How to Construct Pseudorandom Permutations and Pseudorandom Functions", SIAM J. Comput.
- ↑ Jacques Patarin (Oct 2003), "Luby-Rackoff: 7 Rounds Are Enough for Security", Lecture Notes in Computer Science 2729: 513-529
- ↑ X. Lai (1992), "On the Design and Security of Block Ciphers", ETH Series in Information Processing, v. 1
- ↑ S.
Vaudenay (1999), On the Lai-Massey Scheme, Springer-Verlag, LNCS
- ↑ Aaram Yun, Je Hong Park and Jooyoung Lee (2007), Lai-Massey Scheme and Quasi-Feistel Networks
- ↑ Yiyuan Luo, Xuejia Lai, Zheng Gong and Zhongming Wu (2009), Pseudorandomness Analysis of the Lai-Massey Scheme
- ↑ Ross Anderson & Eli Biham (1996), "Tiger: a fast new hash function", Fast Software Encryption, Third International Workshop Proceedings
- ↑ S. Mister, C. Adams (August 1996), "Practical S-Box Design", Selected Areas in Cryptography (SAC '96): 61-76
- ↑ Kaisa Nyberg (1991), "Perfect nonlinear S-boxes", Eurocrypt '91, LNCS 547
- ↑ Serge Vaudenay (2003), "Decorrelation: A Theory for Block Cipher Security", Journal of Cryptology
- ↑ Kaisa Nyberg and Lars Knudsen (1995), "Provable security against a differential attack", Journal of Cryptology
- ↑ Schneier, Bruce (1996), Applied Cryptography, 2nd edition, John Wiley & Sons, ISBN 0-471-11709-9
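A minimal sketch of the hash-to-cipher construction described above, using SHA-1 so that the block size comes out to 320 bits. The round-key derivation here (repeating a byte) is an arbitrary choice for the example, not a standard scheme.

    import hashlib

    HALF = 20  # SHA-1 digest is 20 bytes, so blocks are 40 bytes = 320 bits

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def f(half: bytes, round_key: bytes) -> bytes:
        # F-function: just hash the half-block together with the round key.
        return hashlib.sha1(half + round_key).digest()

    def feistel(block: bytes, round_keys, decrypt=False):
        assert len(block) == 2 * HALF
        left, right = block[:HALF], block[HALF:]
        keys = reversed(round_keys) if decrypt else round_keys
        for k in keys:
            left, right = right, xor(left, f(right, k))
        return right + left   # final swap so decryption mirrors encryption

    keys = [bytes([i]) * HALF for i in range(16)]   # toy round keys
    pt = bytes(range(40))
    ct = feistel(pt, keys)
    assert feistel(ct, keys, decrypt=True) == pt

Note that the F-function never needs to be inverted; running the rounds with the key schedule reversed undoes the encryption, which is exactly the Feistel property the construction relies on.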
http://en.citizendium.org/wiki/Block_cipher
Dials for Counting

Dials for counting, three rows, be able to mount them, change them around, and of course, spin them - stitching together the number system
- Distinguish between amounts (circular order, as in drumming, repetitive) and units (linear order, as with multiple units, what is most important, has priority). Zero is part of every amount, it is the reference for circular counting, whereas there is no zero in the units, but a blank instead, a nothing.
- Note: A dial has "labels", whereas a circle is before the labels
- A number system - the units don't matter, or are internal - everyone's number system is private - but then is shared, thus public, thus explicitly defined abstract units of counting (such as "ones", so that if we have eight "ones", we can have "eight" anything, eight "whatevers")

Dials (names become amounts)

Circular Counting: Ordered list: Memory Loop
- ABC's are a list, as in the ABC song.
- Rhythmic chants help for memory, for example, Dr. Seuss's mother's names of pies
- What to do for memory, to memorize math facts? Circular counts leverage the way autoassociative memory works
- The counting dial makes numbers work as actions. We are counting the counting.
- Is there a 0 dial, a dial with just 0 on it, that represents an item abstractly, as a unit?

Formality: Answers, Amounts and Units
- The counting dial allows us to formally distinguish between amount and unit, thus have a formal answer. In this way a system is elaborated, formally. There is no longer any confusion as to whether an item corresponds to a unit. This is because what is now counted is the counting. Thus there must be a maximum to the dial. The unit is taken as given and equated to the dial.
- A system defines what we can think about and in terms of.
- System is definition.
- We can think in terms of units, which are the atoms, nodes, thoughts, concepts, encapsulations of our thinking.
- We can't think of what's particular to the units, as those distinctions have been discarded; nor can we think in terms of whatever has no units.
- In physics, the basic atoms (protons, electrons) are indeed indistinguishable, like such units.
- We can think up to the equivalence of the units, so that if the unit is "apples", then all apples are the same.
- We can think about what can be expressed in the units, the relationships between the units.
- A system keeps track of "answers" and thus minimizes the need for recounting (or adding).
- An "answer" is an amount and a unit - the amount is what doesn't change, the unit is what changes
- The amount is what you count with your fingers; the unit is what you don't have to count, it's fixed - what is your finger worth? The unit. 5 one dollar bills = 1 five dollar bill
- A system is inherently ambiguous
- Amount and unit divide up the context into two parts, allowing for simplification
- The counting dial shows the difference between amounts and units.
- Running out of fingers, thus the decimal system, or simply, a dial
- Why don't we use a system of elevens? For we have ten digits
- We can't count past 10 - or whatever is our maximum.
- Thus the Y2K problem with 2 digits
- Computers define integers and other numeric types up to some maximum, after which they won't work as desired.
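The dial idea is easy to state in code. A sketch: each dial cycles through 0-9, "counting the counting" is the carry into the next dial, and the fixed number of dials gives exactly the Y2K-style maximum noted above.

    class Dials:
        """A row of base-10 dials, like a mechanical odometer."""
        def __init__(self, n_dials):
            self.digits = [0] * n_dials   # rightmost dial is digits[-1]

        def tick(self):
            """Advance the rightmost dial; a full turn carries leftward
            ('counting the counting')."""
            i = len(self.digits) - 1
            while i >= 0:
                self.digits[i] = (self.digits[i] + 1) % 10
                if self.digits[i] != 0:   # no wrap, so no carry
                    return
                i -= 1                    # wrapped to 0: carry to next dial
            # All dials wrapped: we ran out of names (the Y2K problem).
            raise OverflowError("no more dials")

        def __str__(self):
            return "".join(map(str, self.digits))

    d = Dials(2)
    for _ in range(99):
        d.tick()
    print(d)        # 99 -- one more tick() would overflow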
Repeating - Setting an external rhythm - Rhythm is set by a recurring activity - Setting a rhythm - converting an internal rhythm to an external rhythm that others can tune to - Example: stroke for a crew of rowers - obedience - Example: drummer for a band - Example: metronome - Recurring activity stretches from negative infinity to positive infinity - Christopher Alexander's Timeless Way of Building, and Pattern Languages - Make a list of every day activities - Make dials for them - How does recurring activity relate to the structure it evokes? - Creating marks, instances, events in time. - Following one event with another - Rhythm is driven by recursion, for example, plus one - Laying down "tracks" and superimposing them yields patterns upon patterns which then appear as groups of strokes, beats - Beats within beats - Internal verbalizations - Each is naturally distinct in time. - Different dials for different musical beats - study the beats (polka, waltz, etc.) - Processing - write out commas after every three units, starting from the right and going left - a helpful process for making numbers usable - and can be done without comprehending the numbers Canonizing - number system Time: Internal counting: Counting out an obligation - Counting the amount of work to be done - such as forty lashes - Counting the amount of coins to be paid out - such as thirty pieces of silver - Use your fingers to calculate 2**9 (keep track as you double 9 times). - Counting out the number of pull ups that you can do. (Arbitrary counting? Note the sequential integral review - each pull up is checked independently - and this check occurs in time)(Counting depends on a deep structure of "checks" of the count.) - Label on a dial lets you tally. - Dividing up is backwards counting on a dial. It is repeated subtraction and can yield a remainder. Whereas dividing out is the inverse of multiplying and can be seen on a grid, in two dimensions. - Division (100 divided by 5) can be counting (if you are dividing the amount, dividing by a clustered unit, thus "dividing up", yielding and counting 20 instances of 5, counting them each out, dividing evenly, bottom up, if you don't know the answer yet) or can be clustering, "dividing out" (if you are dividing 100 among 5, and you know the answer already, and so you give 5 instances of 20). - Dividing up can yield remainders. - 4 thousand divided by 4 = 1 thousand. Why? Because 4 divided by 4 is 1, and the thousand is just a unit. - Dividing is "counting by": 60 miles can be counted by hours, making a chart Dials that do not start from zero - counting by odds is counting by 2's but starting from 1 - division with remainder = dial that does not start from zero - Division is sharing - remainder is what you can't share Two dials: Counting by Counting by ... - we can count the dial turns if we have a second dial - in order to have a unique expression, we have the second dial's value be one more than the max of the first dial, thus we introduce 0 - we use 0-9 instead of 1-10 so we are forced to clump - because we are using a second dial, a dial to count the counting, to count the number of times the dials went around - Counting by 1's, 10's, 2's, 5's, 3's, 1/2's, fractions, decimals, percents, X's - multiplying by -1 - multiplying by i - roots of unity - Counting is the paradigm for linear growth. But such counting is stitched together from circular counting, and it is the stitching that makes it linear. 
- Tally - skip counting, repeated addition - "Tallying" is a two dial system, counting by 1s and 5s. - Tally marks used by shepherds - count by 1/2, in general, count by numeric units - 10 divided by 1/2 = 20, counting by 1/2s, grouping by 1/2s, We do that rarely! so it seems strange. - Exercises: counting game - Arithmetic sequence: counting by X - counting up to a number by a number = dividing - Exercise: Counting by 3's - double counting in your mind - multiplication by 9s patterns is 10 - 1 - Use your fingers to keep track of counting by 12s. How many times do they go into 120? - 10 x 3 vs. 9 x 3 which is bigger? and why? Multiplication turns counting up into counting - counting what (multiplication) by what (division) - we count up "what" = units = nouns, things, natural numbers - multiplication is "counting up" "by what" = an "answer", an amount and a unit, a grouping. This allows us to treat the amount as our unit and treat the unit as an "abstraction". We can then "count" (by numbers) rather than "count up" (actual things, units). We then relate back to the original unit. Thus it is accelerated counting, speed counting. Multiplication relates "counting up" (items in space) with "counting" (activity in time). - This works because multiplication is associative. - We can thus memorize and reuse multiplication problems. - multiply = distribute while you can and wait, lump if you can't, thus multiplication = distribution - multiplication is organized counting - multiplication is repetitive counting - Multiplication is super counting - multiplication makes sense when counting is organized, "correct", well defined - Multiply disks by simply placing one after the other - this works if you set each multiplied dial to zero because you are treating it as a unit - you can do this for multiple dials - and you can commute the dials - and you can combine dials - but how do you divide by a dial? you remove it - Illustrate multiplication: Label - splitting, sets, per each, symmetry - multiplying dials. Note that there is no inverse! no way to divide, except to remove dials. - multiplication patterns - Multiplication can be defined as one dial counting another dial. - Multiplication is the direct product. Division using dials - 3 divided by 5 can be done by adding a dial that is unit "1/5". Then count by 5's. This turns the next dial around to 1. Do this three times and you get 3. So it's a very straightforward way of getting "3/5". And it looks like sharing a pie 3 times, which it is. To get 3 pies divided you can also relate to the circle (for the fifths) and to tokens (for the 3 wholes). - 3 divided by 5 can be discussed in terms of two objects (3 apples divided by 5 people) whereas 3 out of 5 discusses in terms of one object, a whole (3 out of 5 slices). Note that 3 divided by 5 can lead to 3 out of 5 as its solution. Thus 3 divided by 5 builds on 3 out of 5. - Do we need to have 3 objects that we divide in 5? Or can we simply take the circle, divide it into 3, then divide it into 5? How do we then interpret 3/5? - 3 divided by 5 can be discussed as "dividing out" 3 apples to 5 people. You divide each apple to 5 people. Memory management - Copying - Dials as a number recording system - Memorize math facts, don't overburden the mind. 
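"Dividing up" as backwards counting on a dial can be written directly as repeated subtraction, which also produces the remainder for free; a minimal sketch:

    def divide_up(amount, unit):
        """Divide by repeatedly counting out one unit at a time,
        like turning a dial backwards."""
        count = 0
        while amount >= unit:
            amount -= unit
            count += 1
        return count, amount   # quotient and remainder

    print(divide_up(100, 5))   # (20, 0): twenty 5s in 100
    print(divide_up(17, 5))    # (3, 2): the remainder is what can't be shared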
- Math facts are meant to be like old friends - Levels of memory - human brain - Levels of memory - computer, artifacts - Reliability of memory - Obsolescence of memory artifacts, software, hardware, systems - example Windows 95 backup QIC - Exercise: Memorize math facts. Motivate yourself with math fact racing. - Writing (a system) is an aid to memory (but creates a new system, the context of which must be tracked and remembered) - Memory relates to the quality of signs, what is memorable - symbol=index - Clock, telling time - 10 + 4 = 2 - Giving change doesn't affect the total quantity, just rebalances the amount and the unit. - long division, 3 dimes = 30 pennies renaming Division and remainder - division with remainder = mixed fraction - introduce fractions as remainders - In division, we can drop the remainder, or not. - long division, divide out, relate to money, get change, remainder - Decimals are remainders 16/10 = 1 6/10 where the latter is 6 divided by 10, in other words, .6 - division by 10 (decimal) relates remainder and quotient Single units or multiple units - switch between single units and multiple units Single units and multiple units - Present your answer with multiple units. - Group by the largest units to present. Multiple units help people who lack math intuition, they allow for various degrees of mathematical intuition. - Single units help keep the answer exact, intact, we don't lose the minor units. - Convert to the same unit to calculate. - List in order the directions how to get somewhere, so that one can at least remember the first few steps. - Being thoughtful in keeping smaller amounts: paying so that it's convenient to give change - making subtraction easy - Exercises: give money using fewest number of bills and coins, give money so that the fewest number of bills and coins are returned as change - Time is written in a variety of formats around the world. (Nonordered multiple units) - Time formats can be read as a whole, or as in pieces, regardless of how they are disbursed. - Fractions of time: hours, minutes, seconds, months. - Subtraction - if you don't have to carry, then you can do it in your head, and you can do it from the left or the right or the middle. - "Irregular" counting systems, note the variety - Roman numerals - French numbers like 80+ - linguistic evidence - Suppose the sun went backward for a bit. How would that affect what we mean by day, hour, etc.? Stitching together (Multiple units) Ordered multiple units - Multiple units like 24hours, 60 minutes, 60 seconds or cups, tablespoons - build a tower from ones, tens, hundreds, thousands - Decimal numbers are multiple units read right to left, and single units read left to right. - decimals - well formed, easy comparison - regular (digits) all less than 10 - dollars, dimes, pennies, staples, confetti - Multiple units are the basis for: Rounding, addition with regrouping, subtraction with regrouping. - Multiple units allow for smaller amounts. - decimal numbers - multiple units, right to left, single units (ones) left to right - decimal system multiple units allows for approximation - Because we carry, you have to start from the right, if you are a poor man, nobody will borrow from you. Otherwise, you could start from the left or the right or the middle. - Abacus and Counting devices - Compare slots (units) with pigeon hole principle. 
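The "fewest bills and coins" exercise above is the classic greedy algorithm over multiple units; a sketch with US-style denominations expressed in cents:

    def make_change(cents, denominations=(2000, 1000, 500, 100, 25, 10, 5, 1)):
        """Express an amount in multiple units, largest first (greedy)."""
        result = {}
        for d in denominations:
            count, cents = divmod(cents, d)
            if count:
                result[d] = count
        return result

    print(make_change(3765))
    # {2000: 1, 1000: 1, 500: 1, 100: 2, 25: 2, 10: 1, 5: 1}

Grouping by the largest units first is exactly the "present your answer with multiple units" advice: the answer stays exact, but the big units carry the intuition.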
Different names for the same thing (see also ambiguity)
- Different ways of writing the same number: 37.50 cents, 37.5 dimes, 37 1/2, 37 r 1, 75/2. All mean the same but are used in different ways.

Max number on the dial
- Decimal system: 9 is the most allowed for any unit, therefore the largest units dictate the comparisons
- Truncating is discarding units when there are multiple units.

Avoiding large units
- 99 cents - the largest values for the units, without using a larger unit
- Multiple units allow for counterexamples - a remainder shows that a number is not evenly divisible.
- Rounding is expressing in terms of the largest unit when there are multiple units.
- Round 79.99 to 80 etc., then add prices
- Why choose decimals, why fractions? Decimal style vs. fraction style for problems like 10 / 20 / 30
- Rounding = getting rid of units (when there are multiple units)
- Multiple units (show answer), single units (calculate)

Integrating multiple units
- Counting stacks of bills - need to be integrated

Dealing with a problem from one direction in a way so that you don't have to go back
- Start adding (multiple units) with the loose carrots because you can bag them, crate the bags
- Systemic counting, organized counting
- Reading from left or from right?
- A dial can be thought of as a unit but also as an action. As an action it becomes the set of possible values, thus a unit of multiplication.
- Multiplication can be thought of as adding dials between the unit dial on the left and the decimal point on the right, thus separating them. If the dial is 0,...,N-1 then we set the amount to 0 and we are multiplying by N.
- Division can likewise be thought of as adding dials between the unit dial on the right and the decimal point on the left.
- We can have two kinds of dial colors to distinguish multiplication and division.
- Multiplication and division dials can be rearranged and can be cancelled.
- Adding or removing such a dial can also be thought of as multiplying or dividing.
- The dials can be in different amounts.
- One dial can be broken up into two dials, or two can be joined as one.
- If the dial has a non-zero number, then there is also addition.

A scale of dials

Keep adding a dial
- Do the numbers ever end? Is there a largest number? 999,999,999,999. Then add one. So long as you can add a dial.
- Parking spot - every family gets one of them
- Reserved parking is given by zero, a place marker

Need for commas
- But this is only helpful up to seven or eight commas. After that we need line breaks, then page breaks, then books, then bookshelves, then rooms, then floors, etc.
- Perhaps this is why in large libraries you're not allowed to reshelve books. Also, they aren't able to check, so a book misshelved is a book lost.
- Large numbers: Say forward, read it backward, by 3s.
- Two ways of looking at it: long division dividing 350 million may have intermediate remainder 20, which means 20 millions and stands for 20,000,000 ones.
- Numbers have first and last names. At home they call themselves by the same first name: 1-2-3, baby-mommy-daddy
- When you multiply you can reorganize, as in 1/3 x 3,000,000
- Think of multiplication variously in terms of units, actions, as in 4 x 5 millions
- Wallet for money (maximum of 9 for each slot), up to 9 ones, 9 tens, 9 hundreds, 9 thousands and so on.

Remainders
- Division algorithm
- Justification for powers of 10: why is it that we can write down any number this way? Division algorithm?
- Multiply by 10 to shift units - dividing by 10 implies multiplying by 1/10 so division and multiplication can be combined - Show doubling of generations - shift by binary units - Fractal - shift unit - addition rule a**m a**n = a**(m+n) - zeros = scoot overs, pushies - We use zero so we don't confuse ourselves. - .8 x 80 decimal points move vs. zeroes added - folding paper = multiplying - Such a system of units is uniform in its conversions. - Such a system limits the size of any amount, for example, to less than 10. - binary numbers: switches, or: cups, pints, quarts, half-gallons, gallons - bits and bytes - binary multiplication: cups to gallons, whole notes to sixteenth notes - Addition formulas - extending the domain: 2**(x+y) = (2**x)(2**y) - Multiplication/division and addition/subtraction are groups and linked by the addition formula (N**X)(N**Y) = N**(X+Y). Adding the number of dials/units makes for multiplying the amounts/values of the dials. - "hundreds" family marries "millions" family equals "hundred millions" family - You can choose which dial you want to be the unit - for example, the millions - and you can ignore the other dials - Coins are spaced logarithmically, and in EU evenly .01,.02,.05,.10,.20,.50,1,2,5,10,20,50,100,200,500 whereas in the US unevenly .25 quarter and $20 but no $2 bill, no 50 cent. - Logarithmic scales are based on multiplying evenly - Numbers to memorize: such as the doubling sequence: 1, 2, 4, 8, 16, 32, ... , 1024 Name system for units - Roman numerals is like currency, bills, banknotes, each has a name. - Need to come up with a system of names - and even the decimal number system has limitations - we run out of names, at least, in theory Orders of magnitude - Show up to 45 orders of magnitude in three rows, such as size of electron/quark 10**-18 meter to ant 10**-3 to distance to Saturn 10**12 to diameter of known universe 10**27. - Model all different systems in life. How many are there? Make use of the different units, dimensions. Compare the units and have rate problems. Use the three rows for the dials. Orders of magnitude - a row of units, objects as units (sun, earth, dime, dollar) as labels for the dials - listed with pertinent facts- a personal database of facts - Degrees of imperfection - and multiple reasons - Earth is not perfect sphere; also Earth has Mt.Everest, sea gorges; even water surface is not even; tidal effects - Fortunately, forces of nature are almost intentionally spaced out to be of different orders of magnitude. - Orders of magnitude: "Penny wise and pound foolish" - Inflating errors: Get a calculator to create errors in various ways. - Racial caste system - and ultimately must be visible - Orders of Magnitude - Nikos Salingaros's Theory of Architecture - Orders of magnitude in various dimensions (relate with grid) - Scientific notation Compare binary and decimal - 2**10 is roughly 10**3 - kilobytes, megabytes, gigabytes, terabytes - Square roots, geometric mean (2**5 = 32 is halfway between 1 and 1,000) Two rows of dials Comparing two rows - Show how 2**10 is roughly 10**3, and then 2**20 is roughly 10**6, and 2**30 is roughly 10**9, and so on, and the story of the inventors of checkers and chess. Can use two rows to compare binary and decimal. - Show how exponential growth of 1.06 becomes doubling. Compare also with Pascal's triangle (1+x)**N. - Illustrate doubling problems - Adam and Eve, Manhattan - and how they break down. 
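Two of the notes above, that 2**10 is roughly 10**3 and that adding exponents multiplies values, can be checked in a few lines:

    # 2**10 ~ 10**3, so every 10 binary dials ~ every 3 decimal dials
    for k in (10, 20, 30, 40):
        print(k, 2 ** k, 10 ** (3 * k // 10), 2 ** k / 10 ** (3 * k // 10))

    # Addition formula: (N**x) * (N**y) == N**(x + y)
    N, x, y = 10, 4, 5
    assert N ** x * N ** y == N ** (x + y)   # adding dials multiplies amounts

The printed ratios drift slowly upward (1.024, 1.049, 1.074, ...), which is why "kilo", "mega" and "giga" work as binary approximations but the story of the chessboard doubling eventually breaks the analogy.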
Lining up rows - setting the main unit of reference
- When all are in line (a column of dogs) it makes it easy to imagine, thus the value of units.
- Lining up the decimal points lines up all the units
- If you don't line up the numbers, then you can get confused.

Units can be actions
- How do you change units? By redeciding which dial is the unit
- Word problems convert fractions (units), what we're counting by
- Fraction is a grouping instruction (translation): 600 miles / hour, for 4 hours. "Per" means substitution?

Three rows of dials

Computation on three rows
- Three rows so that we could do problems and get answers.
- You can multiply by rebalancing the first and second rows. (What is the natural way to do canceling? Actions on the number line?)
- Subtraction involves two or three different roles: brought, ate, how many left
- Prove that .999999.... = 1.000000 (equivalence classes?) (subtract?)

Choosing a sequence of steps
- Choose how 5,000 x 50,000 can be broken down in different ways, such as 5 x 50 x 1,000 x 1,000
- Adding 256 + 256 in your head is easier to do from left to right than from right to left because the intermediate response (500) is simpler, because there is nothing to carry
- Show each step, write it down!
- In math there are typically several ways to get the right answer, and there is an advantage to solving in the way that keeps the intermediate answer simplest - though other times there is an advantage to doing things in a general way that doesn't require case-by-case thinking

Word problem rates and proportions
- Speed = distance / time. A way of counting distance by time. I'll be there in half an hour.

Synchronizing: Use the three rows of dials, like audio tracks
- Consciously tracking a recurring activity
- Joining in: clapping hands, tapping feet, marking a beat
- Synchronizing, thus amplifying, channeling
- Synchronizing and compatible patterns
- Synchronized counting = "per" means "counting by": 600 miles per hour

Counting fosters intuition about systems and spirit
- How do we foster intuition about spirit? We look inward by reflecting on our counting. We do this by division, which is "counting by"; for example, dividing by 1/2 is counting by halves.
- How do we foster intuition about systems? We use single units to calculate, but we present answers in multiple units (such as running a marathon in 2 hours 10 minutes 20 seconds). This lets us communicate with people to allow for different intuitions, and to foster our intuitions. We can truncate, round, approximate. We stitch together our intuition by addition. When the units are different, addition means list, and when units are the same, addition means combine. This leads to counting, which is a very sophisticated behavior, as it relates qualitatively different types of numbers into one system. Multiplying is a form of counting that makes this evident. We foster intuition about systems by counting, by getting practice in counting.

Perhaps: Counting is extending the domain for the addition formula. Extending the domain is stitching together our various faculties for sizing things up. Counting stitches them together by way of the addition formula, which is the heart of counting. Perhaps the initial domain is the divisions and the operations +0, +1, +2, +3, thus whether, what, how, why. This is transformed from circular (mod 8) to linear counting by way of the addition formulas?
- System has us present our thoughts to multiple audiences - to others, externally - whereas spirit has us present our thoughts to ourselves, internally. Thus system manages "breaking down" of answers, models, uses multiple units. Whereas spirit holds models, answers true, keeps true to them, internally, sees them through.
http://www.selflearners.net/Math/Dials
Chapter 10 Conformal Mapping

The terminology "conformal mapping" should have a familiar sound. In 1569 the Flemish cartographer Gerardus Mercator (1512-1594) devised a cylindrical map projection that preserves angles. The Mercator projection is still used today for world maps. Another map projection known to the ancient Greeks is the stereographic projection. It is also conformal (i.e., angle preserving), and we introduced it in Section 2.5 when we defined the Riemann sphere. In complex analysis a function preserves angles if and only if it is analytic or anti-analytic (i.e., the conjugate of an analytic function). A significant result, known as the Riemann mapping theorem, states that any simply connected domain (other than the entire complex plane) can be mapped conformally onto the unit disk.

10.1 Basic Properties of Conformal Mappings

Let f(z) be an analytic function in the domain D, and let z0 be a point in D. If f'(z0) ≠ 0, then we can express f(z) in the form f(z) = f(z0) + f'(z0)(z - z0) + η(z)(z - z0), where η(z) → 0 as z → z0. If z is near z0, then the transformation w = f(z) has the linear approximation S(z) = A + B(z - z0), where A = f(z0) and B = f'(z0). Because η(z) → 0 when z → z0, for points near z0 the transformation w = f(z) has an effect much like the linear mapping w = S(z). The effect of the linear mapping S is a rotation of the plane through the angle α = Arg f'(z0), followed by a magnification by the factor |f'(z0)|, followed by a rigid translation by the vector A - B z0. Consequently, the mapping w = S(z) preserves angles at the point z0. We now show that the mapping w = f(z) also preserves angles at z0.

For a smooth curve C that passes through the point z0, we use the notation C: z(t) = x(t) + i y(t), with z(t0) = z0. A vector T tangent to C at the point z0 is given by T = z'(t0), where the complex number z'(t0) is expressed as a vector. The angle of inclination of T with respect to the positive x axis is β = Arg z'(t0). The image of C under the mapping w = f(z) is the curve K in the w plane given by the formula K: w(t) = f(z(t)). We can use the chain rule to show that a vector tangent to K at the point w0 = f(z0) is given by w'(t0) = f'(z0) z'(t0). The angle of inclination of this tangent vector with respect to the positive u axis is γ = Arg f'(z0) + Arg z'(t0) = α + β. Therefore the effect of the transformation w = f(z) is to rotate the angle of inclination of the tangent vector at z0 through the angle α = Arg f'(z0) to obtain the angle of inclination of the tangent vector at w0. This situation is illustrated in Figure 10.1.

Figure 10.1 The tangents at the points z0 and w0, where f(z) is an analytic function and f'(z0) ≠ 0.

A mapping w = f(z) is said to be angle preserving, or conformal at z0, if it preserves angles between oriented curves in magnitude as well as in orientation. Theorem 10.1 shows where a mapping by an analytic function is conformal.

Theorem 10.1 (Conformal Mapping). Let f(z) be an analytic function in the domain D, and let z0 be a point in D. If f'(z0) ≠ 0, then f(z) is conformal at z0.

Figure 10.2 The analytic mapping w = f(z) is conformal at the point z0, where f'(z0) ≠ 0.

Example 10.1. Show that the mapping w = f(z) = cos z is conformal wherever sin z ≠ 0, and determine the angle of rotation given by α = Arg f'(z) at such points.

Solution. Because f'(z) = -sin z, we conclude that the mapping is conformal at all points except z = nπ, where n is an integer. At any other point the angle of rotation is given by α = Arg(-sin z).

Let f(z) be a nonconstant analytic function. If f'(z0) = 0, then z0 is called a critical point of f(z), and the mapping w = f(z) is not conformal at z0. The next result shows what happens at a critical point.

Theorem 10.2. Let f(z) be analytic at the point z0. If f'(z0) = 0, ..., f^(k-1)(z0) = 0 and f^(k)(z0) ≠ 0, then the mapping w = f(z) magnifies angles at the vertex z0 by the factor k, as shown in Figure 10.3.

Figure 10.3 The analytic mapping w = f(z) at a point z0 where the derivatives through order k-1 vanish and f^(k)(z0) ≠ 0.

Example 10.2. Show that the mapping w = f(z) = z² maps the unit square S = {x + iy : 0 < x < 1, 0 < y < 1} onto the region in the upper half-plane Im(w) > 0 which lies under the parabolas u = 1 - v²/4 and u = v²/4 - 1 (the images of the edges x = 1 and y = 1, respectively), as shown in Figure 10.4.

Figure 10.4 The mapping w = z².

Solution.
The derivative is f'(z) = 2z, and we conclude that the mapping w = z² is conformal for all z ≠ 0. Note that the right angles at the vertices z1 = 1, z2 = 1 + i, and z3 = i are mapped onto right angles at the vertices w1 = 1, w2 = 2i, and w3 = -1, respectively. At the point z0 = 0, we have f'(0) = 0 and f''(0) ≠ 0. Hence angles at the vertex z0 = 0 are magnified by the factor k = 2. In particular, the right angle at z0 = 0 is mapped onto the straight angle at w0 = 0.

Another property of a conformal mapping is obtained by considering the modulus of f'(z0). If z is near z0, we can use the equation f(z) ≈ f(z0) + f'(z0)(z - z0) and neglect the term η(z)(z - z0). We then have the approximation

w - w0 ≈ f'(z0)(z - z0).   (10-9)

From Equation (10-9), the distance |w - w0| between the images of the points z and z0 is given approximately by |f'(z0)| |z - z0|. Therefore we say that the transformation w = f(z) changes small distances near z0 by the scale factor |f'(z0)|. For example, the scale factor of the transformation w = f(z) = z² near the point z0 = 1 + i is |f'(1 + i)| = |2(1 + i)| = 2√2.

We also need to say a few things about the inverse transformation z = g(w) of a conformal mapping near a point z0 where f'(z0) ≠ 0. A complete justification of the following assertions relies on theorems studied in advanced calculus. (See, for instance, R. Creighton Buck, Advanced Calculus, 3rd ed., New York: McGraw-Hill, 1978, pp. 358-361.) We express the mapping w = f(z) in the coordinate form

u = u(x, y), v = v(x, y).   (10-10)

The mapping in Equations (10-10) represents a transformation from the xy plane into the uv plane, and the Jacobian determinant, J(x, y), is defined by

J(x, y) = u_x(x, y) v_y(x, y) - v_x(x, y) u_y(x, y).   (10-11)

The transformation in Equations (10-10) has a local inverse, provided J(x, y) ≠ 0. Expanding Equation (10-11) and using the Cauchy-Riemann equations, we obtain J(x0, y0) = u_x²(x0, y0) + v_x²(x0, y0) = |f'(z0)|² ≠ 0, so the Cauchy-Riemann equations and Equation (10-11) imply that a local inverse exists in a neighborhood of the point w0. The derivative of g(w) at w0 is given by the familiar expression g'(w0) = 1 / f'(z0).

Exercises for Section 10.1. Basic Properties of Conformal Mappings

The next module is the Möbius (bilinear) transformation.

This material is coordinated with our book Complex Analysis for Mathematics and Engineering. (c) 2012 John H. Mathews, Russell W. Howell
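The rotation and scaling described in this section can be verified numerically. The sketch below takes f(z) = z², pushes two short tangent directions through the map at z0 = 1 + i, and confirms that each is rotated by Arg f'(z0) ≈ 0.7854 while lengths scale by |f'(z0)| = 2√2 ≈ 2.8284.

    import cmath

    f = lambda z: z * z
    fprime = lambda z: 2 * z

    z0 = 1 + 1j
    h = 1e-6
    for direction in (1, cmath.exp(1j * 0.7)):        # two tangent directions
        image = (f(z0 + h * direction) - f(z0)) / h   # pushed-forward tangent
        print(cmath.phase(image) - cmath.phase(direction),  # rotation ~ Arg f'(z0)
              abs(image))                                   # scale ~ |f'(z0)|
    print(cmath.phase(fprime(z0)), abs(fprime(z0)))   # compare: 0.7854, 2.8284

Both directions report the same rotation and scale factor, which is precisely the statement that the angle between any two curves through z0 is preserved.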
http://math.fullerton.edu/mathews/c2003/ConformalMappingMod.html
Law of sines

The law of sines states that

a / sin A = b / sin B = c / sin C = D,

where a, b, and c are the lengths of the sides of a triangle, and A, B, and C are the opposite angles (see the figure to the right), and D is the diameter of the triangle's circumcircle. When the last part of the equation is not used, sometimes the law is stated using the reciprocal:

sin A / a = sin B / b = sin C / c.

The law of sines can be used to compute the remaining sides of a triangle when two angles and a side are known, a technique known as triangulation. However, calculating this may result in numerical error if an angle is close to 90 degrees. It can also be used when two sides and one of the non-enclosed angles are known. In some such cases, the formula gives two possible values for the enclosed angle, leading to an ambiguous case.

The law of sines is one of two trigonometric equations commonly applied to find lengths and angles in a general triangle, with the other being the law of cosines.

There are three cases to consider in proving the law of sines. The first is when all angles of the triangle are acute. The second is when one angle is a right angle. The third is when one angle is obtuse.

For acute triangles

We make a triangle with the sides a, b, and c, and angles A, B, and C. Then we draw the altitude from vertex B to side b; by definition it divides the original triangle into two right angle triangles, ABR and RBC, where R is the foot of the altitude. Mark this line h1. Using the definition of sine, we see that for angle A on the right angle triangle ABR and for C on RBC we have:

sin A = h1 / c and sin C = h1 / a.

Solving for h1:

h1 = c sin A and h1 = a sin C.

Equating h1 in both expressions:

c sin A = a sin C, and therefore a / sin A = c / sin C.

Doing the same thing from angle A to side a, we call the altitude h2 and use the two right angle triangles ABR' and AR'C:

sin B = h2 / c and sin C = h2 / b.

Solving for h2:

h2 = c sin B and h2 = b sin C.

Equating the terms in both expressions above we have:

c sin B = b sin C, and therefore b / sin B = c / sin C.

For right angle triangles

We make a triangle with the sides a, b, and c, and angles A, B, and C where C is a right angle. Since we already have a right angle triangle we can use the definition of sine:

sin A = a / c and sin B = b / c.

Solving for c:

c = a / sin A = b / sin B.

For the remaining angle C we need to remember that it is a right angle and sin C = 1 in this case. Therefore we can rewrite c = c · 1 as c = c · sin C. The above is equivalent to:

c = c / sin C.

Equating c in the equations above, we again have:

a / sin A = b / sin B = c / sin C.

For obtuse triangles

We make a triangle with the sides a, b, and c, and angles A, B, and C where A is an obtuse angle. In this case, if we try to draw an altitude from any angle other than A, the point where this line touches the base of the triangle ABC will lie outside any of the lines a, b or c. We draw the altitude from angle B, calling it h1, and create the two extended right triangles RBA' and RBC. From the definition of sine we again have:

sin(180° - A) = h1 / c and sin C = h1 / a.

We use the identity sin(180° - A) = sin A to express the first equation in terms of sin A. By definition we then have:

h1 = c sin A and h1 = a sin C, and therefore a / sin A = c / sin C.

We now draw an altitude from A, calling it h2 and forming two right triangles ABR' and AR'C. From this we straightforwardly get:

sin B = h2 / c and sin C = h2 / b, and therefore b / sin B = c / sin C.

This proves the theorem in all cases.

The ambiguous case

When using the law of sines to solve triangles, there exists an ambiguous case where two separate triangles can be constructed (i.e., there are two different possible solutions to the triangle). In the case shown below they are triangles ABC and AB'C'.

Given a general triangle, the following conditions would need to be fulfilled for the case to be ambiguous:

- The only information known about the triangle is the angle A and the sides a and c
- The angle A is acute (i.e., A < 90°).
- The side a is shorter than the side c (i.e., a < c).
- The side a is longer than the altitude h from angle B, where h = c sin A (i.e., a > h).
Given that all of the above premises are true, either of the angles C or C' may produce a valid triangle; meaning, both of the following are true:

C = arcsin(c sin A / a) or C' = 180° - arcsin(c sin A / a).

From there we can find the corresponding B and b or B' and b' if required, where b is the side bounded by angles A and C and b' is bounded by A and C'. Without further information it is impossible to decide which is the triangle being asked for.

The following are examples of how to solve a problem using the law of sines:

Given: side a = 20, side c = 24, and angle C = 40°. Using the law of sines, we conclude that

sin A / 20 = sin 40° / 24, so A = arcsin(20 sin 40° / 24) ≈ 32.39°.

Or another example of how to solve a problem using the law of sines: If two sides of the triangle are equal to x and the length of the third side, the chord, is given as 100 feet and the angle C opposite the chord is given in degrees, then

A = B = (180° - C) / 2 = 90° - C/2,

and by the law of sines, x / sin A = 100 / sin C, so

x = 100 sin(90° - C/2) / sin C.

Relation to the circumcircle

In the identity

a / sin A = b / sin B = c / sin C = 2R,

the common value of the three fractions is the diameter 2R of the triangle's circumcircle, and it equals

abc / (2S) = abc / (2 √(s(s - a)(s - b)(s - c))),

where S is the area of the triangle and s is the semiperimeter, s = (a + b + c) / 2. The second equality above is essentially Heron's formula.

In the spherical case, the formula is:

sin A / sin α = sin B / sin β = sin C / sin γ.

Here, α, β, and γ are the angles at the center of the sphere subtended by the three arcs of the spherical surface triangle a, b, and c, respectively. A, B, and C are the surface angles opposite their respective arcs. It is easy to see how for small spherical triangles, when the radius of the sphere is much greater than the sides of the triangle, this formula becomes the planar formula at the limit, since sin α ≈ α, and the same for sin β and sin γ.

In hyperbolic geometry when the curvature is -1, the law of sines becomes

sin A / sinh a = sin B / sinh b = sin C / sinh c.

In the special case when B is a right angle, one gets

sin C = sinh c / sinh b,

which is the analog of the formula in Euclidean geometry expressing the sine of an angle as the opposite side divided by the hypotenuse.

- See also hyperbolic triangle.

Define a generalized sine function, depending also on a real parameter K:

sin_K x = x - K x³/3! + K² x⁵/5! - ...

The law of sines in constant curvature K reads as

sin A / sin_K a = sin B / sin_K b = sin C / sin_K c.

By substituting K = 0, K = 1, and K = -1, one obtains respectively the Euclidean, spherical, and hyperbolic cases of the law of sines described above.

Let p_K(r) indicate the circumference of a circle of radius r in a space of constant curvature K. Then p_K(r) = 2π sin_K(r). Therefore the law of sines can also be expressed as:

sin A / p_K(a) = sin B / p_K(b) = sin C / p_K(c).

According to Ubiratàn D'Ambrosio and Helaine Selin, the spherical law of sines was discovered in the 10th century. It is variously attributed to al-Khujandi, Abul Wafa Bozjani, Nasir al-Din al-Tusi and Abu Nasr Mansur. Al-Jayyani's The book of unknown arcs of a sphere in the 11th century introduced the general law of sines. The plane law of sines was later described in the 13th century by Nasīr al-Dīn al-Tūsī. In his On the Sector Figure, he stated the law of sines for plane and spherical triangles, and provided proofs for this law.

According to Glen Van Brummelen, "The Law of Sines is really Regiomontanus's foundation for his solutions of right-angled triangles in Book IV, and these solutions are in turn the bases for his solutions of general triangles." Regiomontanus was a 15th-century German mathematician.

An equation with sines for tetrahedra

An equation involving sine functions and tetrahedra is as follows. For a tetrahedron with vertices O, A, B, C, it is true that

sin ∠OAB · sin ∠OBC · sin ∠OCA = sin ∠OAC · sin ∠OCB · sin ∠OBA.

One may view the two sides of this identity as corresponding to clockwise and counterclockwise orientations of the surface.
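Returning to the first worked example (a = 20, c = 24, C = 40°), the computation together with the ambiguous-case check is short in code; a sketch:

    import math

    # Given side a, side c, and angle C (degrees), solve for the remaining parts.
    a, c, C = 20.0, 24.0, 40.0
    sinA = a * math.sin(math.radians(C)) / c   # law of sines: sin A / a = sin C / c
    A1 = math.degrees(math.asin(sinA))         # one candidate angle
    A2 = 180.0 - A1                            # the ambiguous second candidate
    # A candidate is valid only if the three angles can still sum to 180 degrees.
    for A in (A1, A2):
        if A + C < 180.0:
            B = 180.0 - A - C
            b = a * math.sin(math.radians(B)) / math.sin(math.radians(A))
            print(f"A = {A:.2f}, B = {B:.2f}, b = {b:.2f}")

Here only the first candidate survives (A ≈ 32.39°), because A2 + C exceeds 180°; in a genuinely ambiguous configuration the loop would print two valid triangles.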
Putting any of the four vertices in the role of O yields four such identities, but in a sense at most three of them are independent: If the "clockwise" sides of three of them are multiplied and the product is inferred to be equal to the product of the "counterclockwise" sides of the same three identities, and then common factors are cancelled from both sides, the result is the fourth identity. One reason to be interested in this "independence" relation is this: It is widely known that three angles are the angles of some triangle if and only if their sum is a half-circle. What condition on 12 angles is necessary and sufficient for them to be the 12 angles of some tetrahedron? Clearly the sum of the angles of any side of the tetrahedron must be a half-circle. Since there are four such triangles, there are four such constraints on sums of angles, and the number of degrees of freedom is thereby reduced from 12 to 8. The four relations given by this sines law further reduce the number of degrees of freedom, not from 8 down to 4, but only from 8 down to 5, since the fourth constraint is not independent of the first three. Thus the space of all shapes of tetrahedra is 5-dimensional. - Half-side formula – for solving spherical triangles - Law of tangents - Mollweide's formula – for checking solutions of triangles - Solution of triangles - Coxeter, H. S. M. and Greitzer, S. L. Geometry Revisited. Washington, DC: Math. Assoc. Amer., pp. 1–3, 1967 - Russell, Robert A. "Generalized law of sines". Wolfram Mathworld. Retrieved 25 September 2011. - Katok, Svetlana (1992). Fuchsian groups. Chicago: University of Chicago Press. p. 22. ISBN 0-226-42583-5. - Sesiano just lists al-Wafa as a contributor. Sesiano, Jacques (2000) "Islamic mathematics" pp. 137— , page 157, in Selin, Helaine; D'Ambrosio, Ubiratan (2000), Mathematics Across Cultures: The History of Non-western Mathematics, Springer, ISBN 1-4020-0260-2 - O'Connor, John J.; Robertson, Edmund F., "Abu Abd Allah Muhammad ibn Muadh Al-Jayyani", MacTutor History of Mathematics archive, University of St Andrews. - Berggren, J. Lennart (2007). "Mathematics in Medieval Islam". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. p. 518. ISBN 978-0-691-11485-9. - Glen Van Brummelen (2009). "The mathematics of the heavens and the earth: the early history of trigonometry". Princeton University Press. p.259. ISBN 0-691-12973-8 - Hazewinkel, Michiel, ed. (2001), "Sine theorem", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - The Law of Sines at cut-the-knot - Degree of Curvature - Finding the Sine of 1 Degree
http://en.wikipedia.org/wiki/Law_of_sines
Flow measurement is the quantification of bulk fluid movement. Flow can be measured in a variety of ways. Positive-displacement flow meters accumulate a fixed volume of fluid and then count the number of times the volume is filled to measure flow. Other flow measurement methods rely on forces produced by the flowing stream as it overcomes a known constriction, to indirectly calculate flow. Flow may also be measured by measuring the velocity of fluid over a known area.

Units of measurement

Both gas and liquid flow can be measured in volumetric or mass flow rates, such as liters per second or kilograms per second. These measurements can be converted between one another if the material's density is known. The density of a liquid is almost independent of conditions; however, this is not the case for a gas, the density of which depends greatly upon pressure, temperature and, to a lesser extent, the gas composition.

When gases or liquids are transferred for their energy content, as in the sale of natural gas, the flow rate may also be expressed in terms of energy flow, such as GJ/hour or BTU/day. The energy flow rate is the volume flow rate multiplied by the energy content per unit volume, or the mass flow rate multiplied by the energy content per unit mass. Where an accurate energy flow rate is desired, most flow meters will be used to calculate the volume or mass flow rate, which is then converted to the energy flow rate by the use of a flow computer.

Gases are compressible and change volume when placed under pressure, heated, or cooled. A volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. References will be made to "actual" flow rate through a meter and "standard" or "base" flow rate through a meter, with units such as acm/h (actual cubic meters per hour), kscm/h (thousand standard cubic meters per hour), LFM (linear feet per minute), or MSCFD (million standard cubic feet per day).

For liquids, various units are used depending upon the application and industry, but might include gallons (U.S. liquid or imperial) per minute, liters per second, bushels per minute or, when describing river flows, cumecs (cubic metres per second) or acre-feet per day. In oceanography a common unit to measure volume transport (the volume of water transported by a current, for example) is the sverdrup (Sv), equivalent to 10⁶ m³/s.
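To make the actual-versus-standard distinction concrete, the following Python sketch (not from the original text) converts an actual gas flow to standard conditions and then to mass flow, assuming ideal-gas behavior; the pressures, temperatures, flow value, and density are made-up examples.

# Convert a measured "actual" gas flow to standard conditions and to mass flow,
# assuming ideal-gas behavior (density proportional to P/T).

def actual_to_standard(q_actual_m3h, p_actual_kpa, t_actual_k,
                       p_std_kpa=101.325, t_std_k=288.15):
    """Standard volumetric flow from actual flow (ideal-gas correction)."""
    return q_actual_m3h * (p_actual_kpa / p_std_kpa) * (t_std_k / t_actual_k)

def volumetric_to_mass(q_m3h, density_kg_m3):
    """Mass flow rate from volumetric flow rate and density."""
    return q_m3h * density_kg_m3

# Made-up example: 500 acm/h of gas metered at 600 kPa and 320 K.
q_std = actual_to_standard(500.0, 600.0, 320.0)
print(f"standard flow ~ {q_std:.0f} scm/h")

# Density at standard conditions (made-up value, kg/m^3).
print(f"mass flow ~ {volumetric_to_mass(q_std, 0.8):.0f} kg/h")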
Mechanical flow meters

A bucket and a stopwatch is an analogy for the operation of a positive-displacement meter. The stopwatch is started when the flow starts, and stopped when the bucket reaches its limit. The volume divided by the time gives the flow rate. For continuous measurements, we need a system of continually filling and emptying buckets to divide the flow without letting it out of the pipe. These continuously forming and collapsing volumetric displacements may take the form of pistons reciprocating in cylinders, gear teeth mating against the internal wall of a meter, or a progressive cavity created by rotating oval gears or a helical screw.

Piston meter / rotary piston

Because they are used for domestic water measurement, piston meters, also known as rotary piston or semi-positive displacement meters, are the most common flow measurement devices in the UK and are used for almost all meter sizes up to and including 40 mm (1½ʺ). The piston meter operates on the principle of a piston rotating within a chamber of known volume. For each rotation, an amount of water passes through the piston chamber. Through a gear mechanism and, sometimes, a magnetic drive, a needle dial and odometer-type display are advanced.

Gear meter

Oval gear meter

An oval gear meter is a positive-displacement meter that uses two or more oblong gears configured to rotate at right angles to one another, forming a T shape. Such a meter has two sides, which can be called A and B. No fluid passes through the center of the meter, where the teeth of the two gears always mesh. On one side of the meter (A), the teeth of the gears close off the fluid flow because the elongated gear on side A is protruding into the measurement chamber, while on the other side of the meter (B), a cavity holds a fixed volume of fluid in a measurement chamber. As the fluid pushes the gears, it rotates them, allowing the fluid in the measurement chamber on side B to be released into the outlet port. Meanwhile, fluid entering the inlet port will be driven into the measurement chamber of side A, which is now open. The teeth on side B will now close off the fluid from entering side B. This cycle continues as the gears rotate and fluid is metered through alternating measurement chambers. Permanent magnets in the rotating gears can transmit a signal to an electric reed switch or current transducer for flow measurement.

Helical gear

Helical gear flow meters get their name from the shape of their gears or rotors, which resemble a helix, a spiral-shaped structure. As the fluid flows through the meter, it enters the compartments in the rotors, causing the rotors to rotate. Flow rate is calculated from the speed of rotation.

Nutating disk meter

This is the most commonly used measurement system for measuring water supply. The fluid, most commonly water, enters one side of the meter and strikes the nutating disk, which is eccentrically mounted. The disk must then "wobble" or nutate about the vertical axis, since the bottom and the top of the disk remain in contact with the mounting chamber. A partition separates the inlet and outlet chambers. As the disk nutates, it gives direct indication of the volume of liquid that has passed through the meter, as volumetric flow is indicated by a gearing and register arrangement connected to the disk. It is reliable for flow measurements to within 1 percent.

Variable area meter

The variable area (VA) meter, also commonly called a rotameter, consists of a tapered tube, typically made of glass, with a float inside that is pushed up by fluid flow and pulled down by gravity. As flow rate increases, greater viscous and pressure forces on the float cause it to rise until it becomes stationary at a location in the tube that is wide enough for the forces to balance. Floats are made in many different shapes, with spheres and ellipsoids being the most common. Some are designed to spin visibly in the fluid stream to aid the user in determining whether the float is stuck or not. Rotameters are available for a wide range of liquids but are most commonly used with water or air. They can be made to reliably measure flow to within 1% accuracy.
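All of the positive-displacement designs above ultimately reduce to counting fixed volumes per unit time. The following Python sketch (illustrative only; the K-factor and pulse count are invented values) shows how the pulse output of, say, a magnet-equipped oval gear meter is typically turned into a flow rate.

# Positive-displacement meters emit a pulse per fixed volume of fluid.
# Flow rate = pulses per second / K-factor (pulses per litre).

K_FACTOR = 450.0        # pulses per litre (hypothetical meter constant)

def flow_rate_lpm(pulse_count, interval_s):
    """Average flow in litres per minute over a counting interval."""
    litres = pulse_count / K_FACTOR
    return litres / interval_s * 60.0

# Made-up reading: 1,350 pulses counted in 10 seconds.
print(f"{flow_rate_lpm(1350, 10.0):.1f} L/min")   # -> 18.0 L/min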
Turbine flow meter

The turbine flow meter (better described as an axial turbine) translates the mechanical action of a turbine rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The turbine tends to have all the flow traveling around it. The turbine wheel is set in the path of a fluid stream. The flowing fluid impinges on the turbine blades, imparting a force to the blade surface and setting the rotor in motion. When a steady rotation speed has been reached, the speed is proportional to fluid velocity.

Turbine flow meters are used for the measurement of natural gas and liquid flow. Turbine meters are less accurate than displacement and jet meters at low flow rates, but the measuring element does not occupy or severely restrict the entire path of flow. The flow direction is generally straight through the meter, allowing for higher flow rates and less pressure loss than displacement-type meters. They are the meter of choice for large commercial users, fire protection, and as master meters for the water distribution system. Strainers are generally required to be installed in front of the meter to protect the measuring element from gravel or other debris that could enter the water distribution system. Turbine meters are generally available for 1½ʺ to 12ʺ or higher pipe sizes. Turbine meter bodies are commonly made of bronze, cast iron, or ductile iron. Internal turbine elements can be plastic or non-corrosive metal alloys. Turbine meters are accurate in normal working conditions to 0.2 L/s, but their readings are greatly affected by disturbed flow profiles.

Fire meters are a specialized type of turbine meter with approvals for the high flow rates required in fire protection systems. They are often approved by Underwriters Laboratories (UL), Factory Mutual (FM), or similar authorities for use in fire protection. Portable turbine meters may be temporarily installed to measure water used from a fire hydrant. These meters are normally made of aluminum to be lightweight, and are usually of 3ʺ capacity. Water utilities often require them for measurement of water used in construction, pool filling, or where a permanent meter is not yet installed.

Woltmann meter

The Woltmann meter comprises a rotor with helical blades inserted axially in the flow, much like a ducted fan; it can be considered a type of turbine flow meter. Woltmann meters are commonly referred to as helix meters, and are popular at larger sizes.

Single jet meter

A single jet meter consists of a simple impeller with radial vanes, impinged upon by a single jet of fluid.

Paddle wheel meter

This is similar to the single jet meter, except that the impeller is small with respect to the width of the pipe, and projects only partially into the flow, like the paddle wheel on a Mississippi riverboat.

Multiple jet meter

A multiple jet or multijet meter is a velocity-type meter which has an impeller that rotates horizontally on a vertical shaft. The impeller element is in a housing in which multiple inlet ports direct the fluid flow at the impeller, causing it to rotate in a specific direction in proportion to the flow velocity. This meter works mechanically much like a single jet meter, except that the ports direct the flow at the impeller equally from several points around the circumference of the element, not just one point; this minimizes uneven wear on the impeller and its shaft.

Pelton wheel

The Pelton wheel turbine (better described as a radial turbine) translates the mechanical action of the Pelton wheel rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The Pelton wheel tends to have all the flow traveling around it, with the inlet flow focused on the blades by a jet.
The original Pelton wheels were used for the generation of power, and consisted of a radial flow turbine with "reaction cups" which not only move with the force of the water on their faces but return the flow in the opposite direction, using this change of fluid direction to further increase the efficiency of the turbine.

Current meter

Flow through a large penstock, such as used at a hydroelectric power plant, can be measured by averaging the flow velocity over the entire area. Propeller-type current meters (similar to the purely mechanical Ekman current meter, but now with electronic data acquisition) can be traversed over the area of the penstock and the velocities averaged to calculate total flow. This may be on the order of hundreds of cubic meters per second. The flow must be kept steady during the traverse of the current meters. Methods for testing hydroelectric turbines are given in IEC standard 41. Such flow measurements are often commercially important when testing the efficiency of large turbines.

Pressure-based meters

There are several types of flow meter that rely on Bernoulli's principle, either by measuring the differential pressure within a constriction, or by measuring static and stagnation pressures to derive the dynamic pressure.

Venturi meter

A Venturi meter constricts the flow in some fashion, and pressure sensors measure the differential pressure before and within the constriction. This method is widely used to measure flow rate in the transmission of gas through pipelines, and has been used since Roman Empire times. The coefficient of discharge of a Venturi meter ranges from 0.93 to 0.97. The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century.

Orifice plate

An orifice plate is a plate with a hole through it, placed in the flow; it constricts the flow, and measuring the pressure differential across the constriction gives the flow rate. It is basically a crude form of Venturi meter, but with higher energy losses. There are three types of orifice plate: concentric, eccentric, and segmental.

Dall tube

The Dall tube is a shortened version of a Venturi meter, with a lower pressure drop than an orifice plate. As with these flow meters, the flow rate in a Dall tube is determined by measuring the pressure drop caused by restriction in the conduit. The pressure differential is typically measured using diaphragm pressure transducers with digital readout. Since these meters have significantly lower permanent pressure losses than orifice meters, Dall tubes are widely used for measuring the flow rate of large pipeworks.

Pitot tube

A Pitot tube is a pressure-measuring instrument used to measure fluid flow velocity by determining the stagnation pressure. Bernoulli's equation is used to calculate the dynamic pressure and hence fluid velocity. Also see Air flow meter.
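Both the Pitot tube and the constriction-type meters above reduce to Bernoulli's equation. The following Python sketch (illustrative; the pressure readings and pipe geometry are invented) computes a Pitot velocity from dynamic pressure and a Venturi volumetric flow from the differential pressure across the constriction, using the classic textbook relations.

import math

RHO_WATER = 998.0  # kg/m^3 at roughly 20 degrees C

def pitot_velocity(dp_pa, rho=RHO_WATER):
    """Velocity from dynamic pressure: v = sqrt(2*dp/rho) (Bernoulli)."""
    return math.sqrt(2.0 * dp_pa / rho)

def venturi_flow(dp_pa, d_pipe_m, d_throat_m, cd=0.95, rho=RHO_WATER):
    """Volumetric flow (m^3/s) through a Venturi from the measured dp.

    Classic relation: Q = Cd * A_throat * sqrt(2*dp / (rho*(1 - beta**4))),
    with beta the throat-to-pipe diameter ratio; Cd = 0.95 sits inside the
    0.93-0.97 discharge-coefficient range quoted above.
    """
    beta = d_throat_m / d_pipe_m
    a_throat = math.pi * d_throat_m ** 2 / 4.0
    return cd * a_throat * math.sqrt(2.0 * dp_pa / (rho * (1.0 - beta ** 4)))

print(f"pitot:   {pitot_velocity(2000.0):.2f} m/s")                # made-up 2 kPa
print(f"venturi: {venturi_flow(5000.0, 0.10, 0.05):.4f} m^3/s")    # made-up dp, sizes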
Multi-hole pressure probe

Multi-hole pressure probes (also called impact probes) extend the theory of the Pitot tube to more than one dimension. A typical impact probe consists of three or more holes (depending on the type of probe) on the measuring tip, arranged in a specific pattern. More holes allow the instrument to measure the direction of the flow velocity in addition to its magnitude (after appropriate calibration). Three holes arranged in a line allow the pressure probes to measure the velocity vector in two dimensions. The introduction of more holes, e.g. five holes arranged in a "plus" formation, allows measurement of the three-dimensional velocity vector. Also see Annubar.

Cone meters

Cone meters are a newer differential pressure metering device, first launched in 1985 by McCrometer in Hemet, CA. While working on the same basic principles as Venturi and orifice type DP meters, cone meters don't require the same upstream and downstream piping. The cone acts as a conditioning device as well as a differential pressure producer. Upstream requirements are between 0 and 5 diameters, compared to up to 44 diameters for an orifice plate or 22 diameters for a Venturi. Because cone meters are generally of welded construction, it is recommended that they always be calibrated prior to service: the heat effects of welding inevitably cause distortions and other effects that prevent tabular data on discharge coefficients (with respect to line size, beta ratio, and operating Reynolds number) from being collected and published. Calibrated cone meters have an uncertainty up to ±0.5%; uncalibrated cone meters have an uncertainty of ±5.0%.

Optical flow meters

Optical flow meters use light to determine flow rate. Small particles which accompany natural and industrial gases pass through two laser beams focused a short distance apart in the flow path in a pipe by illuminating optics. Laser light is scattered when a particle crosses the first beam. The detecting optics collect the scattered light on a photodetector, which then generates a pulse signal. As the same particle crosses the second beam, the detecting optics collect the scattered light on a second photodetector, which converts the incoming light into a second electrical pulse. By measuring the time interval between these pulses, the gas velocity is calculated as v = D/t, where D is the distance between the laser beams and t is the time interval.

Laser-based optical flow meters measure the actual speed of particles, a property which is not dependent on the thermal conductivity of gases, variations in gas flow, or the composition of gases. The operating principle enables optical laser technology to deliver highly accurate flow data, even in challenging environments which may include high temperature, low flow rates, high pressure, high humidity, pipe vibration and acoustic noise. Optical flow meters are very stable, with no moving parts, and deliver a highly repeatable measurement over the life of the product. Because the distance between the two laser sheets does not change, optical flow meters do not require periodic calibration after their initial commissioning. Optical flow meters require only one installation point, instead of the two installation points typically required by other types of meters. A single installation point is simpler, requires less maintenance and is less prone to errors.

Commercially available optical flow meters are capable of measuring flow from 0.1 m/s to faster than 100 m/s (a 1000:1 turndown ratio) and have been demonstrated to be effective for the measurement of flare gases from oil wells and refineries, a contributor to atmospheric pollution.
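The two-beam transit calculation above is simple enough to state in a few lines of Python (a sketch only; the beam spacing and pulse timestamps are invented values).

# Optical flow meter: velocity = beam separation / time between the two
# scattering pulses produced by the same particle (v = D/t).

BEAM_SEPARATION_M = 0.002   # 2 mm between the focused beams (hypothetical)

def gas_velocity(t_pulse1_s, t_pulse2_s, d_m=BEAM_SEPARATION_M):
    return d_m / (t_pulse2_s - t_pulse1_s)

# Made-up timestamps 40 microseconds apart -> 50 m/s.
print(f"{gas_velocity(0.000100, 0.000140):.1f} m/s")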
Open channel flow measurement

Level to flow

The level of the water is measured at a designated point behind a weir or in a flume (a hydraulic structure), using various secondary devices (bubblers, ultrasonic, float, and differential pressure are common methods). This depth is converted to a flow rate according to a theoretical formula of the form Q = KHⁿ, where Q is the flow rate, K is a constant, H is the water level, and n is an exponent which varies with the device used; or it is converted according to empirically derived level/flow data points (a "flow curve"). The flow rate can then be integrated over time into volumetric flow. Level-to-flow devices are commonly used to measure the flow of surface waters (springs, streams, and rivers), industrial discharges, and sewage. Of these, weirs are used on flow streams with low solids (typically surface waters), while flumes are used on flows containing low or high solids contents.
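As an illustration of the Q = KHⁿ relation, the Python sketch below uses the common engineering approximation for a 90° V-notch weir in SI units (Q ≈ 1.38·H^2.5, with Q in m³/s and head H in metres); coefficients vary by standard and weir, and the level readings are invented.

# Level-to-flow conversion for a 90-degree V-notch weir (approximate).
K, N = 1.38, 2.5

def weir_flow_m3s(head_m):
    return K * head_m ** N

for h in (0.05, 0.10, 0.20):            # made-up level readings, metres
    print(f"H = {h:.2f} m -> Q = {weir_flow_m3s(h) * 1000:.1f} L/s")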
Area / velocity

The cross-sectional area of the flow is calculated from a depth measurement, and the average velocity of the flow is measured directly (Doppler and propeller methods are common). Velocity times the cross-sectional area yields a flow rate which can be integrated into volumetric flow.

Dye testing

A known rate of dye or other tracer is added to the stream; measuring its diluted concentration downstream, after complete mixing, yields the flow rate (see the tracer dilution method below).

Acoustic Doppler velocimetry

Acoustic Doppler velocimetry (ADV) is designed to record instantaneous velocity components at a single point with a relatively high frequency. Measurements are performed by measuring the velocity of particles in a remote sampling volume based upon the Doppler shift effect.

Thermal mass flow meters

Thermal mass flow meters generally use combinations of heated elements and temperature sensors to measure the difference between static and flowing heat transfer to a fluid, and infer its flow with a knowledge of the fluid's specific heat and density. The fluid temperature is also measured and compensated for. If the density and specific heat characteristics of the fluid are constant, the meter can provide a direct mass flow readout, and does not need any additional pressure or temperature compensation over its specified range.

Technological progress has allowed the manufacture of thermal mass flow meters on a microscopic scale as MEMS sensors; these flow devices can be used to measure flow rates in the range of nanolitres or microlitres per minute.

Thermal mass flow meter (also called thermal dispersion flowmeter) technology is used for compressed air, nitrogen, helium, argon, oxygen, and natural gas. In fact, most gases can be measured as long as they are fairly clean and non-corrosive. For more aggressive gases, the meter may be made out of special alloys (e.g. Hastelloy), and pre-drying the gas also helps to minimize corrosion. Today, thermal mass flow meters are used to measure the flow of gases in a growing range of applications, such as chemical reactions or thermal transfer applications that are difficult for other flow metering technologies. This is because thermal mass flow meters monitor variations in one or more of the thermal characteristics (temperature, thermal conductivity, and/or specific heat) of gaseous media to define the mass flow rate.

The MAF sensor

In many late-model automobiles, a mass airflow sensor (MAF sensor) is used to accurately determine the mass flow rate of intake air used in the internal combustion engine. Many such mass flow sensors utilize a heated element and a downstream temperature sensor to indicate the air flow rate. Other sensors use a spring-loaded vane. In either case, the vehicle's electronic control unit interprets the sensor signals as a real-time indication of the engine's fuel requirement.

Vortex flow meters

Another method of flow measurement involves placing a bluff body (called a shedder bar) in the path of the fluid. As the fluid passes this bar, disturbances in the flow called vortices are created. The vortices trail behind the cylinder, alternately from each side of the bluff body. This vortex trail is called the von Kármán vortex street, after von Kármán's 1912 mathematical description of the phenomenon. The frequency at which these vortices alternate sides is essentially proportional to the flow rate of the fluid. Inside, atop, or downstream of the shedder bar is a sensor for measuring the frequency of the vortex shedding. This sensor is often a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created. Since the frequency of such a voltage pulse is also proportional to the fluid velocity, a volumetric flow rate is calculated using the cross-sectional area of the flow meter. The frequency is measured and the flow rate is calculated by the flowmeter electronics using the equation f = St·V/L, where f is the frequency of the vortices, L is the characteristic length of the bluff body, V is the velocity of the flow over the bluff body, and St is the Strouhal number, which is essentially a constant for a given body shape within its operating limits.

Electromagnetic, ultrasonic and Coriolis flow meters

Modern innovations in the measurement of flow rate incorporate electronic devices that can correct for varying pressure and temperature (i.e. density) conditions, non-linearities, and the characteristics of the fluid.

Magnetic flow meters

Magnetic flow meters, often called "mag meters" or "electromags", use a magnetic field applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The potential difference is sensed by electrodes aligned perpendicular to the flow and the applied magnetic field. The physical principle at work is Faraday's law of electromagnetic induction. The magnetic flow meter requires a conducting fluid and a nonconducting pipe liner. The electrodes must not corrode in contact with the process fluid; some magnetic flowmeters have auxiliary transducers installed to clean the electrodes in place. The applied magnetic field is pulsed, which allows the flowmeter to cancel out the effect of stray voltage in the piping system.

Ultrasonic flow meters (Doppler, transit time)

There are two main types of ultrasonic flow meters: Doppler and transit time. While both utilize ultrasound to make measurements and can be non-invasive (measuring flow from outside the tube, pipe or vessel), they measure flow by very different methods.

Ultrasonic transit-time flow meters measure the difference of the transit time of ultrasonic pulses propagating in and against the direction of flow. This time difference is a measure of the average velocity of the fluid along the path of the ultrasonic beam. Using the two transit times t_up and t_down, the distance L between the receiving and transmitting transducers, and the inclination angle α, one can write the equations

v = (L / (2 cos α)) · (t_up − t_down) / (t_up · t_down)  and  c = (L / 2) · (t_up + t_down) / (t_up · t_down),

where v is the average velocity of the fluid along the sound path and c is the speed of sound. By using the absolute transit times, both the averaged fluid velocity and the speed of sound can be calculated. With wide-beam illumination, transit-time ultrasound can also be used to measure volume flow independent of the cross-sectional area of the vessel or tube.
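A minimal Python sketch of the transit-time arithmetic above (the path length, angle, and the two measured times are invented values chosen to resemble water at roughly 1480 m/s):

import math

def transit_time_flow(t_up_s, t_down_s, path_len_m, angle_deg):
    """Average axial velocity and speed of sound from up/downstream transit times."""
    a = math.radians(angle_deg)
    v = path_len_m / (2.0 * math.cos(a)) * (t_up_s - t_down_s) / (t_up_s * t_down_s)
    c = path_len_m / 2.0 * (t_up_s + t_down_s) / (t_up_s * t_down_s)
    return v, c

# Made-up example: 0.15 m path at 45 degrees; upstream pulse is slower.
v, c = transit_time_flow(101.45e-6, 101.25e-6, 0.15, 45.0)
print(f"v ~ {v:.2f} m/s, c ~ {c:.0f} m/s")   # about 2 m/s flow, 1480 m/s sound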
Ultrasonic Doppler flow meters measure the Doppler shift resulting from reflecting an ultrasonic beam off the particulates in a flowing fluid. The frequency of the transmitted beam is affected by the movement of the particles; this frequency shift can be used to calculate the fluid velocity. For the Doppler principle to work there must be a high enough density of sonically reflective materials, such as solid particles or air bubbles, suspended in the fluid. This is in direct contrast to an ultrasonic transit-time flow meter, where bubbles and solid particles reduce the accuracy of the measurement. Due to the dependency on these particles, there are limited applications for Doppler flow meters. This technology is also known as acoustic Doppler velocimetry.

One advantage of ultrasonic flow meters is that they can effectively measure the flow rates of a wide variety of fluids, as long as the speed of sound through the fluid is known. For example, ultrasonic flow meters are used for the measurement of such diverse fluids as liquefied natural gas (LNG) and blood. One can also calculate the expected speed of sound for a given fluid; this can be compared to the speed of sound empirically measured by an ultrasonic flow meter for the purposes of monitoring the quality of the flow meter's measurements. A drop in quality (a change in the measured speed of sound) is an indication that the meter needs servicing.

Coriolis flow meters

Using the Coriolis effect, which causes a laterally vibrating tube to distort, a direct measurement of mass flow can be obtained in a Coriolis flow meter. Furthermore, a direct measure of the density of the fluid is obtained. Coriolis measurement can be very accurate irrespective of the type of gas or liquid being measured; the same measurement tube can be used for hydrogen gas and bitumen without recalibration. Coriolis flow meters can be used for the measurement of natural gas flow.

Laser Doppler flow measurement

A beam of laser light impinging on a moving particle will be partially scattered with a change in wavelength proportional to the particle's speed (the Doppler effect). A laser Doppler velocimeter (LDV), also called a laser Doppler anemometer (LDA), focuses a laser beam into a small volume in a flowing fluid containing small particles (naturally occurring or induced). The particles scatter the light with a Doppler shift. Analysis of this shifted wavelength can be used to directly, and with great precision, determine the speed of the particle and thus a close approximation of the fluid velocity.

A number of different techniques and device configurations are available for determining the Doppler shift. All use a photodetector (typically an avalanche photodiode) to convert the light into an electrical waveform for analysis. In most devices, the original laser light is divided into two beams. In one general LDV class, the two beams are made to intersect at their focal points, where they interfere and generate a set of straight fringes. The sensor is then aligned to the flow such that the fringes are perpendicular to the flow direction. As particles pass through the fringes, the Doppler-shifted light is collected into the photodetector. In another general LDV class, one beam is used as a reference and the other is Doppler-scattered. Both beams are then collected onto the photodetector, where optical heterodyne detection is used to extract the Doppler signal.
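In the fringe-type LDV class described above, velocity follows from the Doppler burst frequency and the fringe spacing set by the beam geometry. A Python sketch (the wavelength, crossing angle, and measured frequency are invented values; the spacing formula d = λ / (2 sin(θ/2)) is the standard two-beam result):

import math

def fringe_spacing_m(wavelength_m, crossing_angle_deg):
    """Fringe spacing d = lambda / (2 sin(theta/2)) for beams crossing at theta."""
    return wavelength_m / (2.0 * math.sin(math.radians(crossing_angle_deg) / 2.0))

def particle_velocity(doppler_freq_hz, wavelength_m, crossing_angle_deg):
    """Velocity component perpendicular to the fringes: v = f_D * d."""
    return doppler_freq_hz * fringe_spacing_m(wavelength_m, crossing_angle_deg)

# Made-up example: 532 nm beams crossing at 10 degrees, 1 MHz Doppler burst.
print(f"v ~ {particle_velocity(1.0e6, 532e-9, 10.0):.2f} m/s")   # about 3 m/s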
Calibration

Even though ideally a flowmeter should be unaffected by its environment, in practice this is unlikely to be the case. Often measurement errors originate from incorrect installation or other environment-dependent factors. In situ methods are used when a flow meter is calibrated in the correct flow conditions.

Transit time method

For pipe flows, a so-called transit time method is applied, in which a radiotracer is injected as a pulse into the measured flow. The transit time is determined with the help of radiation detectors placed on the outside of the pipe. The volume flow is obtained by multiplying the measured average fluid flow velocity by the inner pipe cross section. This reference flow value is compared with the simultaneous flow value given by the flow measurement to be calibrated. The procedure is standardised (ISO 2975/VII for liquids and BS 5857-2.4 for gases). The best accredited measurement uncertainty for liquids and gases is 0.5%.

Tracer dilution method

The radiotracer dilution method is used to calibrate open channel flow measurements. A solution with a known tracer concentration is injected at a constant known velocity into the channel flow. Downstream, where the tracer solution is thoroughly mixed over the flow cross section, a continuous sample is taken and its tracer concentration in relation to that of the injected solution is determined. The flow reference value is determined by using the tracer balance condition between the injected tracer flow and the diluting flow. The procedure is standardised (ISO 9555-1 and ISO 9555-2 for liquid flow in open channels). The best accredited measurement uncertainty is 1%.

See also

- Air flow meter
- Airspeed indicator
- Automatic meter reading
- Flow meter error
- Ford viscosity cup
- Gas meter
- Laser Doppler velocimetry
- Mass flow meter
- Mass flow rate
- Orifice plate
- Primary flow element
- Stream gauge
- Thorpe tube flowmeter
- Volumetric flow rate
- Water meter

- Holman, J. Alan (2001). Experimental Methods for Engineers. Boston: McGraw-Hill. ISBN 978-0-07-366055-4.
- American Gas Association Report Number 7
- Arregui, Cabrera, Cobacho. Integrated Water Meter Management, p. 33.
- Herschel, Clemens (1898). Measuring Water. Providence, RI: Builders Iron Foundry.
- Lipták, Flow Measurement, p. 85.
- American Gas Association Report Number 3
- Flare Metering with Optics
- Hydraulic structures
- Chanson, Hubert (2008). Acoustic Doppler Velocimetry (ADV) in the Field and in Laboratory: Practical Experiences. In Frédérique Larrarte and Hubert Chanson, Experiences and Challenges in Sewers: Measurements and Hydrodynamics. International Meeting on Measurements and Hydraulics of Sewers IMMHS'08, Summer School GEMCEA/LCPC, Bouguenais, France, 19–21 August 2008, Hydraulic Model Report No. CH70/08, Div. of Civil Engineering, The University of Queensland, Brisbane, Australia, Dec., pp. 49–66. ISBN 978-1-86499-928-0.
- Drost, C. J. (1978). "Vessel Diameter-Independent Volume Flow Measurements Using Ultrasound". Proceedings of San Diego Biomedical Symposium 17: 299–302.
- American Gas Association Report Number 9
- American Gas Association Report Number 11
- Adrian, R. J., ed. (1993). Selected Papers on Laser Doppler Velocimetry. S.P.I.E. Milestone Series. ISBN 978-0-8194-1297-3.
- Cornish, D. (1994/5). Instrument performance. Meas. Control, 27(10): 323–8.
- Baker, Roger C. Flow Measurement Handbook. Cambridge University Press. ISBN 978-0-521-01765-7.
- Finnish Accreditation Service
http://en.wikipedia.org/wiki/Flow_measurement
A diode is an electrical device allowing current to move through it in one direction with far greater ease than in the other. The most common kind of diode in modern circuit design is the semiconductor diode, although other diode technologies exist. Semiconductor diodes are symbolized in schematic diagrams such as Figure below. The term "diode" is customarily reserved for small-signal devices, I ≤ 1 A. The term rectifier is used for power devices, I > 1 A.

Semiconductor diode schematic symbol: Arrows indicate the direction of electron current flow.

When placed in a simple battery-lamp circuit, the diode will either allow or prevent current through the lamp, depending on the polarity of the applied voltage. (Figure below)

Diode operation: (a) Current flow is permitted; the diode is forward-biased. (b) Current flow is prohibited; the diode is reverse-biased.

When the polarity of the battery is such that electrons are allowed to flow through the diode, the diode is said to be forward-biased. Conversely, when the battery is "backward" and the diode blocks current, the diode is said to be reverse-biased. A diode may be thought of as a kind of switch: "closed" when forward-biased and "open" when reverse-biased.

Oddly enough, the direction of the diode symbol's "arrowhead" points against the direction of electron flow. This is because the diode symbol was invented by engineers, who predominantly use conventional flow notation in their schematics, showing current as a flow of charge from the positive (+) side of the voltage source to the negative (−). This convention holds true for all semiconductor symbols possessing "arrowheads": the arrow points in the permitted direction of conventional flow, and against the permitted direction of electron flow.

Diode behavior is analogous to the behavior of a hydraulic device called a check valve. A check valve allows fluid flow through it in only one direction, as in Figure below.

Hydraulic check valve analogy: (a) Electron current flow permitted. (b) Current flow prohibited.

Check valves are essentially pressure-operated devices: they open and allow flow if the pressure across them is of the correct "polarity" to open the gate (in the analogy shown, greater fluid pressure on the right than on the left). If the pressure is of the opposite "polarity," the pressure difference across the check valve will close and hold the gate so that no flow occurs.

Like check valves, diodes are essentially "pressure-" operated (voltage-operated) devices. The essential difference between forward-bias and reverse-bias is the polarity of the voltage dropped across the diode. Let's take a closer look at the simple battery-diode-lamp circuit shown earlier, this time investigating voltage drops across the various components in Figure below.

Diode circuit voltage measurements: (a) Forward biased. (b) Reverse biased.

A forward-biased diode conducts current and drops a small voltage across it, leaving most of the battery voltage dropped across the lamp. If the battery's polarity is reversed, the diode becomes reverse-biased, and drops all of the battery's voltage, leaving none for the lamp. If we consider the diode to be a self-actuating switch (closed in the forward-bias mode and open in the reverse-bias mode), this behavior makes sense. The most substantial difference is that the diode drops a lot more voltage when conducting than the average mechanical switch (0.7 volts versus tens of millivolts).
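A quick worked example makes the self-actuating switch picture concrete (a sketch only; the battery voltage and lamp resistance below are invented values): forward-biased, the lamp sees the source voltage minus the diode's roughly 0.7 volt drop; reverse-biased, the diode takes the full source voltage and no current flows.

# Battery-diode-lamp circuit, silicon diode modeled as a 0.7 V constant drop.
V_BATTERY = 6.0      # volts (hypothetical)
R_LAMP = 30.0        # ohms (hypothetical)
V_DIODE = 0.7        # volts, forward-biased silicon

i = (V_BATTERY - V_DIODE) / R_LAMP          # forward-biased: lamp lights
print(f"forward: I = {i * 1000:.0f} mA, lamp sees {V_BATTERY - V_DIODE:.1f} V")
print(f"reverse: I = 0 mA, diode drops the full {V_BATTERY} V")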
This forward-bias voltage drop exhibited by the diode is due to the action of the depletion region formed by the P-N junction under the influence of an applied voltage. If no voltage is applied across a semiconductor diode, a thin depletion region exists around the region of the P-N junction, preventing current flow. (Figure below (a)) The depletion region is almost devoid of available charge carriers, and acts as an insulator:

Diode representations: PN-junction model, schematic symbol, physical part.

The schematic symbol of the diode is shown in Figure above (b) such that the anode (pointing end) corresponds to the P-type semiconductor at (a). The cathode bar, the non-pointing end, at (b) corresponds to the N-type material at (a). Also note that the cathode stripe on the physical part (c) corresponds to the cathode on the symbol.

If a reverse-biasing voltage is applied across the P-N junction, this depletion region expands, further resisting any current through it. (Figure below)

Depletion region expands with reverse bias.

Conversely, if a forward-biasing voltage is applied across the P-N junction, the depletion region collapses, becoming thinner. The diode becomes less resistive to current through it. In order for a sustained current to go through the diode, though, the depletion region must be fully collapsed by the applied voltage. This takes a certain minimum voltage to accomplish, called the forward voltage, as illustrated in Figure below.

Increasing forward bias from (a) to (b) decreases depletion region thickness.

For silicon diodes, the typical forward voltage is 0.7 volts, nominal. For germanium diodes, the forward voltage is only 0.3 volts. The chemical constituency of the P-N junction comprising the diode accounts for its nominal forward voltage figure, which is why silicon and germanium diodes have such different forward voltages. Forward voltage drop remains approximately constant for a wide range of diode currents, meaning that diode voltage drop is not like that of a resistor or even a normal (closed) switch. For most simplified circuit analysis, the voltage drop across a conducting diode may be considered constant at the nominal figure and not related to the amount of current.

Actually, forward voltage drop is more complex. An equation describes the exact current through a diode, given the voltage dropped across the junction, the temperature of the junction, and several physical constants. It is commonly known as the diode equation:

ID = IS (e^(VD / (N·kT/q)) − 1)

where ID is the diode current in amps, IS is the saturation current of the diode in amps, e is Euler's constant (≈ 2.718), VD is the voltage applied across the diode in volts, N is the "nonideality" or "emission" coefficient (typically between 1 and 2), q is the charge of an electron (1.6 × 10⁻¹⁹ coulombs), k is Boltzmann's constant (1.38 × 10⁻²³), and T is the junction temperature in kelvins.

The term kT/q describes the voltage produced within the P-N junction due to the action of temperature, and is called the thermal voltage, or Vt, of the junction. At room temperature, this is about 26 millivolts. Knowing this, and assuming a "nonideality" coefficient of 1, we may simplify the diode equation and re-write it as such:

ID = IS (e^(VD / 0.026) − 1)

You need not be familiar with the "diode equation" to analyze simple diode circuits. Just understand that the voltage dropped across a current-conducting diode does change with the amount of current going through it, but that this change is fairly small over a wide range of currents. This is why many textbooks simply say the voltage drop across a conducting, semiconductor diode remains constant at 0.7 volts for silicon and 0.3 volts for germanium. However, some circuits intentionally make use of the P-N junction's inherent exponential current/voltage relationship and thus can only be understood in the context of this equation.
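A short numerical sketch of the diode equation (the saturation current below is a made-up but plausible value for a small silicon diode) shows why the forward drop is treated as roughly constant: the current grows exponentially, so even large current changes move the voltage by only a few tens of millivolts.

import math

I_S = 1e-12   # saturation current in amps (hypothetical, typical order for Si)
V_T = 0.026   # thermal voltage kT/q at room temperature, volts
N = 1.0       # nonideality coefficient

def diode_current(v_d):
    return I_S * (math.exp(v_d / (N * V_T)) - 1.0)

for v in (0.50, 0.60, 0.65, 0.70):
    print(f"V = {v:.2f} V -> I = {diode_current(v) * 1000:10.3f} mA")
# With N = 1, a tenfold change in current costs only about 60 mV of forward voltage.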
Also, since temperature is a factor in the diode equation, a forward-biased P-N junction may also be used as a temperature-sensing device, and thus can only be understood if one has a conceptual grasp on this mathematical relationship. A reverse-biased diode prevents current from going through it, due to the expanded depletion region. In actuality, a very small amount of current can and does go through a reverse-biased diode, called the leakage current, but it can be ignored for most purposes. The ability of a diode to withstand reverse-bias voltages is limited, as it is for any insulator. If the applied reverse-bias voltage becomes too great, the diode will experience a condition known as breakdown (Figure below), which is usually destructive. A diode's maximum reverse-bias voltage rating is known as the Peak Inverse Voltage, or PIV, and may be obtained from the manufacturer. Like forward voltage, the PIV rating of a diode varies with temperature, except that PIV increases with increased temperature and decreases as the diode becomes cooler -- exactly opposite that of forward voltage. Diode curve: showing knee at 0.7 V forward bias for Si, and reverse breakdown. Typically, the PIV rating of a generic “rectifier” diode is at least 50 volts at room temperature. Diodes with PIV ratings in the many thousands of volts are available for modest prices. Being able to determine the polarity (cathode versus anode) and basic functionality of a diode is a very important skill for the electronics hobbyist or technician to have. Since we know that a diode is essentially nothing more than a one-way valve for electricity, it makes sense we should be able to verify its one-way nature using a DC (battery-powered) ohmmeter as in Figure below. Connected one way across the diode, the meter should show a very low resistance at (a). Connected the other way across the diode, it should show a very high resistance at (b) (“OL” on some digital meter models). Determination of diode polarity: (a) Low resistance indicates forward bias, black lead is cathode and red lead anode (for most meters) (b) Reversing leads shows high resistance indicating reverse bias. Of course, to determine which end of the diode is the cathode and which is the anode, you must know with certainty which test lead of the meter is positive (+) and which is negative (-) when set to the “resistance” or “Ω” function. With most digital multimeters I've seen, the red lead becomes positive and the black lead negative when set to measure resistance, in accordance with standard electronics color-code convention. However, this is not guaranteed for all meters. Many analog multimeters, for example, actually make their black leads positive (+) and their red leads negative (-) when switched to the “resistance” function, because it is easier to manufacture it that way! One problem with using an ohmmeter to check a diode is that the readings obtained only have qualitative value, not quantitative. In other words, an ohmmeter only tells you which way the diode conducts; the low-value resistance indication obtained while conducting is useless. If an ohmmeter shows a value of “1.73 ohms” while forward-biasing a diode, that figure of 1.73 Ω doesn't represent any real-world quantity useful to us as technicians or circuit designers. 
It neither represents the forward voltage drop nor any "bulk" resistance in the semiconductor material of the diode itself, but rather is a figure dependent upon both quantities and will vary substantially with the particular ohmmeter used to take the reading.

For this reason, some digital multimeter manufacturers equip their meters with a special "diode check" function which displays the actual forward voltage drop of the diode in volts, rather than a "resistance" figure in ohms. These meters work by forcing a small current through the diode and measuring the voltage dropped between the two test leads. (Figure below)

Meter with a "diode check" function displays the forward voltage drop of 0.548 volts instead of a low resistance.

The forward voltage reading obtained with such a meter will typically be less than the "normal" drop of 0.7 volts for silicon and 0.3 volts for germanium, because the current provided by the meter is of trivial proportions. If a multimeter with diode-check function isn't available, or you would like to measure a diode's forward voltage drop at some non-trivial current, the circuit of Figure below may be constructed using a battery, resistor, and voltmeter.

Measuring forward voltage of a diode without a "diode check" meter function: (a) Schematic diagram. (b) Pictorial diagram.

Connecting the diode backwards to this testing circuit will simply result in the voltmeter indicating the full voltage of the battery. If this circuit were designed to provide a constant or nearly constant current through the diode despite changes in forward voltage drop, it could be used as the basis of a temperature-measurement instrument, the voltage measured across the diode being inversely proportional to diode junction temperature. Of course, diode current should be kept to a minimum to avoid self-heating (the diode dissipating substantial amounts of heat energy), which would interfere with temperature measurement.

Beware that some digital multimeters equipped with a "diode check" function may output a very low test voltage (less than 0.3 volts) when set to the regular "resistance" (Ω) function: too low to fully collapse the depletion region of a PN junction. The philosophy here is that the "diode check" function is to be used for testing semiconductor devices, and the "resistance" function for anything else. By using a very low test voltage to measure resistance, it is easier for a technician to measure the resistance of non-semiconductor components connected to semiconductor components, since the semiconductor component junctions will not become forward-biased with such low voltages.

Consider the example of a resistor and diode connected in parallel, soldered in place on a printed circuit board (PCB). Normally, one would have to unsolder the resistor from the circuit (disconnect it from all other components) before measuring its resistance, otherwise any parallel-connected components would affect the reading obtained. When using a multimeter which outputs a very low test voltage to the probes in the "resistance" function mode, the diode's PN junction will not have enough voltage impressed across it to become forward-biased, and will only pass negligible current. Consequently, the meter "sees" the diode as an open (no continuity), and only registers the resistor's resistance. (Figure below)

Ohmmeter equipped with a low test voltage (<0.7 V) does not see diodes, allowing it to measure parallel resistors.
If such an ohmmeter were used to test a diode, it would indicate a very high resistance (many mega-ohms) even if connected to the diode in the "correct" (forward-biased) direction. (Figure below)

Ohmmeter equipped with a low test voltage, too low to forward-bias diodes, does not see diodes.

Reverse voltage strength of a diode is not as easily tested, because exceeding a normal diode's PIV usually results in destruction of the diode. Special types of diodes, though, which are designed to "break down" in reverse-bias mode without damage (called zener diodes), may be tested with the same voltage source / resistor / voltmeter circuit, provided that the voltage source is of high enough value to force the diode into its breakdown region. More on this subject in a later section of this chapter.

In addition to forward voltage drop (Vf) and peak inverse voltage (PIV), there are many other ratings of diodes important to circuit design and component selection. Semiconductor manufacturers provide detailed specifications on their products -- diodes included -- in publications known as datasheets. Datasheets for a wide variety of semiconductor components may be found in reference books and on the internet. I prefer the internet as a source of component specifications because all the data obtained from manufacturer websites are up-to-date. A typical diode datasheet will contain figures for the following parameters:

Maximum repetitive reverse voltage = VRRM, the maximum amount of voltage the diode can withstand in reverse-bias mode, in repeated pulses. Ideally, this figure would be infinite.

Maximum DC reverse voltage = VR or VDC, the maximum amount of voltage the diode can withstand in reverse-bias mode on a continual basis. Ideally, this figure would be infinite.

Maximum forward voltage = VF, usually specified at the diode's rated forward current. Ideally, this figure would be zero: the diode providing no opposition whatsoever to forward current. In reality, the forward voltage is described by the "diode equation."

Maximum (average) forward current = IF(AV), the maximum average amount of current the diode is able to conduct in forward bias mode. This is fundamentally a thermal limitation: how much heat can the PN junction handle, given that dissipation power is equal to current (I) multiplied by voltage (V or E) and forward voltage is dependent upon both current and junction temperature. Ideally, this figure would be infinite.

Maximum (peak or surge) forward current = IFSM or IF(surge), the maximum peak amount of current the diode is able to conduct in forward bias mode. Again, this rating is limited by the diode junction's thermal capacity, and is usually much higher than the average current rating due to thermal inertia (the fact that it takes a finite amount of time for the diode to reach maximum temperature for a given current). Ideally, this figure would be infinite.

Maximum total dissipation = PD, the amount of power (in watts) allowable for the diode to dissipate, given the dissipation (P = IE) of diode current multiplied by diode voltage drop, and also the dissipation (P = I²R) of diode current squared multiplied by bulk resistance. Fundamentally limited by the diode's thermal capacity (ability to tolerate high temperatures).

Operating junction temperature = TJ, the maximum allowable temperature for the diode's PN junction, usually given in degrees Celsius (°C). Heat is the "Achilles' heel" of semiconductor devices: they must be kept cool to function properly and give long service life.
Storage temperature range = TSTG, the range of allowable temperatures for storing a diode (unpowered). Sometimes given in conjunction with operating junction temperature (TJ), because the maximum storage temperature and the maximum operating temperature ratings are often identical. If anything, though, the maximum storage temperature rating will be greater than the maximum operating temperature rating.

Thermal resistance = R(Θ), the temperature difference between junction and outside air (R(Θ)JA) or between junction and leads (R(Θ)JL) for a given power dissipation, expressed in units of degrees Celsius per watt (°C/W). Ideally, this figure would be zero, meaning that the diode package was a perfect thermal conductor and radiator, able to transfer all heat energy from the junction to the outside air (or to the leads) with no difference in temperature across the thickness of the diode package. A high thermal resistance means that the diode will build up excessive temperature at the junction (where it's critical) despite best efforts at cooling the outside of the diode, and thus will limit its maximum power dissipation.

Maximum reverse current = IR, the amount of current through the diode in reverse-bias operation, with the maximum rated inverse voltage applied (VDC). Sometimes referred to as leakage current. Ideally, this figure would be zero, as a perfect diode would block all current when reverse-biased. In reality, it is very small compared to the maximum forward current.

Typical junction capacitance = CJ, the typical amount of capacitance intrinsic to the junction, due to the depletion region acting as a dielectric separating the anode and cathode connections. This is usually a very small figure, measured in the range of picofarads (pF).

Reverse recovery time = trr, the amount of time it takes for a diode to "turn off" when the voltage across it alternates from forward-bias to reverse-bias polarity. Ideally, this figure would be zero: the diode halting conduction immediately upon polarity reversal. For a typical rectifier diode, reverse recovery time is in the range of tens of microseconds; for a "fast switching" diode, it may only be a few nanoseconds.

Most of these parameters vary with temperature or other operating conditions, and so a single figure fails to fully describe any given rating. Therefore, manufacturers provide graphs of component ratings plotted against other variables (such as temperature), so that the circuit designer has a better idea of what the device is capable of.

Now we come to the most popular application of the diode: rectification. Simply defined, rectification is the conversion of alternating current (AC) to direct current (DC). This involves a device that only allows one-way flow of electrons. As we have seen, this is exactly what a semiconductor diode does. The simplest kind of rectifier circuit is the half-wave rectifier. It only allows one half of an AC waveform to pass through to the load. (Figure below)

Half-wave rectifier circuit.

For most power applications, half-wave rectification is insufficient for the task. The harmonic content of the rectifier's output waveform is very large and consequently difficult to filter. Furthermore, the AC power source only supplies power to the load one half every full cycle, meaning that half of its capacity is unused. Half-wave rectification is, however, a very simple way to reduce power to a resistive load.
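The claim that half-wave rectification roughly halves the power delivered to a resistive load is easy to verify numerically (a sketch; the amplitude and resistance are arbitrary, and the diode's 0.7 volt drop is ignored):

import math

V_PEAK, R, N = 170.0, 100.0, 100_000   # arbitrary sine amplitude, load, samples

full = half = 0.0
for k in range(N):
    v = V_PEAK * math.sin(2 * math.pi * k / N)
    full += v * v / R                   # load sees the whole sine wave
    half += v * v / R if v > 0 else 0   # diode blocks the negative half-cycle

print(f"full-wave average power: {full / N:7.1f} W")   # ~ V^2/(2R) = 144.5 W
print(f"half-wave average power: {half / N:7.1f} W")   # ~ V^2/(4R) =  72.2 W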
Some two-position lamp dimmer switches apply full AC power to the lamp filament for "full" brightness and then half-wave rectify it for a lesser light output. (Figure below)

Half-wave rectifier application: Two-level lamp dimmer.

In the "Dim" switch position, the incandescent lamp receives approximately one-half the power it would normally receive operating on full-wave AC. Because the half-wave rectified power pulses far more rapidly than the filament has time to heat up and cool down, the lamp does not blink. Instead, its filament merely operates at a lesser temperature than normal, providing less light output. This principle of "pulsing" power rapidly to a slow-responding load device to control the electrical power sent to it is common in the world of industrial electronics. Since the controlling device (the diode, in this case) is either fully conducting or fully nonconducting at any given time, it dissipates little heat energy while controlling load power, making this method of power control very energy-efficient. This circuit is perhaps the crudest possible method of pulsing power to a load, but it suffices as a proof-of-concept application.

If we need to rectify AC power to obtain the full use of both half-cycles of the sine wave, a different rectifier circuit configuration must be used. Such a circuit is called a full-wave rectifier. One kind of full-wave rectifier, called the center-tap design, uses a transformer with a center-tapped secondary winding and two diodes, as in Figure below.

Full-wave rectifier, center-tapped design.

This circuit's operation is easily understood one half-cycle at a time. Consider the first half-cycle, when the source voltage polarity is positive (+) on top and negative (−) on bottom. At this time, only the top diode is conducting; the bottom diode is blocking current, and the load "sees" the first half of the sine wave, positive on top and negative on bottom. Only the top half of the transformer's secondary winding carries current during this half-cycle, as in Figure below.

Full-wave center-tap rectifier: Top half of secondary winding conducts during positive half-cycle of input, delivering positive half-cycle to load.

During the next half-cycle, the AC polarity reverses. Now, the other diode and the other half of the transformer's secondary winding carry current while the portions of the circuit formerly carrying current during the last half-cycle sit idle. The load still "sees" half of a sine wave, of the same polarity as before: positive on top and negative on bottom. (Figure below)

Full-wave center-tap rectifier: During negative input half-cycle, bottom half of secondary winding conducts, delivering a positive half-cycle to the load.

One disadvantage of this full-wave rectifier design is the necessity of a transformer with a center-tapped secondary winding. If the circuit in question is one of high power, the size and expense of a suitable transformer is significant. Consequently, the center-tap rectifier design is only seen in low-power applications.

The full-wave center-tapped rectifier polarity at the load may be reversed by changing the direction of the diodes. Furthermore, the reversed diodes can be paralleled with an existing positive-output rectifier. The result is the dual-polarity full-wave center-tapped rectifier in Figure below. Note that the connectivity of the diodes themselves is the same configuration as a bridge.
Dual-polarity full-wave center-tap rectifier.

Another, more popular full-wave rectifier design exists, and it is built around a four-diode bridge configuration. For obvious reasons, this design is called a full-wave bridge. (Figure below)

Full-wave bridge rectifier.

Current directions for the full-wave bridge rectifier circuit are as shown in Figure below for the positive half-cycle and Figure below for the negative half-cycles of the AC source waveform. Note that regardless of the polarity of the input, the current flows in the same direction through the load. That is, the negative half-cycle of the source is a positive half-cycle at the load. The current flow is through two diodes in series for both polarities. Thus, two diode drops of the source voltage are lost (0.7·2 = 1.4 V for Si) in the diodes. This is a disadvantage compared with a full-wave center-tap design. This disadvantage is only a problem in very low voltage power supplies.

Full-wave bridge rectifier: Electron flow for positive half-cycles.

Full-wave bridge rectifier: Electron flow for negative half-cycles.

Remembering the proper layout of diodes in a full-wave bridge rectifier circuit can often be frustrating to the new student of electronics. I've found that an alternative representation of this circuit is easier both to remember and to comprehend. It's the exact same circuit, except all diodes are drawn in a horizontal attitude, all "pointing" the same direction. (Figure below)

Alternative layout style for full-wave bridge rectifier.

One advantage of remembering this layout for a bridge rectifier circuit is that it expands easily into a polyphase version in Figure below.

Three-phase full-wave bridge rectifier circuit.

Each three-phase line connects between a pair of diodes: one to route power to the positive (+) side of the load, and the other to route power to the negative (−) side of the load. Polyphase systems with more than three phases are easily accommodated into a bridge rectifier scheme. Take for instance the six-phase bridge rectifier circuit in Figure below.

Six-phase full-wave bridge rectifier circuit.

When polyphase AC is rectified, the phase-shifted pulses overlap each other to produce a DC output that is much "smoother" (has less AC content) than that produced by the rectification of single-phase AC. This is a decided advantage in high-power rectifier circuits, where the sheer physical size of filtering components would be prohibitive but low-noise DC power must be obtained. The diagram in Figure below shows the full-wave rectification of three-phase AC.

Three-phase AC and 3-phase full-wave rectifier output.

In any case of rectification -- single-phase or polyphase -- the amount of AC voltage mixed with the rectifier's DC output is called ripple voltage. In most cases, since "pure" DC is the desired goal, ripple voltage is undesirable. If the power levels are not too great, filtering networks may be employed to reduce the amount of ripple in the output voltage.

Sometimes, the method of rectification is referred to by counting the number of DC "pulses" output for every 360° of electrical "rotation." A single-phase, half-wave rectifier circuit, then, would be called a 1-pulse rectifier, because it produces a single pulse during the time of one complete cycle (360°) of the AC waveform. A single-phase, full-wave rectifier (regardless of design, center-tap or bridge) would be called a 2-pulse rectifier, because it outputs two pulses of DC during one AC cycle's worth of time.
A three-phase full-wave rectifier would be called a 6-pulse unit. Modern electrical engineering convention further describes the function of a rectifier circuit by using a three-field notation of phases, ways, and number of pulses. A single-phase, half-wave rectifier circuit is given the somewhat cryptic designation of 1Ph1W1P (1 phase, 1 way, 1 pulse), meaning that the AC supply voltage is single-phase, that current on each phase of the AC supply lines moves in only one direction (way), and that there is a single pulse of DC produced for every 360° of electrical rotation. A single-phase, full-wave, center-tap rectifier circuit would be designated as 1Ph1W2P in this notational system: 1 phase, 1 way or direction of current in each winding half, and 2 pulses of output voltage per cycle. A single-phase, full-wave, bridge rectifier would be designated as 1Ph2W2P: the same as for the center-tap design, except current can go both ways through the AC lines instead of just one way. The three-phase bridge rectifier circuit shown earlier would be called a 3Ph2W6P rectifier. Is it possible to obtain more pulses than twice the number of phases in a rectifier circuit? The answer to this question is yes: especially in polyphase circuits. Through the creative use of transformers, sets of full-wave rectifiers may be paralleled in such a way that more than six pulses of DC are produced for three phases of AC. A 30° phase shift is introduced from primary to secondary of a three-phase transformer when the winding configurations are not of the same type. In other words, a transformer connected either Y-Δ or Δ-Y will exhibit this 30° phase shift, while a transformer connected Y-Y or Δ-Δ will not. This phenomenon may be exploited by having one transformer connected Y-Y feed a bridge rectifier, and have another transformer connected Y-Δ feed a second bridge rectifier, then parallel the DC outputs of both rectifiers. (Figure below) Since the ripple voltage waveforms of the two rectifiers' outputs are phase-shifted 30° from one another, their superposition results in less ripple than either rectifier output considered separately: 12 pulses per 360° instead of just six. Polyphase rectifier circuit: 3-phase 2-way 12-pulse (3Ph2W12P). A peak detector is a series connection of a diode and a capacitor outputting a DC voltage equal to the peak value of the applied AC signal. The circuit is shown in Figure below with the corresponding SPICE net list. An AC voltage source applied to the peak detector charges the capacitor to the peak of the input. The diode conducts positive “half cycles,” charging the capacitor to the waveform peak. When the input waveform falls below the DC “peak” stored on the capacitor, the diode is reverse biased, blocking current flow from capacitor back to the source. Thus, the capacitor retains the peak value even as the waveform drops to zero. Another view of the peak detector is that it is the same as a half-wave rectifier with a filter capacitor added to the output.
*SPICE 03441.eps
C1 2 0 0.1u
R1 1 3 1.0k
V1 1 0 SIN(0 5 1k)
D1 3 2 diode
.model diode d
.tran 0.01m 50m
.end
Peak detector: Diode conducts on positive half cycles charging capacitor to the peak voltage (less diode forward drop). It takes a few cycles for the capacitor to charge to the peak as in Figure below due to the series resistance (RC “time constant”).
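The charging rate can be estimated from the values in the net list above. With R1 = 1.0 kΩ and C1 = 0.1 µF, the time constant is τ = RC = (1.0 kΩ)(0.1 µF) = 0.1 ms, while one cycle of the 1 kHz source lasts 1 ms. The capacitor can only charge during the brief interval near each positive peak when the diode conducts, so several cycles pass before the output settles at the peak.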
Why does the capacitor not charge all the way to 5 V? It would charge to 5 V if an “ideal diode” were obtainable. However, the silicon diode has a forward voltage drop of 0.7 V which subtracts from the 5 V peak of the input. Peak detector: Capacitor charges to peak within a few cycles. The circuit in Figure above could represent a DC power supply based on a half-wave rectifier. The resistance would be a few Ohms instead of 1 kΩ due to a transformer secondary winding replacing the voltage source and resistor. A larger “filter” capacitor would be used. A power supply based on a 60 Hz source with a filter of a few hundred µF could supply up to 100 mA. Half-wave supplies seldom supply more due to the difficulty of filtering a half-wave. The peak detector may be combined with other components to build a crystal radio. A circuit which removes the peak of a waveform is known as a clipper. A negative clipper is shown in Figure below. This schematic diagram was produced with the Xcircuit schematic capture program. Xcircuit produced the SPICE net list in Figure below, except for the second line and the next-to-last pair of lines, which were inserted with a text editor.
*SPICE 03437.eps
* A K ModelName
D1 0 2 diode
R1 2 1 1.0k
V1 1 0 SIN(0 5 1k)
.model diode d
.tran .05m 3m
.end
Clipper: clips negative peak at -0.7 V. During the positive half cycle of the 5 V peak input, the diode is reverse biased. The diode does not conduct. It is as if the diode were not there. The positive half cycle is unchanged at the output V(2) in Figure below. Since the output positive peaks actually overlay the input sinewave V(1), the input has been shifted upward in the plot for clarity. In Nutmeg, the SPICE display module, the command “plot v(1)+1” accomplishes this. V(1)+1 is actually V(1), a 10 Vptp sinewave, offset by 1 V for display clarity. V(2) output is clipped at -0.7 V, by diode D1. During the negative half cycle of sinewave input of Figure above, the diode is forward biased, that is, conducting. The negative half cycle of the sinewave is shorted out. The negative half cycle of V(2) would be clipped at 0 V for an ideal diode. The waveform is clipped at -0.7 V due to the forward voltage drop of the silicon diode. The SPICE model defaults to 0.7 V unless parameters in the model statement specify otherwise. Germanium or Schottky diodes clip at lower voltages. Closer examination of the negative clipped peak (Figure above) reveals that it follows the input for a slight period of time while the sinewave is moving toward -0.7 V. The clipping action is only effective after the input sinewave exceeds -0.7 V. The diode does not conduct for the complete half cycle, though it does for most of it. The addition of an anti-parallel diode to the existing diode in Figure above yields the symmetrical clipper in Figure below.
*SPICE 03438.eps
D1 0 2 diode
D2 2 0 diode
R1 2 1 1.0k
V1 1 0 SIN(0 5 1k)
.model diode d
.tran 0.05m 3m
.end
Symmetrical clipper: Anti-parallel diodes clip both positive and negative peak, leaving a ± 0.7 V output. Diode D1 clips the negative peak at -0.7 V as before. The additional diode D2 conducts for positive half cycles of the sine wave as it exceeds 0.7 V, the forward diode drop. The remainder of the voltage drops across the series resistor. Thus, both peaks of the input sinewave are clipped in Figure below. The net list is in Figure above. Diode D1 clips at -0.7 V as it conducts during negative peaks. D2 conducts for positive peaks, clipping at 0.7 V. The most general form of the diode clipper is shown in Figure below. For an ideal diode, the clipping occurs at the level of the clipping voltage, V1 and V2.
However, the voltage sources have been adjusted to account for the 0.7 V forward drop of the real silicon diodes. D1 clips at 1.3 V + 0.7 V = 2.0 V when the diode begins to conduct. D2 clips at -2.3 V - 0.7 V = -3.0 V when D2 conducts.
*SPICE 03439.eps
V1 3 0 1.3
V2 4 0 -2.3
D1 2 3 diode
D2 4 2 diode
R1 2 1 1.0k
V3 1 0 SIN(0 5 1k)
.model diode d
.tran 0.05m 3m
.end
D1 clips the input sinewave at 2 V. D2 clips at -3 V. The clipper in Figure above does not have to clip both levels. To clip at one level with one diode and one voltage source, remove the other diode and source. The net list is in Figure above. The waveforms in Figure below show the clipping of v(1) at output v(2). D1 clips the sinewave at 2 V. D2 clips at -3 V. There is also a zener diode clipper circuit in the “Zener diode” section. A zener diode replaces both the diode and the DC voltage source. A practical application of a clipper is to prevent an amplified speech signal from overdriving a radio transmitter in Figure below. Overdriving the transmitter generates spurious radio signals which cause interference with other stations. The clipper is a protective measure. Clipper prevents overdriving radio transmitter by voice peaks. A sinewave may be squared up by overdriving a clipper. Another clipper application is the protection of exposed inputs of integrated circuits. The input of the IC is connected to a pair of diodes as at node “2” of Figure above. The voltage sources are replaced by the power supply rails of the IC. For example, CMOS ICs use 0 V and +5 V. Analog amplifiers might use ±12 V for the V1 and V2 sources. The circuits in Figure below are known as clampers or DC restorers. The corresponding netlist is in Figure below. These circuits clamp a peak of a waveform to a specific DC level compared with a capacitively coupled signal which swings about its average DC level (usually 0 V). If the diode is removed from the clamper, it defaults to a simple coupling capacitor, with no clamping. What is the clamp voltage? And, which peak gets clamped? In Figure below (a) the clamp voltage is 0 V ignoring the diode drop (more exactly, 0.7 V with the Si diode drop). In Figure below, the positive peak of V(1) is clamped to the 0 V (0.7 V) clamp level. Why is this? On the first positive half cycle, the diode conducts, charging the capacitor left end to +5 V (4.3 V). This is -5 V (-4.3 V) on the right end at V(1,4). Note the polarity marked on the capacitor in Figure below (a). The right end of the capacitor is -5 V DC (-4.3 V) with respect to ground. It also has an AC 5 V peak sinewave coupled across it from source V(4) to node 1. The sum of the two is a 5 V peak sine riding on a -5 V DC (-4.3 V) level. The diode only conducts on successive positive excursions of source V(4) if the peak V(4) exceeds the charge on the capacitor. This only happens if the charge on the capacitor has drained off due to a load, not shown. The charge on the capacitor is equal to the positive peak of V(4) (less the 0.7 V diode drop). The AC riding on the negative end, right end, is shifted down. The positive peak of the waveform is clamped to 0 V (0.7 V) because the diode conducts on the positive peak. Clampers: (a) Positive peak clamped to 0 V. (b) Negative peak clamped to 0 V. (c) Negative peak clamped to 5 V.
*SPICE 03443.eps
V1 6 0 5
D1 6 3 diode
C1 4 3 1000p
D2 0 2 diode
C2 4 2 1000p
C3 4 1 1000p
D3 1 0 diode
V2 4 0 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end
V(4) source voltage 5 V peak used in all clampers. V(1) clamper output from Figure above (a).
V(1,4) DC voltage on capacitor in Figure (a). V(2) clamper output from Figure (b). V(3) clamper output from Figure (c). Suppose the polarity of the diode is reversed as in Figure above (b)? The diode conducts on the negative peak of source V(4). The negative peak is clamped to 0 V (-0.7 V). See V(2) in Figure above. The most general realization of the clamper is shown in Figure above (c) with the diode connected to a DC reference. The capacitor still charges during the negative peak of the source. Note that the polarities of the AC source and the DC reference are series aiding. Thus, the capacitor charges to the sum of the two, 10 V DC (9.3 V). Coupling the 5 V peak sinewave across the capacitor yields Figure above V(3), the sum of the charge on the capacitor and the sinewave. The negative peak appears to be clamped to 5 V DC (4.3 V), the value of the DC clamp reference (less diode drop). Describe the waveform if the DC clamp reference is changed from 5 V to 10 V. The clamped waveform will shift up. The negative peak will be clamped to 10 V (9.3 V). Suppose that the amplitude of the sine wave source is increased from 5 V to 7 V? The negative peak clamp level will remain unchanged, though the amplitude of the sinewave output will increase. An application of the clamper circuit is as a “DC restorer” in “composite video” circuitry in both television transmitters and receivers. An NTSC (US video standard) video signal “white level” corresponds to minimum (12.5%) transmitted power. The video “black level” corresponds to a high level (75%) of transmitter power. There is a “blacker than black level” corresponding to 100% transmitted power assigned to synchronization signals. The NTSC signal contains both video and synchronization pulses. The problem with the composite video is that its average DC level varies with the scene, dark vs light. The video itself is supposed to vary. However, the sync must always peak at 100%. To prevent the sync signals from drifting with changing scenes, a “DC restorer” clamps the top of the sync pulses to a voltage corresponding to 100% transmitter modulation. [ATCO] A voltage multiplier is a specialized rectifier circuit producing an output which is theoretically an integer times the AC peak input, for example, 2, 3, or 4 times the AC peak input. Thus, it is possible to get 200 VDC from a 100 Vpeak AC source using a doubler, 400 VDC from a quadrupler. Any load in a practical circuit will lower these voltages. A voltage doubler application is a DC power supply capable of using either a 240 VAC or 120 VAC source. The supply uses a switch selected full-wave bridge to produce about 300 VDC from a 240 VAC source. The 120 V position of the switch rewires the bridge as a doubler producing about 300 VDC from the 120 VAC. In both cases, 300 VDC is produced. This is the input to a switching regulator producing lower voltages for powering, say, a personal computer. The half-wave voltage doubler in Figure below (a) is composed of two circuits: a clamper at (b) and a peak detector (half-wave rectifier) in Figure prior, which is shown in modified form in Figure below (c). C2 has been added to a peak detector (half-wave rectifier). Half-wave voltage doubler (a) is composed of (b) a clamper and (c) a half-wave rectifier. Referring to Figure above (b), C2 charges to 5 V (4.3 V considering the diode drop) on the negative half cycle of AC input. The right end is grounded by the conducting D2. The left end is charged at the negative peak of the AC input.
This is the operation of the clamper. During the positive half cycle, the half-wave rectifier comes into play at Figure above (c). Diode D2 is out of the circuit since it is reverse biased. C2 is now in series with the voltage source. Note the polarities of the generator and C2, series aiding. Thus, rectifier D1 sees a total of 10 V at the peak of the sinewave, 5 V from the generator and 5 V from C2. D1 conducts waveform v(1) (Figure below), charging C1 to the peak of the sine wave riding on 5 V DC (Figure below v(2)). Waveform v(2) is the output of the doubler, which stabilizes at 10 V (8.6 V with diode drops) after a few cycles of sinewave input.
*SPICE 03255.eps
C1 2 0 1000p
D1 1 2 diode
C2 4 1 1000p
D2 0 1 diode
V1 4 0 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end
Voltage doubler: v(4) input. v(1) clamper stage. v(2) half-wave rectifier stage, which is the doubler output. The full-wave voltage doubler is composed of a pair of series stacked half-wave rectifiers. (Figure below) The corresponding netlist is in Figure below. The bottom rectifier charges C1 on the negative half cycle of input. The top rectifier charges C2 on the positive half-cycle. Each capacitor takes on a charge of 5 V (4.3 V considering diode drop). The output at node 5 is the series total of C1 + C2 or 10 V (8.6 V with diode drops).
*SPICE 03273.eps
*R1 3 0 100k
*R2 5 3 100k
D1 0 2 diode
D2 2 5 diode
C1 3 0 1000p
C2 5 3 1000p
V1 2 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end
Full-wave voltage doubler consists of two half-wave rectifiers operating on alternating polarities. Note that the output v(5) Figure below reaches full value within one cycle of the input v(2) excursion. Full-wave voltage doubler: v(2) input, v(3) voltage at midpoint, v(5) voltage at output. Figure below illustrates the derivation of the full-wave doubler from a pair of opposite polarity half-wave rectifiers (a). The negative rectifier of the pair is redrawn for clarity (b). Both are combined at (c) sharing the same ground. At (d) the negative rectifier is re-wired to share one voltage source with the positive rectifier. This yields a ±5 V (4.3 V with diode drop) power supply, though 10 V is measurable between the two outputs. The ground reference point is moved so that +10 V is available with respect to ground. Full-wave doubler: (a) Pair of doublers, (b) redrawn, (c) sharing the ground, (d) share the same voltage source, (e) move the ground point. A voltage tripler (Figure below) is built from a combination of a doubler and a half wave rectifier (C3, D3). The half-wave rectifier produces 5 V (4.3 V) at node 3. The doubler provides another 10 V (8.6 V) between nodes 2 and 3, for a total of 15 V (12.9 V) at the output node 2 with respect to ground. The netlist is in Figure below. Voltage tripler composed of doubler stacked atop a single stage rectifier. Note that V(3) in Figure below rises to 5 V (4.3 V) on the first negative half cycle. Input v(4) is shifted upward by 5 V (4.3 V) due to 5 V from the half-wave rectifier. It is shifted up by another 5 V at v(1) due to the clamper (C2, D2). D1 charges C1 (waveform v(2)) to the peak value of v(1).
*SPICE 03283.eps
C3 3 0 1000p
D3 0 4 diode
C1 2 3 1000p
D1 1 2 diode
C2 4 1 1000p
D2 3 1 diode
V1 4 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end
Voltage tripler: v(3) half-wave rectifier, v(4) input + 5 V, v(1) clamper, v(2) final output. A voltage quadrupler is a stacked combination of two doublers shown in Figure below.
Each doubler provides 10 V (8.6 V) for a series total at node 2 with respect to ground of 20 V (17.2 V). The netlist is in Figure below. Voltage quadrupler, composed of two doublers stacked in series, with output at node 2. The waveforms of the quadrupler are shown in Figure below. Two DC outputs are available: v(3), the doubler output, and v(2) the quadrupler output. Some of the intermediate voltages at the clampers illustrate that the input sinewave (not shown), which swings by 5 V, is successively clamped to higher DC levels.
*SPICE 03286.eps
C22 4 5 1000p
C11 3 0 1000p
D11 0 5 diode
D22 5 3 diode
C1 2 3 1000p
D1 1 2 diode
C2 4 1 1000p
D2 3 1 diode
V1 4 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end
Voltage quadrupler: DC voltage available at v(3) and v(2). Intermediate waveforms: Clampers: v(5), v(4), v(1). Some notes on voltage multipliers are in order at this point. The circuit parameters used in the examples (V = 5 V at 1 kHz, C = 1000 pF) do not provide much current, only microamps. Furthermore, load resistors have been omitted. Loading reduces the voltages from those shown. If the circuits are to be driven by a kHz source at low voltage, as in the examples, the capacitors are usually 0.1 to 1.0 µF so that milliamps of current are available at the output. If the multipliers are driven from 50/60 Hz, the capacitors are a few hundred to a few thousand microfarads to provide hundreds of milliamps of output current. If driven from line voltage, pay attention to the polarity and voltage ratings of the capacitors. Finally, any direct line driven power supply (no transformer) is dangerous to the experimenter and to line operated test equipment. Commercial direct driven supplies are safe because the hazardous circuitry is in an enclosure to protect the user. When breadboarding these circuits with electrolytic capacitors of any voltage, the capacitors will explode if the polarity is reversed. Such circuits should be powered up behind a safety shield. A voltage multiplier of cascaded half-wave doublers of arbitrary length is known as a Cockcroft-Walton multiplier as shown in Figure below. This multiplier is used when a high voltage at low current is required. The advantage over a conventional supply is that an expensive high voltage transformer is not required, at least not one rated as high as the output. Cockcroft-Walton x8 voltage multiplier; output at v(8). The pair of diodes and capacitors to the left of nodes 1 and 2 in Figure above constitute a half-wave doubler. Rotating the diodes by 45° counterclockwise, and the bottom capacitor by 90°, makes it look like Figure prior (a). Four of the doubler sections are cascaded to the right for a theoretical x8 multiplication factor. Node 1 has a clamper waveform (not shown), a sinewave shifted up by 1x (5 V). The other odd numbered nodes are sinewaves clamped to successively higher voltages. Node 2, the output of the first doubler, is a 2x DC voltage v(2) in Figure below. Successive even numbered nodes charge to successively higher voltages: v(4), v(6), v(8).
D1 7 8 diode
C1 8 6 1000p
D2 6 7 diode
C2 5 7 1000p
D3 5 6 diode
C3 4 6 1000p
D4 4 5 diode
C4 3 5 1000p
D5 3 4 diode
C5 2 4 1000p
D6 2 3 diode
D7 1 2 diode
C6 1 3 1000p
C7 2 0 1000p
C8 99 1 1000p
D8 0 1 diode
V1 99 0 SIN(0 5 1k)
.model diode d
.tran 0.01m 50m
.end
Cockcroft-Walton (x8) waveforms. Output is v(8). Without diode drops, each doubler yields 2Vin, or 10 V; considering two diode drops, (10-1.4) = 8.6 V is realistic. For a total of 4 doublers one expects 4·8.6 = 34.4 V out of 40 V.
Consulting Figure above, v(2) is about right; however, v(8) is less than 30 V instead of the anticipated 34.4 V. The bane of the Cockcroft-Walton multiplier is that each additional stage adds less than the previous stage. Thus, a practical limit to the number of stages exists. It is possible to overcome this limitation with a modification to the basic circuit. [ABR] Also note the time scale of 40 msec compared with 5 ms for previous circuits. It required 40 msec for the voltages to rise to a terminal value for this circuit. The netlist in Figure above has a “.tran 0.01m 50m” command to extend the simulation time to 50 msec, though only 40 msec is plotted. The Cockcroft-Walton multiplier serves as a more efficient high voltage source for photomultiplier tubes requiring up to 2000 V. [ABR] Moreover, the tube has numerous dynodes, terminals requiring connection to the lower voltage “even numbered” nodes. The series string of multiplier taps replaces a heat generating resistive voltage divider of previous designs. An AC line operated Cockcroft-Walton multiplier provides high voltage to “ion generators” for neutralizing electrostatic charge and for air purifiers. A popular use of diodes is for the mitigation of inductive “kickback:” the pulses of high voltage produced when direct current through an inductor is interrupted. Take, for example, this simple circuit in Figure below with no protection against inductive kickback. Inductive kickback: (a) Switch open. (b) Switch closed, electron current flows from battery through coil which has polarity matching battery. Magnetic field stores energy. (c) Switch open, current still flows in coil due to collapsing magnetic field. Note polarity change on coil. (d) Coil voltage vs time. When the pushbutton switch is actuated, current goes through the inductor, producing a magnetic field around it. When the switch is de-actuated, its contacts open, interrupting current through the inductor, and causing the magnetic field to rapidly collapse. Because the voltage induced in a coil of wire is directly proportional to the rate of change over time of magnetic flux (Faraday's Law: e = NdΦ/dt), this rapid collapse of magnetism around the coil produces a high voltage “spike”. If the inductor in question is an electromagnet coil, such as in a solenoid or relay (constructed for the purpose of creating a physical force via its magnetic field when energized), the effect of inductive “kickback” serves no useful purpose at all. In fact, it is quite detrimental to the switch, as it causes excessive arcing at the contacts, greatly reducing their service life. Of the practical methods for mitigating the high voltage transient created when the switch is opened, none is so simple as the so-called commutating diode in Figure below. Inductive kickback with protection: (a) Switch open. (b) Switch closed, storing energy in magnetic field. (c) Switch open, inductive kickback is shorted by diode. In this circuit, the diode is placed in parallel with the coil, such that it will be reverse-biased when DC voltage is applied to the coil through the switch. Thus, when the coil is energized, the diode conducts no current in Figure above (b). However, when the switch is opened, the coil's inductance responds to the decrease in current by inducing a voltage of reverse polarity, in an effort to maintain current at the same magnitude and in the same direction.
This sudden reversal of voltage polarity across the coil forward-biases the diode, and the diode provides a current path for the inductor's current, so that its stored energy is dissipated slowly rather than suddenly in Figure above (c). As a result, the voltage induced in the coil by its collapsing magnetic field is quite low: merely the forward voltage drop of the diode, rather than hundreds of volts as before. Thus, the switch contacts experience a voltage drop equal to the battery voltage plus about 0.7 volts (if the diode is silicon) during this discharge time. In electronics parlance, commutation refers to the reversal of voltage polarity or current direction. Thus, the purpose of a commutating diode is to act whenever voltage reverses polarity, for example, on an inductor coil when current through it is interrupted. A less formal term for a commutating diode is snubber, because it “snubs” or “squelches” the inductive kickback. A noteworthy disadvantage of this method is the extra time it imparts to the coil's discharge. Because the induced voltage is clamped to a very low value, its rate of magnetic flux change over time is comparatively slow. Remember that Faraday's Law describes the magnetic flux rate-of-change (dΦ/dt) as being proportional to the induced, instantaneous voltage (e or v). If the instantaneous voltage is limited to some low figure, then the rate of change of magnetic flux over time will likewise be limited to a low (slow) figure. If an electromagnet coil is “snubbed” with a commutating diode, the magnetic field will dissipate at a relatively slow rate compared to the original scenario (no diode) where the field disappeared almost instantly upon switch release. The amount of time in question will most likely be less than one second, but it will be measurably slower than without a commutating diode in place. This may be an intolerable consequence if the coil is used to actuate an electromechanical relay, because the relay will possess a natural “time delay” upon coil de-energization, and an unwanted delay of even a fraction of a second may wreak havoc in some circuits. Unfortunately, one cannot eliminate the high-voltage transient of inductive kickback and maintain fast de-magnetization of the coil: Faraday's Law will not be violated. However, if slow de-magnetization is unacceptable, a compromise may be struck between transient voltage and time by allowing the coil's voltage to rise to some higher level (but not so high as without a commutating diode in place). The schematic in Figure below shows how this can be done. (a) Commutating diode with series resistor. (b) Voltage waveform. (c) Level with no diode. (d) Level with diode, no resistor. (e) Compromise level with diode and resistor. A resistor placed in series with the commutating diode allows the coil's induced voltage to rise to a level greater than the diode's forward voltage drop, thus hastening the process of de-magnetization. This, of course, will place the switch contacts under greater stress, and so the resistor must be sized to limit that transient voltage at an acceptable maximum level.
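The clamping action is easy to demonstrate in SPICE if the pushbutton is modeled as a voltage-controlled switch. All values below (coil resistance and inductance, switch model, timing) are assumptions chosen only to make the effect visible, not a model of any particular relay.
*SPICE: commutating diode clamps inductive kickback (sketch, values assumed)
V1 1 0 DC 12
* R1 and L1 together stand in for the relay coil
R1 1 2 10
L1 2 3 100m
* commutating diode in parallel with the coil
D1 3 1 diode
* S1 plays the pushbutton: closed while v(4) is high, opened at t=5 ms
S1 3 0 4 0 swmod
V2 4 0 PULSE(5 0 5m 1u 1u 10m 20m)
.model diode d
.model swmod sw vt=2.5 ron=0.1 roff=1meg
.tran 0.01m 10m
.end
Plotting v(3) shows the coil's low end clamped to roughly 12.7 V (battery voltage plus one diode drop) when the switch opens; deleting D1 and re-running produces a spike limited only by the switch's off resistance.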
Diodes can perform switching and digital logic operations. Forward and reverse bias switch a diode between the low and high impedance states, respectively. Thus, it serves as a switch. Diodes can perform digital logic functions: AND and OR. Diode logic was used in early digital computers. It only finds limited application today. Sometimes it is convenient to fashion a single logic gate from a few diodes. Diode AND gate. An AND gate is shown in Figure above. Logic gates have inputs and an output (Y) which is a function of the inputs. The inputs to the gate are high (logic 1), say 10 V, or low, 0 V (logic 0). In the figure, the logic levels are generated by switches. If a switch is up, the input is effectively high (1). If the switch is down, it connects a diode cathode to ground, which is low (0). The output depends on the combination of inputs at A and B. The inputs and output are customarily recorded in a “truth table” at (c) to describe the logic of a gate. At (a) all inputs are high (1). This is recorded in the last line of the truth table at (c). The output, Y, is high (1) due to the V+ on the top of the resistor. It is unaffected by open switches. At (b) switch A pulls the cathode of the connected diode low, pulling output Y low (0.7 V). This is recorded in the third line of the truth table. The second line of the truth table describes the output with the switches reversed from (b). Switch B pulls the diode and output low. The first line of the truth table records Output = 0 for both inputs low (0). The truth table describes a logical AND function. Summary: both inputs A and B high yields a high (1) out. A two input OR gate composed of a pair of diodes is shown in Figure below. If both inputs are logic low at (a) as simulated by both switches “downward,” the output Y is pulled low by the resistor. This logic zero is recorded in the first line of the truth table at (c). If one of the inputs is high as at (b), or the other input is high, or both inputs are high, the diode(s) conduct(s), pulling the output Y high. These results are recorded in the second through fourth lines of the truth table. Summary: any input “high” is a high out at Y. OR gate: (a) First line, truth table (TT). (b) Third line TT. (d) Logical OR of power line supply and back-up battery. A backup battery may be OR-wired with a line operated DC power supply in Figure above (d) to power a load, even during a power failure. With AC power present, the line supply powers the load, assuming that it is a higher voltage than the battery. In the event of a power failure, the line supply voltage drops to 0 V; the battery powers the load. The diodes must be in series with the power sources to prevent a failed line supply from draining the battery, and to prevent it from overcharging the battery when line power is available. Does your PC retain its BIOS settings when powered off? Does your VCR (video cassette recorder) retain the clock setting after a power failure? (PC yes, old VCR no, new VCR yes.)
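The AND gate's truth table is easy to verify with a net list. The 10 V logic level follows the description above; the node numbers and pull-up resistor value are assumptions.
*SPICE: two-input diode AND gate (sketch, values assumed)
V1 1 0 DC 10
* pull-up resistor from V+ to output Y (node 2)
R1 1 2 10k
D1 2 3 diode
D2 2 4 diode
* input A low (0), input B high (1); edit these sources to try other truth table rows
V2 3 0 DC 0
V3 4 0 DC 10
.model diode d
.op
.end
As written, D1 conducts and the output v(2) sits near 0.7 V, matching the third line of the truth table; setting both inputs to 10 V reverse-biases both diodes and v(2) rises to 10 V. Reversing both diodes and returning the resistor to ground instead of V+ turns the same sketch into the OR gate described above.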
Diodes can switch analog signals. A reverse biased diode appears to be an open circuit. A forward biased diode is a low resistance conductor. The only problem is isolating the AC signal being switched from the DC control signal. The circuit in Figure below is a parallel resonant network: a resonant tuning inductor paralleled by one (or more) of the switched resonator capacitors. This parallel LC resonant circuit could be a preselector filter for a radio receiver. It could be the frequency determining network of an oscillator (not shown). The digital control lines may be driven by a microprocessor interface. Diode switch: A digital control signal (low) selects a resonator capacitor by forward biasing the switching diode. The large value DC blocking capacitor grounds the resonant tuning inductor for AC while blocking DC. It would have a low reactance compared to the parallel LC reactances. This prevents the anode DC voltage from being shorted to ground by the resonant tuning inductor. A switched resonator capacitor is selected by pulling the corresponding digital control low. This forward biases the switching diode. The DC current path is from +5 V through an RF choke (RFC), a switching diode, and an RFC to ground via the digital control. The purpose of the RFC at the +5 V is to keep AC out of the +5 V supply. The RFC in series with the digital control is to keep AC out of the external control line. The decoupling capacitor shorts the little AC leaking through the RFC to ground, bypassing the external digital control line. With all three digital control lines high (≥+5 V), no switched resonator capacitors are selected due to diode reverse bias. Pulling one or more lines low selects one or more switched resonator capacitors, respectively. As more capacitors are switched in parallel with the resonant tuning inductor, the resonant frequency decreases. The reverse biased diode capacitance may be substantial in very high frequency or ultra high frequency circuits. PIN diodes may be used as switches for lower capacitance. If we connect a diode and resistor in series with a DC voltage source so that the diode is forward-biased, the voltage drop across the diode will remain fairly constant over a wide range of power supply voltages as in Figure below (a). According to the “diode equation,” I = IS(e^(V/nVT) - 1), the current through a forward-biased PN junction is an exponential function of the forward voltage drop. Because this is an exponential function, current rises quite rapidly for modest increases in voltage drop. Another way of considering this is to say that voltage dropped across a forward-biased diode changes little for large variations in diode current. In the circuit shown in Figure below (a), diode current is limited by the voltage of the power supply, the series resistor, and the diode's voltage drop, which as we know doesn't vary much from 0.7 volts. If the power supply voltage were to be increased, the resistor's voltage drop would increase almost the same amount, and the diode's voltage drop just a little. Conversely, a decrease in power supply voltage would result in an almost equal decrease in resistor voltage drop, with just a little decrease in diode voltage drop. In a word, we could summarize this behavior by saying that the diode is regulating the voltage drop at approximately 0.7 volts. Voltage regulation is a useful diode property to exploit. Suppose we were building some kind of circuit which could not tolerate variations in power supply voltage, but needed to be powered by a chemical battery, whose voltage changes over its lifetime. We could form a circuit as shown and connect the circuit requiring steady voltage across the diode, where it would receive an unchanging 0.7 volts. This would certainly work, but most practical circuits of any kind require a power supply voltage in excess of 0.7 volts to properly function. One way we could increase our voltage regulation point would be to connect multiple diodes in series, so that their individual forward voltage drops of 0.7 volts each would add to create a larger total. For instance, if we had ten diodes in series, the regulated voltage would be ten times 0.7, or 7 volts in Figure below (b). Forward biased Si reference: (a) single diode, 0.7 V, (b) 10 diodes in series, 7.0 V.
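A quick simulation confirms the stacked-reference idea; the default SPICE diode model drops close to 0.7 V per junction at a few milliamps. The source and resistor values here are assumed.
*SPICE: ten-diode series voltage reference (sketch, values assumed)
V1 1 0 DC 12
R1 1 2 1k
D1 2 3 diode
D2 3 4 diode
D3 4 5 diode
D4 5 6 diode
D5 6 7 diode
D6 7 8 diode
D7 8 9 diode
D8 9 10 diode
D9 10 11 diode
D10 11 0 diode
.model diode d
.op
.end
The operating point should show v(2) near 7 V, and changing V1 between, say, 9 V and 15 V moves v(2) only slightly.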
So long as the battery voltage never sagged below 7 volts, there would always be about 7 volts dropped across the ten-diode “stack.” If larger regulated voltages are required, we could either use more diodes in series (an inelegant option, in my opinion), or try a fundamentally different approach. We know that diode forward voltage is a fairly constant figure under a wide range of conditions, but so is reverse breakdown voltage, and breakdown voltage is typically much, much greater than forward voltage. If we reversed the polarity of the diode in our single-diode regulator circuit and increased the power supply voltage to the point where the diode “broke down” (could no longer withstand the reverse-bias voltage impressed across it), the diode would similarly regulate the voltage at that breakdown point, not allowing it to increase further as in Figure below (a). (a) Reverse biased Si small-signal diode breaks down at about 100 V. (b) Symbol for zener diode. Unfortunately, when normal rectifying diodes “break down,” they usually do so destructively. However, it is possible to build a special type of diode that can handle breakdown without failing completely. This type of diode is called a zener diode, and its symbol looks like Figure above (b). When forward-biased, zener diodes behave much the same as standard rectifying diodes: they have a forward voltage drop which follows the “diode equation” and is about 0.7 volts. In reverse-bias mode, they do not conduct until the applied voltage reaches or exceeds the so-called zener voltage, at which point the diode is able to conduct substantial current, and in doing so will try to limit the voltage dropped across it to that zener voltage point. So long as the power dissipated by this reverse current does not exceed the diode's thermal limits, the diode will not be harmed. Zener diodes are manufactured with zener voltages ranging anywhere from a few volts to hundreds of volts. This zener voltage changes slightly with temperature, and like common carbon-composition resistor values, may be anywhere from 5 percent to 10 percent in error from the manufacturer's specifications. However, this stability and accuracy is generally good enough for the zener diode to be used as a voltage regulator device in the common power supply circuit in Figure below. Zener diode regulator circuit, zener voltage = 12.6 V. Please take note of the zener diode's orientation in the above circuit: the diode is reverse-biased, and intentionally so. If we had oriented the diode in the “normal” way, so as to be forward-biased, it would only drop 0.7 volts, just like a regular rectifying diode. If we want to exploit this diode's reverse breakdown properties, we must operate it in its reverse-bias mode. So long as the power supply voltage remains above the zener voltage (12.6 volts, in this example), the voltage dropped across the zener diode will remain at approximately 12.6 volts. Like any semiconductor device, the zener diode is sensitive to temperature. Excessive temperature will destroy a zener diode, and because it both drops voltage and conducts current, it produces its own heat in accordance with Joule's Law (P=IE). Therefore, one must be careful to design the regulator circuit in such a way that the diode's power dissipation rating is not exceeded. Interestingly enough, when zener diodes fail due to excessive power dissipation, they usually fail shorted rather than open.
A diode failed in this manner is readily detected: it drops almost zero voltage when biased either way, like a piece of wire. Let's examine a zener diode regulating circuit mathematically, determining all voltages, currents, and power dissipations. Taking the same form of circuit shown earlier, we'll perform calculations assuming a zener voltage of 12.6 volts, a power supply voltage of 45 volts, and a series resistor value of 1000 Ω (we'll regard the zener voltage to be exactly 12.6 volts so as to avoid having to qualify all figures as “approximate”), as in Figure below (a). If the zener diode's voltage is 12.6 volts and the power supply's voltage is 45 volts, there will be 32.4 volts dropped across the resistor (45 volts - 12.6 volts = 32.4 volts). 32.4 volts dropped across 1000 Ω gives 32.4 mA of current in the circuit. (Figure below (b)) (a) Zener voltage regulator with 1000 Ω resistor. (b) Calculation of voltage drops and current. Power is calculated by multiplying current by voltage (P=IE), so we can calculate power dissipations for both the resistor and the zener diode quite easily: P(resistor) = (32.4 mA)(32.4 V) = 1.05 W, and P(diode) = (32.4 mA)(12.6 V) = 408.2 mW. A zener diode with a power rating of 0.5 watt would be adequate, as would a resistor rated for 1.5 or 2 watts of dissipation. If excessive power dissipation is detrimental, then why not design the circuit for the least amount of dissipation possible? Why not just size the resistor for a very high value of resistance, thus severely limiting current and keeping power dissipation figures very low? Take this circuit, for example, with a 100 kΩ resistor instead of a 1 kΩ resistor. Note that both the power supply voltage and the diode's zener voltage in Figure below are identical to the last example: Zener regulator with 100 kΩ resistor. With only 1/100 of the current we had before (324 µA instead of 32.4 mA), both power dissipation figures should be 100 times smaller: P(resistor) = 10.5 mW and P(diode) = 4.08 mW. Seems ideal, doesn't it? Less power dissipation means lower operating temperatures for both the diode and the resistor, and also less wasted energy in the system, right? A higher resistance value does reduce power dissipation levels in the circuit, but it unfortunately introduces another problem. Remember that the purpose of a regulator circuit is to provide a stable voltage for another circuit. In other words, we're eventually going to power something with 12.6 volts, and this something will have a current draw of its own. Consider our first regulator circuit, this time with a 500 Ω load connected in parallel with the zener diode in Figure below. Zener regulator with 1000 Ω series resistor and 500 Ω load. If 12.6 volts is maintained across a 500 Ω load, the load will draw 25.2 mA of current. In order for the 1 kΩ series “dropping” resistor to drop 32.4 volts (reducing the power supply's voltage of 45 volts down to 12.6 across the zener), it still must conduct 32.4 mA of current. This leaves 7.2 mA of current through the zener diode. Now consider our “power-saving” regulator circuit with the 100 kΩ dropping resistor, delivering power to the same 500 Ω load. What it is supposed to do is maintain 12.6 volts across the load, just like the last circuit. However, as we will see, it cannot accomplish this task. (Figure below) Zener non-regulator with 100 kΩ series resistor and 500 Ω load. With the larger value of dropping resistor in place, there will only be about 224 mV of voltage across the 500 Ω load, far less than the expected value of 12.6 volts! Why is this? If we actually had 12.6 volts across the load, it would draw 25.2 mA of current, as before.
This load current would have to go through the series dropping resistor as it did before, but with a new (much larger!) dropping resistor in place, the voltage dropped across that resistor with 25.2 mA of current going through it would be 2,520 volts! Since we obviously don't have that much voltage supplied by the battery, this cannot happen. The situation is easier to comprehend if we temporarily remove the zener diode from the circuit and analyze the behavior of the two resistors alone in Figure below. Non-regulator with zener removed. Both the 100 kΩ dropping resistor and the 500 Ω load resistance are in series with each other, giving a total circuit resistance of 100.5 kΩ. With a total voltage of 45 volts and a total resistance of 100.5 kΩ, Ohm's Law (I=E/R) tells us that the current will be 447.76 µA. Figuring voltage drops across both resistors (E=IR), we arrive at 44.776 volts and 224 mV, respectively. If we were to re-install the zener diode at this point, it would “see” 224 mV across it as well, being in parallel with the load resistance. This is far below the zener breakdown voltage of the diode and so it will not “break down” and conduct current. For that matter, at this low voltage the diode wouldn't conduct even if it were forward-biased! Thus, the diode ceases to regulate voltage. At least 12.6 volts must be dropped across it to “activate” it. The analytical technique of removing a zener diode from a circuit and seeing whether or not enough voltage is present to make it conduct is a sound one. Just because a zener diode happens to be connected in a circuit doesn't guarantee that the full zener voltage will always be dropped across it! Remember that zener diodes work by limiting voltage to some maximum level; they cannot make up for a lack of voltage. In summary, any zener diode regulating circuit will function so long as the load's resistance is equal to or greater than some minimum value. If the load resistance is too low, it will draw too much current, dropping too much voltage across the series dropping resistor, leaving insufficient voltage across the zener diode to make it conduct. When the zener diode stops conducting current, it can no longer regulate voltage, and the load voltage will fall below the regulation point. Our regulator circuit with the 100 kΩ dropping resistor must be good for some value of load resistance, though. To find this acceptable load resistance value, we can use a table to calculate resistance in the two-resistor series circuit (no diode), inserting the known values of total voltage and dropping resistor resistance, and calculating for an expected load voltage of 12.6 volts. With 45 volts of total voltage and 12.6 volts across the load, we should have 32.4 volts across Rdropping. With 32.4 volts across the dropping resistor, and 100 kΩ worth of resistance in it, the current through it will be 324 µA. Being a series circuit, the current is equal through all components at any given time, so the load current is also 324 µA. Calculating load resistance is now a simple matter of Ohm's Law (R = E/I = 12.6 V / 324 µA), giving us 38.889 kΩ. Thus, if the load resistance is exactly 38.889 kΩ, there will be 12.6 volts across it, diode or no diode. Any load resistance smaller than 38.889 kΩ will result in a load voltage less than 12.6 volts, diode or no diode. With the diode in place, the load voltage will be regulated to a maximum of 12.6 volts for any load resistance greater than 38.889 kΩ.
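The whole analysis can be double-checked in SPICE by giving the diode model a breakdown voltage, the same “bv” parameter used for the zener clipper below. Component values follow the worked example; node numbers are assumed.
*SPICE: zener regulator, 1 kOhm dropping resistor, 500 Ohm load (sketch)
V1 1 0 DC 45
R1 1 2 1k
* zener is reverse-biased: cathode at node 2, anode grounded
D1 0 2 zdiode
R2 2 0 500
.model zdiode d bv=12.6
.op
.end
The operating point shows v(2) near 12.6 V, with about 25.2 mA in the load and the remaining 7.2 mA in the zener. Changing R1 to 100 kΩ and re-running collapses v(2) to about 224 mV, the non-regulating case computed above.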
With the original value of 1 kΩ for the dropping resistor, our regulator circuit was able to adequately regulate voltage even for a load resistance as low as 500 Ω. What we see is a tradeoff between power dissipation and acceptable load resistance. The higher-value dropping resistor gave us less power dissipation, at the expense of raising the acceptable minimum load resistance value. If we wish to regulate voltage for low-value load resistances, the circuit must be prepared to handle higher power dissipation. Zener diodes regulate voltage by acting as complementary loads, drawing more or less current as necessary to ensure a constant voltage drop across the load. This is analogous to regulating the speed of an automobile by braking rather than by varying the throttle position: not only is it wasteful, but the brakes must be built to handle all the engine's power when the driving conditions don't demand it. Despite this fundamental inefficiency of design, zener diode regulator circuits are widely employed due to their sheer simplicity. In high-power applications where the inefficiencies would be unacceptable, other voltage-regulating techniques are applied. But even then, small zener-based circuits are often used to provide a “reference” voltage to drive a more efficient amplifier circuit controlling the main power. Zener diodes are manufactured in standard voltage ratings listed in Table below. The table “Common zener diode voltages” lists common voltages for 0.3 W and 1.3 W parts. The wattage corresponds to die and package size, and is the power that the diode may dissipate without damage. [Table: Common zener diode voltages] Zener diode clipper: A clipping circuit which clips the peaks of a waveform at approximately the zener voltage of the diodes. The circuit of Figure below has two zeners connected series opposing to symmetrically clip a waveform at nearly the zener voltage. The resistor limits current drawn by the zeners to a safe value.
*SPICE 03445.eps
D1 4 0 diode
D2 4 2 diode
R1 2 1 1.0k
V1 1 0 SIN(0 20 1k)
.model diode d bv=10
.tran 0.001m 2m
.end
Zener diode clipper: The zener breakdown voltage for the diodes is set at 10 V by the diode model parameter “bv=10” in the SPICE net list in Figure above. This causes the zeners to clip at about 10 V. The back-to-back diodes clip both peaks. For a positive half-cycle, the top zener is reverse biased, breaking down at the zener voltage of 10 V. The lower zener drops approximately 0.7 V since it is forward biased. Thus, a more accurate clipping level is 10+0.7=10.7 V. Similar negative half-cycle clipping occurs at -10.7 V. Figure below shows the clipping level at a little over ±10 V. Zener diode clipper: v(1) input is clipped at waveform v(2). Schottky diodes are constructed of a metal-to-N junction rather than a P-N semiconductor junction. Also known as hot-carrier diodes, Schottky diodes are characterized by fast switching times (low reverse-recovery time), low forward voltage drop (typically 0.25 to 0.4 volts for a metal-silicon junction), and low junction capacitance. The schematic symbol for a Schottky diode is shown in Figure below. Schottky diode schematic symbol. The forward voltage drop (VF), reverse-recovery time (trr), and junction capacitance (CJ) of Schottky diodes are closer to ideal than the average “rectifying” diode. This makes them well suited for high-frequency applications.
Unfortunately, though, Schottky diodes typically have lower forward current (IF) and reverse voltage (VRRM and VDC) ratings than rectifying diodes and are thus unsuitable for applications involving substantial amounts of power. They are, however, used in low voltage switching regulator power supplies. Schottky diode technology finds broad application in high-speed computer circuits, where the fast switching time equates to high speed capability, and the low forward voltage drop equates to less power dissipation when conducting. Switching regulator power supplies operating at 100's of kHz cannot use conventional silicon diodes as rectifiers because of their slow switching speed. When the signal applied to a diode changes from forward to reverse bias, conduction continues for a short time, while carriers are being swept out of the depletion region. Conduction only ceases after this reverse recovery time (trr) has expired. Schottky diodes have a shorter reverse recovery time. Regardless of switching speed, the 0.7 V forward voltage drop of silicon diodes causes poor efficiency in low voltage supplies. This is not a problem in, say, a 10 V supply. In a 1 V supply the 0.7 V drop is a substantial portion of the output. One solution is to use a Schottky power diode which has a lower forward drop. Tunnel diodes exploit a strange quantum phenomenon called tunneling to provide a negative resistance forward-bias characteristic. When a small forward-bias voltage is applied across a tunnel diode, it begins to conduct current. (Figure below (b)) As the voltage is increased, the current increases and reaches a peak value called the peak current (IP). If the voltage is increased a little more, the current actually begins to decrease until it reaches a low point called the valley current (IV). If the voltage is increased further yet, the current begins to increase again, this time without decreasing into another “valley.” The schematic symbol for the tunnel diode is shown in Figure below (a). Tunnel diode (a) Schematic symbol. (b) Current vs voltage plot. (c) Oscillator. The forward voltages necessary to drive a tunnel diode to its peak and valley currents are known as peak voltage (VP) and valley voltage (VV), respectively. The region on the graph where current is decreasing while applied voltage is increasing (between VP and VV on the horizontal scale) is known as the region of negative resistance. Tunnel diodes, also known as Esaki diodes in honor of their Japanese inventor Leo Esaki, are able to transition between peak and valley current levels very quickly, “switching” between high and low states of conduction much faster than even Schottky diodes. Tunnel diode characteristics are also relatively unaffected by changes in temperature. Reverse breakdown voltage versus doping level. After Sze [SGG] Tunnel diodes are heavily doped in both the P and N regions, 1000 times the level in a rectifier. This can be seen in Figure above. Standard diodes are to the far left, zener diodes near to the left, and tunnel diodes to the right of the dashed line. The heavy doping produces an unusually thin depletion region. This produces an unusually low reverse breakdown voltage with high leakage. The thin depletion region causes high capacitance. To overcome this, the tunnel diode junction area must be tiny. The forward diode characteristic consists of two regions: a normal forward diode characteristic with current rising exponentially beyond VF, 0.3 V for Ge, 0.7 V for Si.
Between 0 V and VF is an additional “negative resistance” characteristic peak. This is due to quantum mechanical tunneling involving the dual particle-wave nature of electrons. The depletion region is thin enough compared with the equivalent wavelength of the electron that they can tunnel through. They do not have to overcome the normal forward diode voltage VF. The energy level of the conduction band of the N-type material overlaps the level of the valence band in the P-type region. With increasing voltage, tunneling begins; the levels overlap; current increases, up to a point. As voltage increases further, the energy levels overlap less; current decreases with increasing voltage. This is the “negative resistance” portion of the curve. Tunnel diodes are not good rectifiers, as they have relatively high “leakage” current when reverse-biased. Consequently, they find application only in special circuits where their unique tunnel effect has value. To exploit the tunnel effect, these diodes are maintained at a bias voltage somewhere between the peak and valley voltage levels, always in a forward-biased polarity (anode positive, and cathode negative). Perhaps the most common application of a tunnel diode is in simple high-frequency oscillator circuits as in Figure above (c), where it allows a DC voltage source to contribute power to an LC “tank” circuit, the diode conducting when the voltage across it reaches the peak (tunnel) level and effectively insulating at all other voltages. The resistors bias the tunnel diode at a few tenths of a volt centered on the negative resistance portion of the characteristic curve. The L-C resonant circuit may be a section of waveguide for microwave operation. Oscillation to 5 GHz is possible. At one time the tunnel diode was the only solid-state microwave amplifier available. Tunnel diodes were popular starting in the 1960's. They were longer lived than traveling wave tube amplifiers, an important consideration in satellite transmitters. Tunnel diodes are also resistant to radiation because of the heavy doping. Today various transistors operate at microwave frequencies. Even small signal tunnel diodes are expensive and difficult to find today. There is one remaining manufacturer of germanium tunnel diodes, and none for silicon devices. They are sometimes used in military equipment because they are insensitive to radiation and large temperature changes. There has been some research involving possible integration of silicon tunnel diodes into CMOS integrated circuits. They are thought to be capable of switching at 100 GHz in digital circuits. The sole manufacturer of germanium devices produces them one at a time. A batch process for silicon tunnel diodes must be developed, then integrated with conventional CMOS processes. [SZL] The Esaki tunnel diode should not be confused with the resonant tunneling diode (Chapter 2), of more complex construction from compound semiconductors. The RTD is a more recent development capable of higher speed. Diodes, like all semiconductor devices, are governed by the principles described in quantum physics. One of these principles is the emission of specific-frequency radiant energy whenever electrons fall from a higher energy level to a lower energy level. This is the same principle at work in a neon lamp, the characteristic pink-orange glow of ionized neon due to the specific energy transitions of its electrons in the midst of an electric current.
The unique color of a neon lamp's glow is due to the neon gas inside the tube, and not to the particular amount of current through the tube or voltage between the two electrodes. Neon gas glows pinkish-orange over a wide range of ionizing voltages and currents. Each chemical element has its own “signature” emission of radiant energy when its electrons “jump” between different, quantized energy levels. Hydrogen gas, for example, glows red when ionized; mercury vapor glows blue. This is what makes spectrographic identification of elements possible. Electrons flowing through a PN junction experience similar transitions in energy level, and emit radiant energy as they do so. The frequency of this radiant energy is determined by the crystal structure of the semiconductor material, and the elements comprising it. Some semiconductor junctions, composed of special chemical combinations, emit radiant energy within the spectrum of visible light as the electrons change energy levels. Simply put, these junctions glow when forward biased. A diode intentionally designed to glow like a lamp is called a light-emitting diode, or LED. Forward biased silicon diodes give off heat as electrons and holes from the N-type and P-type regions, respectively, recombine at the junction. In a forward biased LED, the recombination of electrons and holes in the active region in Figure below (c) yields photons. This process is known as electroluminescence. To give off photons, the potential barrier through which the electrons fall must be higher than for a silicon diode. The forward diode drop can range to a few volts for some color LEDs. Diodes made from a combination of the elements gallium, arsenic, and phosphorus (called gallium-arsenide-phosphide) glow bright red, and are some of the most common LEDs manufactured. By altering the chemical constituency of the PN junction, different colors may be obtained. Early generations of LEDs were red, green, yellow, orange, and infra-red, later generations included blue and ultraviolet, with violet being the latest color added to the selection. Other colors may be obtained by combining two or more primary-color (red, green, and blue) LEDs together in the same package, sharing the same optical lens. This allowed for multicolor LEDs, such as tricolor LEDs (commercially available in the 1980's) using red and green (which can create yellow) and later RGB LEDs (red, green, and blue), which cover the entire color spectrum. The schematic symbol for an LED is a regular diode shape inside of a circle, with two small arrows pointing away (indicating emitted light), shown in Figure below. LED, Light Emitting Diode: (a) schematic symbol. (b) Flat side and short lead of device correspond to cathode, as well as the internal arrangement of the cathode. (c) Cross section of LED die. This notation of having two small arrows pointing away from the device is common to the schematic symbols of all light-emitting semiconductor devices. Conversely, if a device is light-activated (meaning that incoming light stimulates it), then the symbol will have two small arrows pointing toward it. LEDs can sense light. They generate a small voltage when exposed to light, much like a solar cell on a small scale. This property can be gainfully applied in a variety of light-sensing circuits. Because LEDs are made of different chemical substances than silicon diodes, their forward voltage drops will be different.
Typically, LEDs have much larger forward voltage drops than rectifying diodes, anywhere from about 1.6 volts to over 3 volts, depending on the color. Typical operating current for a standard-sized LED is around 20 mA. When operating an LED from a DC voltage source greater than the LED's forward voltage, a series-connected "dropping" resistor must be included to prevent full source voltage from damaging the LED. Consider the example circuit in Figure below (a) using a 6 V source.

Setting LED current at 20 mA: (a) for a 6 V source, (b) for a 24 V source.

With the LED dropping 1.6 volts, there will be 4.4 volts dropped across the resistor. Sizing the resistor for an LED current of 20 mA is as simple as taking its voltage drop (4.4 volts) and dividing by circuit current (20 mA), in accordance with Ohm's Law (R=E/I). This gives us a figure of 220 Ω. Calculating power dissipation for this resistor, we take its voltage drop and multiply by its current (P=IE), and end up with 88 mW, well within the rating of a 1/8 watt resistor.

Higher battery voltages will require larger-value dropping resistors, and possibly higher power rating resistors as well. Consider the example in Figure above (b) for a supply voltage of 24 volts: Here, the dropping resistor must be increased to a size of 1.12 kΩ to drop 22.4 volts at 20 mA so that the LED still receives only 1.6 volts. This also makes for a higher resistor power dissipation: 448 mW, nearly one-half a watt of power! Obviously, a resistor rated for 1/8 watt power dissipation or even 1/4 watt dissipation will overheat if used here.

Dropping resistor values need not be precise for LED circuits. Suppose we were to use a 1 kΩ resistor instead of a 1.12 kΩ resistor in the circuit shown above. The result would be a slightly greater circuit current and LED voltage drop, resulting in a brighter light from the LED and slightly reduced service life. A dropping resistor with too much resistance (say, 1.5 kΩ instead of 1.12 kΩ) will result in less circuit current, less LED voltage, and a dimmer light. LEDs are quite tolerant of variation in applied power, so you need not strive for perfection in sizing the dropping resistor.

Multiple LEDs are sometimes required, say in lighting. If LEDs are operated in parallel, each must have its own current limiting resistor as in Figure below (a) to ensure that the currents divide more equally. However, it is more efficient to operate LEDs in series (Figure below (b)) with a single dropping resistor. As the number of series LEDs increases, the series resistor value must decrease to maintain current, to a point. The total of the LED forward voltages (Vf) in the series string cannot exceed the capability of the power supply. Multiple series strings may be employed as in Figure below (c). In spite of equalizing the currents in multiple LEDs, the brightness of the devices may not match due to variations in the individual parts. Parts can be selected for brightness matching for critical applications.

Multiple LEDs: (a) In parallel, (b) in series, (c) series-parallel
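The dropping resistor calculation above is easy to check in SPICE, the simulator used later in this chapter. The netlist below is a minimal sketch of the 6 V circuit of Figure above (a); the LED model parameters (IS, N) are hypothetical values chosen only to give roughly a 1.6 V drop near 20 mA, not a real vendor model.

* Hedged sketch: one LED with a dropping resistor, 6 V source
* R1 is the 220 ohm resistor from the Ohm's Law calculation above.
* LED model values are hypothetical, giving about 1.6 V at 20 mA.
V1 1 0 DC 6
R1 1 2 220
D1 2 0 LED1
.model LED1 D (IS=0.9f N=2)
.op
.end

The .op operating point should report close to 20 mA through R1, matching the hand calculation. A series string is simulated the same way by chaining additional D cards between the resistor and ground.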
Also because of their unique chemical makeup, LEDs have much, much lower peak-inverse voltage (PIV) ratings than ordinary rectifying diodes. A typical LED might only be rated at 5 volts in reverse-bias mode. Therefore, when using alternating current to power an LED, connect a protective rectifying diode anti-parallel with the LED to prevent reverse breakdown every other half-cycle as in Figure below (a).

Driving an LED with AC

The anti-parallel diode in Figure above can be replaced with an anti-parallel LED. The resulting pair of anti-parallel LEDs illuminate on alternating half-cycles of the AC sinewave. This configuration draws 20 mA, splitting it equally between the LEDs on alternating AC half-cycles. Each LED only receives 10 mA due to this sharing. The same is true of the LED anti-parallel combination with a rectifier: the LED only receives 10 mA. If 20 mA were required for the LED(s), the resistor value could be halved.

The forward voltage drop of LEDs is inversely related to the wavelength (λ). As wavelength decreases going from infrared to visible colors to ultraviolet, Vf increases. While this trend is most obvious in the various devices from a single manufacturer, the voltage range for a particular color LED from various manufacturers varies. This range of voltages is shown in Table below.

Optical and electrical properties of LEDs

|LED||λ nm (1 nm = 10^-9 m)||Vf (from)||Vf (to)|
|white, blue, violet||-||3||4|

As lamps, LEDs are superior to incandescent bulbs in many ways. First and foremost is efficiency: LEDs output far more light power per watt of electrical input than an incandescent lamp. This is a significant advantage if the circuit in question is battery-powered, efficiency translating to longer battery life. Second is the fact that LEDs are far more reliable, having a much greater service life than incandescent lamps. This is because LEDs are "cold" devices: they operate at much cooler temperatures than an incandescent lamp with a white-hot metal filament, susceptible to breakage from mechanical and thermal shock. Third is the high speed at which LEDs may be turned on and off. This advantage is also due to the "cold" operation of LEDs: they don't have to overcome thermal inertia in transitioning from off to on or vice versa. For this reason, LEDs are used to transmit digital (on/off) information as pulses of light, conducted in empty space or through fiber-optic cable, at very high rates of speed (millions of pulses per second).

LEDs excel in monochromatic lighting applications like traffic signals and automotive tail lights. Incandescents are abysmal in this application since they require filtering, decreasing efficiency. LEDs do not require filtering.

One major disadvantage of using LEDs as sources of illumination is their monochromatic (single-color) emission. No one wants to read a book under the light of a red, green, or blue LED. However, if used in combination, LED colors may be mixed for a more broad-spectrum glow. A new broad spectrum light source is the white LED. While small white panel indicators have been available for many years, illumination grade devices are still in development.

Efficiency of lighting

|Lamp type||Efficiency lumen/watt||Life hrs||Notes|
|White LED, future||100||100,000||R&D target|
|Halogen||15-17||2000||high quality light|
|Compact fluorescent||50-100||10,000||cost effective|
|Sodium vapor, lp||70-200||20,000||outdoor|

A white LED is a blue LED exciting a phosphor which emits yellow light. The blue plus yellow approximates white light. The nature of the phosphor determines the characteristics of the light. A red phosphor may be added to improve the quality of the yellow plus blue mixture at the expense of efficiency. Table above compares white illumination LEDs to expected future devices and other conventional lamps. Efficiency is measured in lumens of light output per watt of input power. If the 50 lumens/watt device can be improved to 100 lumens/watt, white LEDs will be comparable to compact fluorescent lamps in efficiency.
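Returning to the AC drive scheme described earlier in this section, here is a minimal SPICE sketch of an anti-parallel LED pair on a sinewave source. The source amplitude, resistor value, and LED model parameters are all assumed values for illustration, not taken from the figure.

* Hedged sketch: anti-parallel LEDs driven from AC
* 12 V peak, 60 Hz source and 520 ohm resistor are assumed,
* sized for roughly 20 mA peak. D1 conducts on positive
* half-cycles, D2 on negative half-cycles.
V1 1 0 SIN(0 12 60)
R1 1 2 520
D1 2 0 LED1
D2 0 2 LED1
.model LED1 D (IS=0.9f N=2)
.tran 0.1m 50m
.end

A transient plot of the two diode currents shows each LED conducting on alternate half-cycles, each carrying current only half the time, consistent with the current sharing described above.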
LEDs in general have been a major subject of R&D since the 1960's. Because of this it is impractical to cover all geometries, chemistries, and characteristics that have been created over the decades. The early devices were relatively dim and took moderate currents. The efficiencies have been improved in later generations to the point that it is hazardous to look closely and directly into an illuminated LED, which can result in eye damage; yet the LEDs required only a minor increase in forward voltage (Vf) and current. Modern high intensity devices have reached 180 lumens using 0.7 Amps (82 lumens/watt, Luxeon Rebel series cool white), and even higher intensity models can use even higher currents with a corresponding increase in brightness. Other developments, such as quantum dots, are the subject of current research, so expect to see new things for these devices in the future.

The laser diode is a further development upon the regular light-emitting diode, or LED. The term "laser" itself is actually an acronym, despite the fact that it is often written in lower-case letters. "Laser" stands for Light Amplification by Stimulated Emission of Radiation, and refers to another strange quantum process whereby characteristic light emitted by electrons falling from high-level to low-level energy states in a material stimulates other electrons in a substance to make similar "jumps," the result being a synchronized output of light from the material. This synchronization extends to the actual phase of the emitted light, so that all light waves emitted from a "lasing" material are not just the same frequency (color), but also the same phase as each other, so that they reinforce one another and are able to travel in a very tightly-confined, nondispersing beam. This is why laser light stays so remarkably focused over long distances: each and every light wave coming from the laser is in step with the others.

(a) White light of many wavelengths. (b) Monochromatic LED light, a single wavelength. (c) Phase coherent laser light.

Incandescent lamps produce "white" (mixed-frequency, or mixed-color) light as in Figure above (a). Regular LEDs produce monochromatic light: same frequency (color), but different phases, resulting in similar beam dispersion in Figure above (b). Laser LEDs produce coherent light: light that is both monochromatic (single-color) and monophasic (single-phase), resulting in precise beam confinement as in Figure above (c).

Laser light finds wide application in the modern world: everything from surveying, where a straight and nondispersing light beam is very useful for precise sighting of measurement markers, to the reading and writing of optical disks, where only the narrowness of a focused laser beam is able to resolve the microscopic "pits" in the disk's surface comprising the binary 1's and 0's of digital information.

Some laser diodes require special high-power "pulsing" circuits to deliver large quantities of voltage and current in short bursts. Other laser diodes may be operated continuously at lower power. In the continuous laser, laser action occurs only within a certain range of diode current, necessitating some form of current-regulator circuit. As laser diodes age, their power requirements may change (more current required for less output power), but it should be remembered that low-power laser diodes, like LEDs, are fairly long-lived devices, with typical service lives in the tens of thousands of hours.
A photodiode is a diode optimized to produce an electron current flow in response to irradiation by ultraviolet, visible, or infrared light. Silicon is most often used to fabricate photodiodes, though germanium and gallium arsenide can also be used. The junction through which light enters the semiconductor must be thin enough to pass most of the light on to the active region (depletion region) where light is converted to electron-hole pairs.

In Figure below a shallow P-type diffusion into an N-type wafer produces a PN junction near the surface of the wafer. The P-type layer needs to be thin to pass as much light as possible. A heavy N+ diffusion on the back of the wafer makes contact with metalization. The top metalization may be a fine grid of metallic fingers on the top of the wafer for large cells. In small photodiodes, the top contact might be a sole bond wire contacting the bare P-type silicon top.

Photodiode: Schematic symbol and cross section.

Light entering the top of the photodiode stack falls off exponentially with depth in the silicon. A thin top P-type layer allows most photons to pass into the depletion region where electron-hole pairs are formed. The electric field across the depletion region due to the built-in diode potential causes electrons to be swept into the N-layer, holes into the P-layer. Actually, electron-hole pairs may be formed in any of the semiconductor regions. However, those formed in the depletion region are most likely to be separated into the respective N and P-regions. Many of the electron-hole pairs formed in the P and N-regions recombine; only a few do so in the depletion region. Thus, a few electron-hole pairs in the N and P-regions, and most in the depletion region, contribute to photocurrent, that current resulting from light falling on the photodiode.

The voltage out of a photodiode may be observed. Operation in this photovoltaic (PV) mode is not linear over a large dynamic range, though it is sensitive and has low noise at frequencies less than 100 kHz. The preferred mode of operation is often photocurrent (PC) mode, because the current is linearly proportional to light flux over several decades of intensity, and higher frequency response can be achieved. PC mode is achieved with reverse bias or zero bias on the photodiode. A current amplifier (transimpedance amplifier) should be used with a photodiode in PC mode. Linearity and PC mode are achieved as long as the diode does not become forward biased.

High speed operation is often required of photodiodes, as opposed to solar cells. Speed is a function of diode capacitance, which can be minimized by decreasing cell area. Thus, a sensor for a high speed fiber optic link will use an area no larger than necessary, say 1 mm2. Capacitance may also be decreased by increasing the thickness of the depletion region, in the manufacturing process or by increasing the reverse bias on the diode.

PIN diode

The p-i-n diode or PIN diode is a photodiode with an intrinsic layer between the P and N-regions as in Figure below. The P-Intrinsic-N structure increases the distance between the P and N conductive layers, decreasing capacitance, increasing speed. The volume of the photo sensitive region also increases, enhancing conversion efficiency. The bandwidth can extend to tens of GHz. PIN photodiodes are preferred for high sensitivity and high speed at moderate cost.

PIN photodiode: The intrinsic region increases the thickness of the depletion region.
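The photocurrent (PC) mode described above can be illustrated with a minimal SPICE sketch. The photocurrent is modeled as an ideal current source in parallel with the diode junction, a common textbook equivalent; the 10 µA photocurrent, 5 V bias, sense resistor, and diode model parameters are all assumed values, and a plain sense resistor stands in for the transimpedance amplifier normally used.

* Hedged sketch: photodiode in photocurrent (PC) mode
* Iph models the light-generated current (assumed 10 uA).
* D1 is the junction itself, reverse biased (cathode at node 1).
* Rsense converts the photocurrent to a readable voltage.
Vbias 1 0 DC 5
Iph 1 2 DC 10u
D1 2 1 PD1
Rsense 2 0 1k
.model PD1 D (IS=1n CJO=10p)
.op
.end

The operating point shows about 10 mV across Rsense, tracking the photocurrent linearly as long as the diode stays reverse biased, which is the linearity condition stated above.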
Avalanche photodiode: An avalanche photodiode (APD) designed to operate at high reverse bias exhibits an electron multiplier effect analogous to a photomultiplier tube. The reverse bias can run from tens of volts to nearly 2000 V. The high level of reverse bias accelerates photon-created electron-hole pairs in the intrinsic region to a high enough velocity to free additional carriers from collisions with the crystal lattice. Thus, many electrons per photon result. The motivation for the APD is to achieve amplification within the photodiode to overcome noise in external amplifiers. This works to some extent. However, the APD creates noise of its own. At high speed the APD is superior to a PIN diode and amplifier combination, though not for low speed applications. APDs are expensive, roughly the price of a photomultiplier tube. So, they are only competitive with PIN photodiodes for niche applications. One such application is single photon counting as applied to nuclear physics.

A photodiode optimized for efficiently delivering power to a load is the solar cell. It operates in photovoltaic (PV) mode because it is forward biased by the voltage developed across the load resistance.

Monocrystalline solar cells are manufactured in a process similar to semiconductor processing. This involves growing a single crystal boule from molten high purity silicon (P-type), though not as high purity as for semiconductors. The boule is diamond sawed or wire sawed into wafers. The ends of the boule must be discarded or recycled, and silicon is lost in the saw kerf. Since modern cells are nearly square, silicon is lost in squaring the boule. Cells may be etched to texture (roughen) the surface to help trap light within the cell. Considerable silicon is lost in producing the 10 or 15 cm square wafers. These days (2007) it is common for solar cell manufacturers to purchase the wafers at this stage from a supplier to the semiconductor industry.

P-type wafers are loaded back-to-back into fused silica boats, exposing only the outer surface to the N-type dopant in the diffusion furnace. The diffusion process forms a thin N-type layer on the top of the cell. The diffusion also shorts the edges of the cell front to back. The periphery must be removed by plasma etching to unshort the cell. Silver and/or aluminum paste is screened on the back of the cell, and a silver grid on the front. These are sintered in a furnace for good electrical contact. (Figure below) The cells are wired in series with metal ribbons. For charging 12 V batteries, 36 cells at approximately 0.5 V are vacuum laminated between glass and a polymer metal back. The glass may have a textured surface to help trap light.

Silicon Solar cell

The ultimate commercial high efficiency (21.5%) single crystal silicon solar cells have all contacts on the back of the cell. The active area of the cell is increased by moving the top (-) contact conductors to the back of the cell. The top (-) contacts are normally made to the N-type silicon on top of the cell. In Figure below the (-) contacts are made to N+ diffusions on the bottom, interleaved with (+) contacts. The top surface is textured to aid in trapping light within the cell. [VSW]

High efficiency solar cell with all contacts on the back. Adapted from Figure 1 [VSW]

Multicrystalline silicon cells start out as molten silicon cast into a rectangular mold. As the silicon cools, it crystallizes into a few large (mm to cm sized) randomly oriented crystals instead of a single one.
The remainder of the process is the same as for single crystal cells. The finished cells show lines dividing the individual crystals, as if the cells were cracked. The efficiency is not quite as high as that of single crystal cells due to losses at crystal grain boundaries. The cell surface cannot be roughened by etching due to the random orientation of the crystals. However, an antireflective coating improves efficiency. These cells are competitive for all but space applications.

Three layer cell: The highest efficiency solar cell is a stack of three cells tuned to absorb different portions of the solar spectrum. Though three cells can be stacked atop one another, a monolithic single crystal structure of 20 semiconductor layers is more compact. At 32% efficiency, it is now (2007) favored over silicon for space application. The high cost prevents it from finding many earth bound applications other than concentrators based on lenses or mirrors. Intensive research has recently produced a version enhanced for terrestrial concentrators at 400 - 1000 suns and 40.7% efficiency. This requires either a big inexpensive Fresnel lens or reflector and a small area of the expensive semiconductor. This combination is thought to be competitive with inexpensive silicon cells for solar power plants. [RRK] [LZy]

Metal organic chemical vapor deposition (MOCVD) deposits the layers atop a P-type germanium substrate. The top layers of N and P-type gallium indium phosphide (GaInP), having a band gap of 1.85 eV, absorb ultraviolet and visible light. These wavelengths have enough energy to exceed the band gap. Longer wavelengths (lower energy) do not have enough energy to create electron-hole pairs, and pass on through to the next layer. A gallium arsenide layer having a band gap of 1.42 eV absorbs near infrared light. Finally the germanium layer and substrate absorb far infrared. The series of three cells produces a voltage which is the sum of the voltages of the three cells. The voltage developed by each material is 0.4 V less than the band gap energy listed in Table below. For example, for GaInP: 1.8 eV/e - 0.4 V = 1.4 V. For all three the voltage is 1.4 V + 1.0 V + 0.3 V = 2.7 V. [BRB]

High efficiency triple layer solar cell.

|Layer||Band gap||Light absorbed|
|Gallium indium phosphide||1.8 eV||UV, visible|
|Gallium arsenide||1.4 eV||near infrared|
|Germanium||0.7 eV||far infrared|

Crystalline solar cell arrays have a long usable life. Many arrays are guaranteed for 25 years, and believed to be good for 40 years. They do not suffer initial degradation compared with amorphous silicon. Both single and multicrystalline solar cells are based on silicon wafers. The silicon is both the substrate and the active device layers. Much silicon is consumed. This kind of cell has been around for decades, and takes approximately 86% of the solar electric market. For further information about crystalline solar cells see Honsberg. [CHS]

Amorphous silicon thin film solar cells use tiny amounts of the active raw material, silicon. Approximately half the cost of conventional crystalline solar cells is the solar cell grade silicon. The thin film deposition process reduces this cost. The downside is that efficiency is about half that of conventional crystalline cells. Moreover, efficiency degrades by 15-35% upon exposure to sunlight. A 7% efficient cell soon ages to 5% efficiency. Thin film amorphous silicon cells work better than crystalline cells in dim light. They are put to good use in solar powered calculators.
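Whatever the cell technology, a solar cell's electrical behavior can be approximated in SPICE by the common single-diode model: a photocurrent source in parallel with the cell's own junction, plus a small series resistance. The sketch below uses assumed values (3 A photocurrent, IS=1n, 10 mΩ series resistance) loosely representative of a single silicon cell in full sun; real cells require fitted parameters.

* Hedged sketch: single-diode model of one silicon solar cell
* Iph is the assumed photo-generated current in full sun.
* D1 is the cell's own junction, which diverts current as
* the terminal voltage rises. Vary Rload to trace the I-V curve.
Iph 0 1 DC 3
D1 1 0 CELL
Rs 1 2 10m
Rload 2 0 0.2
.model CELL D (IS=1n N=1)
.op
.end

Sweeping Rload (or replacing it with a swept voltage source) traces the familiar I-V curve: nearly constant current up to about 0.5 V, then a rapid drop toward the roughly 0.6 V open-circuit voltage set by the junction.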
Non-silicon based solar cells make up about 7% of the market. These are thin-film polycrystalline products. Various compound semiconductors are the subject of research and development. Some non-silicon products are in production. Generally, the efficiency is better than amorphous silicon, but not nearly as good as crystalline silicon.

Cadmium telluride as a polycrystalline thin film on metal or glass can have a higher efficiency than amorphous silicon thin films. If deposited on metal, that layer is the negative contact to the cadmium telluride thin film. The transparent P-type cadmium sulfide atop the cadmium telluride serves as a buffer layer. The positive top contact is transparent, electrically conductive fluorine doped tin oxide. These layers may be laid down on a sacrificial foil in place of the glass in the process in the following paragraph. The sacrificial foil is removed after the cell is mounted to a permanent substrate.

Cadmium telluride solar cell on glass or metal.

A process for depositing cadmium telluride on glass begins with the deposition of N-type transparent, electrically conductive tin oxide on a glass substrate. The next layer is P-type cadmium telluride, though N-type or intrinsic may be used. These two layers constitute the NP junction. A P+ (heavy P-type) layer of lead telluride aids in establishing a low resistance contact. A metal layer makes the final contact to the lead telluride. These layers may be laid down by vacuum deposition, chemical vapor deposition (CVD), screen printing, electrodeposition, or atmospheric pressure chemical vapor deposition (APCVD) in helium. [KWM]

A variation of cadmium telluride is mercury cadmium telluride. Its lower bulk resistance and lower contact resistance improve efficiency over cadmium telluride.

Copper Indium Gallium diSelenide solar cell (CIGS)

Copper Indium Gallium diSelenide: A most promising thin film solar cell at this time (2007) is manufactured on a ten inch wide roll of flexible polyimide: Copper Indium Gallium diSelenide (CIGS). It has a spectacular efficiency of 10%. Though commercial grade crystalline silicon cells surpassed this decades ago, CIGS should be cost competitive. The deposition processes are at a low enough temperature to use a polyimide polymer as a substrate instead of metal or glass. (Figure above) The CIGS is manufactured in a roll to roll process, which should drive down costs. CIGS cells may also be produced by an inherently low cost electrochemical process. [EET]

Solar cell properties

|Solar cell type||Maximum efficiency||Practical efficiency||Notes|
|Selenium, polycrystalline||0.7%||-||1883, Charles Fritts|
|Silicon, single crystal||-||4%||1950's, first silicon solar cell|
|Silicon, single crystal PERL, terrestrial, space||25%||-||solar cars, cost=100x commercial|
|Silicon, single crystal, commercial terrestrial||24%||14-17%||$5-$10/peak watt|
|Cypress Semiconductor, Sunpower, silicon single crystal||21.5%||19%||all contacts on cell back|
|Gallium Indium Phosphide/ Gallium Arsenide/ Germanium, single crystal, multilayer||-||32%||Preferred for space.|
|Advanced terrestrial version of above.||-||40.7%||Uses optical concentrator.|
|Silicon, amorphous||13%||5-7%||Degrades in sun light. Good indoors for calculators or cloudy outdoors.|
|Cadmium telluride, polycrystalline||16%||-||glass or metal substrate|
|Copper indium gallium diselenide, polycrystalline||18%||10%||10 inch flexible polymer web. [NTH]|
|Organic polymer, 100% plastic||4.5%||-||R&D project|

A variable capacitance diode is known as a varicap diode or as a varactor. If a diode is reverse biased, an insulating depletion region forms between the two semiconductive layers. In many diodes the width of the depletion region may be changed by varying the reverse bias. This varies the capacitance. This effect is accentuated in varicap diodes. The schematic symbols are shown in Figure below, one version of which is packaged as a common cathode dual diode.

Varicap diode: Capacitance varies with reverse bias. This varies the frequency of a resonant network.

If a varicap diode is part of a resonant circuit as in Figure above, the frequency may be varied with a control voltage, Vcontrol. A large capacitance, low Xc, in series with the varicap prevents Vcontrol from being shorted out by inductor L. As long as the series capacitor is large, it has minimal effect on the frequency of the resonant circuit. Coptional may be used to set the center resonant frequency. Vcontrol can then vary the frequency about this point. Note that the required active circuitry to make the resonant network oscillate is not shown. For an example of a varicap diode tuned AM radio receiver see "electronic varicap diode tuning," Ch 9.

Some varicap diodes may be referred to as abrupt, hyperabrupt, or super hyperabrupt. These refer to the change in junction capacitance with changing reverse bias as being abrupt, hyperabrupt, or super hyperabrupt. These diodes offer a relatively large change in capacitance. This is useful when oscillators or filters are swept over a large frequency range. Varying the bias of abrupt varicaps over the rated limits changes capacitance by a 4:1 ratio, hyperabrupt by 10:1, super hyperabrupt by 20:1.

Varactor diodes may be used in frequency multiplier circuits. See "Practical analog semiconductor circuits," Varactor multiplier.

The snap diode, also known as the step recovery diode, is designed for use in high ratio frequency multipliers up to 20 GHz. When the diode is forward biased, charge is stored in the PN junction. This charge is drawn out as the diode is reverse biased. The diode looks like a low impedance current source during forward bias. When reverse bias is applied it still looks like a low impedance source until all the charge is withdrawn. It then "snaps" to a high impedance state causing a voltage impulse, rich in harmonics. An application is a comb generator, a generator of many harmonics. Moderate power 2x and 4x multipliers are another application.

A PIN diode is a fast low capacitance switching diode. Do not confuse a PIN switching diode with a PIN photodiode. A PIN diode is manufactured like a silicon switching diode with an intrinsic region added between the PN junction layers. This yields a thicker depletion region, the insulating layer at the junction of a reverse biased diode. This results in lower capacitance than a reverse biased switching diode.

PIN diode: Cross section aligned with schematic symbol.

PIN diodes are used in place of switching diodes in radio frequency (RF) applications, for example, a T/R switch. The 1N4007 1000 V, 1 A general purpose power diode is reported to be usable as a PIN switching diode. The high voltage rating of this diode is achieved by the inclusion of an intrinsic layer dividing the PN junction. This intrinsic layer makes the 1N4007 a PIN diode. Another PIN diode application is as the antenna switch for a direction finder receiver.
PIN diodes serve as variable resistors when the forward bias is varied. One such application is the voltage variable attenuator. The low capacitance characteristic of PIN diodes extends the flat frequency response of the attenuator to microwave frequencies.

An IMPATT diode is reverse biased above the breakdown voltage. The high doping levels produce a thin depletion region. The resulting high electric field rapidly accelerates carriers which free other carriers in collisions with the crystal lattice. Holes are swept into the P+ region. Electrons drift toward the N regions. The cascading effect creates an avalanche current which increases even as voltage across the junction decreases. The pulses of current lag the voltage peak across the junction. A "negative resistance" effect in conjunction with a resonant circuit produces oscillations at high power levels (high for semiconductors).

IMPATT diode: Oscillator circuit and heavily doped P and N layers.

The resonant circuit in the schematic diagram of Figure above is the lumped circuit equivalent of a waveguide section, where the IMPATT diode is mounted. DC reverse bias is applied through a choke which keeps RF from being lost in the bias supply. This may be a section of waveguide known as a bias tee. Low power RADAR transmitters may use an IMPATT diode as a power source. They are too noisy for use in the receiver. [YMCW]

A Gunn diode is composed solely of N-type semiconductor. As such, it is not a true diode. Figure below shows a lightly doped N- layer surrounded by heavily doped N+ layers. A voltage applied across the N-type gallium arsenide Gunn diode creates a strong electric field across the lightly doped N- layer.

Gunn diode: Oscillator circuit and cross section of only N-type semiconductor diode.

As voltage is increased, conduction increases due to electrons in a low energy conduction band. As voltage is increased beyond the threshold of approximately 1 V, electrons move from the lower conduction band to the higher energy conduction band where they no longer contribute to conduction. In other words, as voltage increases, current decreases, a negative resistance condition. The oscillation frequency is determined by the transit time of the conduction electrons, which is inversely related to the thickness of the N- layer. The frequency may be controlled to some extent by embedding the Gunn diode into a resonant circuit. The lumped circuit equivalent shown in Figure above is actually a coaxial transmission line or waveguide. Gallium arsenide Gunn diodes are available for operation from 10 to 200 GHz at 5 to 65 mW power. Gunn diodes may also serve as amplifiers. [CHW] [IAP]

The Shockley diode is a 4-layer thyristor used to trigger larger thyristors. It only conducts in one direction when triggered by a voltage exceeding the breakover voltage, about 20 V. See "Thyristors," The Shockley Diode. The bidirectional version is called a diac. See "Thyristors," The DIAC.

A constant-current diode, also known as a current-limiting diode or current-regulating diode, does exactly what its name implies: it regulates current through it to some maximum level. The constant-current diode is a two terminal version of a JFET. If we try to force more current through a constant-current diode than its current-regulation point, it simply "fights back" by dropping more voltage.
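Because the constant-current diode is essentially a JFET with its gate tied to its source, its behavior is easy to sketch in SPICE before looking at the test circuit described next. The JFET model values here are hypothetical, chosen so that the regulation current IDSS = BETA x VTO^2 works out to 4 mA.

* Hedged sketch: constant-current "diode" as a gate-source-shorted JFET
* Drain at node 1; gate and source tied together at node 2.
* Hypothetical model: regulates near IDSS = 1m x (-2)^2 = 4 mA.
V1 1 0 DC 10
J1 1 2 2 NJ1
Rload 2 0 100
.model NJ1 NJF (VTO=-2 BETA=1m)
.dc V1 0 30 0.1
.end

The DC sweep reproduces the shape described below: current rises with voltage at first, then levels off near 4 mA once the JFET saturates, with further supply voltage simply dropped across the device.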
If we were to build the circuit in Figure below (a) and plot diode current against diode voltage, we'd get a graph that rises at first and then levels off at the current regulation point as in Figure below (b).

Constant current diode: (a) Test circuit, (b) current vs voltage characteristic.

One application for a constant-current diode is to automatically limit current through an LED or laser diode over a wide range of power supply voltages as in Figure below. Of course, the constant-current diode's regulation point should be chosen to match the LED or laser diode's optimum forward current. This is especially important for the laser diode, not so much for the LED, as regular LEDs tend to be more tolerant of forward current variations. Another application is in the charging of small secondary-cell batteries, where a constant charging current leads to predictable charging times. Of course, large secondary-cell battery banks might also benefit from constant-current charging, but constant-current diodes tend to be very small devices, limited to regulating currents in the milliamp range.

Diodes manufactured from silicon carbide are capable of high temperature operation to 400°C. This could be in a high temperature environment: down hole oil well logging, gas turbine engines, auto engines. Or, operation in a moderate environment at high power dissipation. Nuclear and space applications are promising as SiC is 100 times more resistant to radiation compared with silicon. SiC is a better conductor of heat than any metal. Thus, SiC is better than silicon at conducting away heat. Breakdown voltage is several kV. SiC power devices are expected to reduce electrical energy losses in the power industry by a factor of 100.

Diodes based on organic chemicals have been produced using low temperature processes. Hole-rich and electron-rich conductive polymers may be ink jet printed in layers. Most of the research and development is of the organic LED (OLED). However, development of inexpensive printable organic RFID (radio frequency identification) tags is ongoing. In this effort, a pentacene organic rectifier has been operated at 50 MHz. Rectification to 800 MHz is a development goal. An inexpensive metal insulator metal (MIM) diode acting like a back-to-back zener diode clipper has been developed. Also, a tunnel diode like device has been fabricated.

The SPICE circuit simulation program provides for modeling diodes in circuit simulations. The diode model is based on the characterization of individual devices as described in a product data sheet and manufacturing process characteristics not listed. Some information has been extracted from a 1N4004 data sheet in Figure below.

Data sheet 1N4004 excerpt, after [DI4].

The diode statement begins with a diode element name which must begin with "d" plus optional characters. Example diode element names include: d1, d2, dtest, da, db, d101. Two node numbers specify the connection of the anode and cathode, respectively, to other components. The node numbers are followed by a model name, referring to a subsequent ".model" statement. The model statement line begins with ".model," followed by the model name matching one or more diode statements. Next, a "d" indicates a diode is being modeled. The remainder of the model statement is a list of optional diode parameters of the form ParameterName=ParameterValue. None are used in Example below. Example2 has some parameters defined. For a list of diode parameters, see Table below.
General form:
d[name] [anode] [cathode] [modelname]
.model [modelname] d ([parmtr1=x] [parmtr2=y] . . .)

Example:
d1 1 2 mod1
.model mod1 d

Example2:
D2 1 2 Da1N4004
.model Da1N4004 D (IS=18.8n RS=0 BV=400 IBV=5.00u CJO=30p M=0.333 N=2)

The easiest approach to take for a SPICE model is the same as for a data sheet: consult the manufacturer's web site. Table below lists the model parameters for some selected diodes. A fallback strategy is to build a SPICE model from those parameters listed on the data sheet. A third strategy, not considered here, is to take measurements of an actual device. Then, calculate, compare and adjust the SPICE parameters to the measurements.

Diode SPICE parameters

|Symbol||Name||Parameter||Units||Default|
|IS||IS||Saturation current (diode equation)||A||1E-14|
|RS||RS||Parasitic resistance (series resistance)||Ω||0|
|n||N||Emission coefficient, 1 to 2||-||1|
|CD(0)||CJO||Zero-bias junction capacitance||F||0|
|m||M||Junction grading coefficient||-||0.5|
|-||-||0.33 for linearly graded junction||-||-|
|-||-||0.5 for abrupt junction||-||-|
|pi||XTI||IS temperature exponent||-||3.0|
|-||-||pn junction: 3.0||-||-|
|kf||KF||Flicker noise coefficient||-||0|
|af||AF||Flicker noise exponent||-||1|
|FC||FC||Forward bias depletion capacitance coefficient||-||0.5|
|BV||BV||Reverse breakdown voltage||V||∞|
|IBV||IBV||Reverse breakdown current||A||1E-3|

If diode parameters are not specified as in "Example" model above, the parameters take on the default values listed in Table above and Table below. These defaults model integrated circuit diodes. These are certainly adequate for preliminary work with discrete devices. For more critical work, use SPICE models supplied by the manufacturer [DIn], SPICE vendors, and other sources. [smi]

SPICE parameters for selected diodes; sk = schottky, Ge = germanium; else silicon.

|Diode||IS||RS||N||TT||CJO||M||VJ||EG||XTI||BV||IBV|
|1N4004 data sheet||18.8n||-||2||-||30p||0.333||-||-||-||400||5u|

Otherwise, derive some of the parameters from the data sheet. First select a value for SPICE parameter N between 1 and 2. It is required for the diode equation (n). Massobrio [PAGM] pp 9, recommends "... n, the emission coefficient is usually about 2." In Table above, we see that power rectifiers 1N3891 (12 A) and 10A04 (10 A) both use about 2. The first four in the table are not relevant because they are schottky, schottky, germanium, and silicon small signal, respectively. The saturation current, IS, is derived from the diode equation, a value of (VD, ID) on the graph in Figure above, and N=2 (n in the diode equation).

ID = IS(e^(VD/nVT) - 1)
VT = 26 mV at 25°C
n = 2.0
VD = 0.925 V at 1 A from graph

1 A = IS(e^((0.925 V)/((2)(26 mV))) - 1)
IS = 18.8E-9

The numerical values of IS=18.8n and N=2 are entered in the last line of Table above for comparison to the manufacturer's model for the 1N4004, which is considerably different. RS defaults to 0 for now. It will be estimated later. The important DC static parameters are N, IS, and RS.

Rashid [MHR] suggests that TT, τD, the transit time, be approximated from the reverse recovery stored charge QRR, a data sheet parameter (not available on our data sheet), and IF, the forward current:

τD = QRR/IF

We take the TT=0 default for lack of QRR. Though it would be reasonable to take TT for a similar rectifier like the 10A04 at 4.32u. The 1N3891 TT is not a valid choice because it is a fast recovery rectifier. CJO, the zero bias junction capacitance, is estimated from the VR vs CJ graph in Figure above. The capacitance at the nearest to zero voltage on the graph is 30 pF at 1 V.
If simulating high speed transient response, as in switching regulator power supplies, TT and CJO parameters must be provided.

The junction grading coefficient M is related to the doping profile of the junction. This is not a data sheet item. The default is 0.5 for an abrupt junction. We opt for M=0.333 corresponding to a linearly graded junction. The power rectifiers in Table above use lower values for M than 0.5.

We take the default values for VJ and EG. Many more diodes use VJ=0.6 than shown in Table above. However the 10A04 rectifier uses the default, which we use for our 1N4004 model (Da1N4004 in Table above). Use the default EG=1.11 for silicon diodes and rectifiers. Table above lists values for schottky and germanium diodes. Take XTI=3, the default IS temperature coefficient for silicon devices. See Table above for XTI for schottky diodes.

The abbreviated data sheet, Figure above, lists IR = 5 µA @ VR = 400 V, corresponding to IBV=5u and BV=400 respectively. The 1N4004 SPICE parameters derived from the data sheet are listed in the last line of Table above for comparison to the manufacturer's model listed above it. BV is only necessary if the simulation exceeds the reverse breakdown voltage of the diode, as is the case for zener diodes. IBV, reverse breakdown current, is frequently omitted, but may be entered if provided with BV.

Figure below shows a circuit to compare the manufacturer's model, the model derived from the data sheet, and the default model using default parameters. The three dummy 0 V sources are necessary for diode current measurement. The source V4 is swept from 0 to 1.4 V in 0.2 mV steps. See the .DC statement in the netlist in Table below. DI1N4004 is the manufacturer's diode model, Da1N4004 is our derived diode model.

SPICE circuit for comparison of manufacturer model (D1), calculated datasheet model (D2), and default model (D3).

SPICE netlist parameters: (D1) DI1N4004 manufacturer's model, (D2) Da1N4004 datasheet derived, (D3) default diode model.

*SPICE circuit <03468.eps> from XCircuit v3.20
D1 1 5 DI1N4004
V1 5 0 0
D2 1 3 Da1N4004
V2 3 0 0
D3 1 4 Default
V3 4 0 0
V4 1 0 1
.DC V4 0 1400mV 0.2m
.model Da1N4004 D (IS=18.8n RS=0 BV=400 IBV=5.00u CJO=30p
+M=0.333 N=2.0 TT=0)
.MODEL DI1N4004 D (IS=76.9n RS=42.0m BV=400 IBV=5.00u CJO=39.8p
+M=0.333 N=1.45 TT=4.32u)
.MODEL Default D
.end

We compare the three models in Figure below, and to the datasheet graph data in Table below. VD is the diode voltage versus the diode currents for the manufacturer's model, our calculated datasheet model, and the default diode model. The last column "1N4004 graph" is from the datasheet voltage versus current curve in Figure above which we attempt to match. Comparison of the currents for the three models to the last column shows that the default model is good at low currents, the manufacturer's model is good at high currents, and our calculated datasheet model is best of all up to 1 A. Agreement is almost perfect at 1 A because the IS calculation is based on diode voltage at 1 A. Our model grossly overstates current above 1 A.

First trial of manufacturer model, calculated datasheet model, and default model.

Comparison of manufacturer model, calculated datasheet model, and default model to 1N4004 datasheet graph of V vs I.
                  model        model        model     1N4004
index VD          manufacturer datasheet    default   graph
3500 7.000000e-01 1.612924e+00 1.416211e-02 5.674683e-03 0.01
4001 8.002000e-01 3.346832e+00 9.825960e-02 2.731709e-01 0.13
4500 9.000000e-01 5.310740e+00 6.764928e-01 1.294824e+01 0.7
4625 9.250000e-01 5.823654e+00 1.096870e+00 3.404037e+01 1.0
5000 1.000000e-00 7.395953e+00 4.675526e+00 6.185078e+02 2.0
5500 1.100000e+00 9.548779e+00 3.231452e+01 2.954471e+04 3.3
6000 1.200000e+00 1.174489e+01 2.233392e+02 1.411283e+06 5.3
6500 1.300000e+00 1.397087e+01 1.543591e+03 6.741379e+07 8.0
7000 1.400000e+00 1.621861e+01 1.066840e+04 3.220203e+09 12.

The solution is to increase RS from the default RS=0. Changing RS from 0 to 8m in the datasheet model causes the curve to intersect 10 A (not shown) at the same voltage as the manufacturer's model. Increasing RS to 28.6m shifts the curve further to the right as shown in Figure below. This has the effect of more closely matching our datasheet model to the datasheet graph (Figure above). Table below shows that the current 1.224470e+01 A at 1.4 V matches the graph at 12 A. However, the current at 0.925 V has degraded from 1.096870e+00 above to 7.318536e-01.

Second trial to improve calculated datasheet model compared with manufacturer model and default model.

Changing the Da1N4004 model statement from RS=0 to RS=28.6m decreases the current at VD=1.4 V to 12.2 A:

.model Da1N4004 D (IS=18.8n RS=28.6m BV=400 IBV=5.00u CJO=30p
+M=0.333 N=2.0 TT=0)

                  model        model        1N4004
index VD          manufacturer datasheet    graph
3505 7.010000e-01 1.628276e+00 1.432463e-02 0.01
4000 8.000000e-01 3.343072e+00 9.297594e-02 0.13
4500 9.000000e-01 5.310740e+00 5.102139e-01 0.7
4625 9.250000e-01 5.823654e+00 7.318536e-01 1.0
5000 1.000000e-00 7.395953e+00 1.763520e+00 2.0
5500 1.100000e+00 9.548779e+00 3.848553e+00 3.3
6000 1.200000e+00 1.174489e+01 6.419621e+00 5.3
6500 1.300000e+00 1.397087e+01 9.254581e+00 8.0
7000 1.400000e+00 1.621861e+01 1.224470e+01 12.

Suggested reader exercise: decrease N so that the current at VD=0.925 V is restored to 1 A. This may increase the current (12.2 A) at VD=1.4 V, requiring an increase of RS to decrease current to 12 A.

Zener diode: There are two approaches to modeling a zener diode: set the BV parameter to the zener voltage in the model statement, or model the zener with a subcircuit containing a diode clamper set to the zener voltage. An example of the first approach sets the breakdown voltage BV to 15 for the 1N4469 15 V zener diode model (IBV optional):

.model D1N4469 D ( BV=15 IBV=17m )

The second approach models the zener with a subcircuit. Clamper D1 and VZ in Figure below models the 15 V reverse breakdown voltage of a 1N4744A zener diode. Diode DR accounts for the forward conduction of the zener in the subcircuit.

.SUBCKT DI-1N4744A 1 2
* Terminals A K
D1 1 2 DF
DZ 3 1 DR
VZ 2 3 13.7
.MODEL DF D ( IS=27.5p RS=0.620 N=1.10
+ CJO=78.3p VJ=1.00 M=0.330 TT=50.1n )
.MODEL DR D ( IS=5.49f RS=0.804 N=1.77 )
.ENDS

Zener diode subcircuit uses clamper (D1 and VZ) to model zener.

Tunnel diode: A tunnel diode may be modeled by a pair of field effect transistors (JFET) in a SPICE subcircuit. [KHM] An oscillator circuit is also shown in this reference.

Gunn diode: A Gunn diode may also be modeled by a pair of JFETs. [ISG] This reference shows a microwave relaxation oscillator.
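As a quick check of the first (BV parameter) zener approach above, the following sketch puts the BV=15 model to work as a simple shunt regulator. The 24 V source and 470 Ω resistor are assumed values chosen only for illustration.

* Hedged sketch: BV-parameter zener model as a shunt regulator
* Zener cathode at node 2, reverse biased into breakdown.
* R1 passes roughly (24 V - 15 V)/470 = 19 mA.
V1 1 0 DC 24
R1 1 2 470
D1 0 2 D1N4469
.model D1N4469 D (BV=15 IBV=17m)
.op
.end

The operating point should show node 2 held near 15 V, with the remaining 9 V dropped across R1.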
Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.

Jered Wierzbicki (December 2002): Pointed out error in diode equation -- Boltzmann's constant shown incorrectly.

Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
http://openbookproject.net/electricCircuits/Semi/SEMI_3.html
Three years later, Halley and Hooke rode to Cambridge to again try to persuade Newton to work on the problem of planetary motions. This time, when they arrived, Newton rummaged through his old papers, and showed Halley and Hooke the solution to the problem. (Well, he didn't show them the entire solution. To do the problem, Newton had to invent a whole new field of mathematics, which he called "fluxions" and which we now call Calculus. Newton didn't want to share his tricks with the rest of the world, so he showed Halley and Hooke only part of the solution. As for the rest, Newton had to spend the next several years re-deriving everything he did in tedious geometry. And, if you ever read his book, Principia, you'll see how tedious the geometry can be!)

Newton began with three laws of motion. They concern mass (how much stuff there is), velocity (the speed and direction of an object's motion), acceleration (the change in an object's velocity), and force (what it takes to change an object's motion). Newton's three laws of motion are

Law 1: Bodies in motion will continue in straight line motion (at the same speed), and bodies at rest will stay at rest, unless acted upon by an outside force.

Law 2: A body's change in motion is proportional to the force acting on it, and will be in the direction of the force. In mathematical terms, F = m a , where F is the force, m is the mass, and a is the acceleration. The larger the mass of the object, the more force it takes to accelerate it.

Law 3: For every force, there exists an equal and opposite force. For example, when you push the floor, the floor pushes back at you. That is how you jump off the ground.

In addition, Newton stated his law of gravity. Mathematically, the law is F = G M m / r^2 . In other words, the force due to gravity is an attractive force between two bodies, which depends only on the mass of the two bodies (M and m) and inversely on the square of the separation between the two bodies. (If you double the mass of the earth, its gravitational force will become twice as big; if you get 3 times further away from the earth, its gravitational force will be 3 x 3 = 9 times weaker.) Note that the force of gravity does not depend on the object's shape, density, or what it is made of. GRAVITY DEPENDS ON MASS AND DISTANCE AND NOTHING ELSE!

Newton's laws have many implications. First, they say that many of the motions we see every day are complex combinations of two motions. For instance, consider a baseball thrown in an arc. There is no force to stop it from going sideways, so it moves sideways at a constant speed. However, the earth's gravity pulls the ball downward. So at each moment, the motion of the ball is a combination of a constant sideways motion, and an acceleration downward towards the center of the earth.

Consider the orbit of the Moon about the earth. The gravitational attraction between the earth and Moon is causing the Moon to fall towards the earth. If the Moon were completely at rest, it would hit the earth. However, the Moon is also going sideways, so while the Moon is falling, it is also continuing to move sideways. As a result, it falls, but ``misses'' the earth. It is thus in orbit.

According to Newton's Law of Gravity, the attraction one body has on another body is F = G M m / r^2. This formula says that the attractive force never goes to zero --- unless you are at an infinite distance from a source, there is always some attraction. Here's a simple question. You are standing on the earth.
But how far are you from the earth? The distance you are from the soil you are standing on is essentially zero, but you are several thousand miles from the soil in Australia. What distance does the law of gravity refer to?

One of Newton's triumphs was his mathematical proof that, when you calculate the total gravity associated with a round object, it's as if all the mass of the object resided at the object's center. In other words, when you talk about the distance you are from something, you are talking about the distance to the center of the object. So, although you are now standing (or sitting) on the earth's surface, for purposes of gravitational calculations, you are about 4000 miles from the earth's center.

But this now presents a puzzle. When you stand on the earth, you are about 4000 miles from the earth's center. When astronauts orbit the earth, they are about 150 miles above the earth's surface. In other words, they are 4150 miles from the earth's center. That's not much of a difference. So how come you feel weight, while the astronauts float around weightless? Even Jules Verne, in his novel From the Earth to the Moon, got that wrong.

When you stand on the earth, the earth's gravity is pulling you down. But the ground doesn't let you move. You are pushing on the ground (due to gravity), and the ground is pushing on you (Newton's third law). The push of the ground causes you to feel weight. Now consider an astronaut in space. The earth's gravity is pulling him down, just like it is pulling you down. However, he is in a spacecraft, and gravity is pulling the spacecraft down as well. Both are falling together, and nothing is pushing back. The astronaut is weightless.

Mass is also important in gravitational calculations. Consider the orbit of the earth. The earth is going around the Sun. However, the Sun is also going around the earth. In fact, both the Sun and the Earth are going around a center of mass. Because the Sun is so much more massive than the earth, the orbit it takes due to the earth is very small. However, if the earth had the same mass as the Sun, both would orbit a point half-way between the two.

This has an implication for Kepler's third law. In fact, the law is wrong. Instead of P^2 = a^3, where P is the orbital period and a is the semi-major axis of the orbit, the correct equation is (M + m) P^2 = a^3 , where M is the mass of the Sun, and m is the mass of the planet that is in orbit. In other words, the relation between period and orbital size depends also on the masses involved. In the solar system, the mass of the Sun is so much greater than the mass of any planet, that m in the above equation can be neglected. (Note that for ease of math, astronomers measure orbital periods in years, orbital sizes in astronomical units, and masses in solar masses.)

Now, suppose the Sun were expanded out to twice its size. If it contained the same amount of matter, then its gravitational force would remain the same. Similarly, if the Sun were squeezed into the size of a basketball, its gravitational pull would remain the same, as long as no mass was lost.

Finally, let's consider the topic of tides. The formula for gravity says that the force due to gravity depends on mass and distance. The Moon has mass, and it is some finite distance from the earth, so the earth feels its gravity. But consider: the side of the earth which faces the Moon is about 4000 miles closer to the Moon than the center of the earth.
Therefore, it feels a greater gravitational force, and material on this near side of the earth is actually pulled away from the earth's center. Similarly, the center of the earth is about 4000 miles closer to the Moon than the side facing away from the Moon. So the center of the earth is pulled away from the material on the earth's far side. The result is that the earth becomes elongated, with tidal bulges on the sides toward and away from the Moon.

In fact, the Moon has relatively little mass and is moderately far away, so its ``tidal force'' isn't great enough to cause the rocks on earth to overcome friction. However, water can move much more easily. The result is that the water is continually pulled toward (and away from) the Moon. These are the tides. If you are on the side of the earth facing the Moon (or away from the Moon), that's where the water is, and you have high tide. If you are in between, you have low tide.

Any object with finite size can be affected by tides. If you do the math, you will find that the ``tidal'' force, i.e., the difference between the pull on one side of an object and that on the other, goes as F(tide) ~ G M s / r^3 , where M is the mass of the body causing the tides, r is the distance to that body, and s is the size of the body being affected. So, if the Moon were twice as far away as it is now, the earth's tides would be 2 x 2 x 2 = 8 times less.

The Sun's gravity also pulls the earth, so it also causes tides. The Sun is about 30,000,000 times more massive than the Moon, but it is also 400 times further away. Accordingly, its gravitational force on the earth is almost 200 times greater than that of the Moon, but its tides are only half as big. When the Sun and Moon line up, so that their tidal bulges are in the same direction, the tides are extra high and extra low. These are called spring tides. When the Moon and Sun are at right angles, their effects partially cancel, and we get neap tides.
http://www2.astro.psu.edu/users/caryl/a10/lec4_2d.html
Coordination Compounds are the backbone of modern inorganic and bio-inorganic chemistry and chemical industry. In the previous Unit we learnt that the transition metals form a large number of complex compounds in which the metal atoms are bound to a number of anions or neutral molecules. In modern terminology such compounds are called coordination compounds. The chemistry of coordination compounds is an important and challenging area of modern inorganic chemistry. New concepts of chemical bonding and molecular structure have provided insights into the functioning of vital components of biological systems. Chlorophyll, haemoglobin and vitamin B12 are coordination compounds of magnesium, iron and cobalt respectively. A variety of metallurgical processes, industrial catalysts and analytical reagents involve the use of coordination compounds. Coordination compounds also find many applications in electroplating, textile dyeing and medicinal chemistry.

9.1 Werner's Theory of Coordination Compounds

Alfred Werner (1866-1919), a Swiss chemist, was the first to formulate his ideas about the structures of coordination compounds. He prepared and characterised a large number of coordination compounds and studied their physical and chemical behaviour by simple experimental techniques. Werner proposed the concept of a primary valence and a secondary valence for a metal ion. Binary compounds such as CrCl3, CoCl2 or PdCl2 have primary valences of 3, 2 and 2 respectively. In a series of compounds of cobalt(III) chloride with ammonia, it was found that some of the chloride ions could be precipitated as AgCl on adding excess silver nitrate solution in cold but some remained in solution.

1 mol CoCl3.6NH3 (Yellow) gave 3 mol AgCl
1 mol CoCl3.5NH3 (Purple) gave 2 mol AgCl
1 mol CoCl3.4NH3 (Green) gave 1 mol AgCl
1 mol CoCl3.4NH3 (Violet) gave 1 mol AgCl

These observations, together with the results of conductivity measurements in solution, can be explained if (i) six groups in all, either chloride ions or ammonia molecules or both, remain bonded to the cobalt ion during the reaction and (ii) the compounds are formulated as shown in Table 9.1, where the atoms within the square brackets form a single entity which does not dissociate under the reaction conditions. Werner proposed the term secondary valence for the number of groups bound directly to the metal ion; in each of these examples the secondary valences are six.

|Colour||Formula||Solution conductivity corresponds to|
|Yellow||[Co(NH3)6]3+ 3Cl-||1:3 electrolyte|
|Purple||[CoCl(NH3)5]2+ 2Cl-||1:2 electrolyte|
|Green||[CoCl2(NH3)4]+ Cl-||1:1 electrolyte|
|Violet||[CoCl2(NH3)4]+ Cl-||1:1 electrolyte|

Note that the last two compounds in Table 9.1 have identical empirical formula, CoCl3.4NH3, but distinct properties. Such compounds are termed as isomers. Werner in 1898, propounded his theory of coordination compounds. The main postulates are:

1. In coordination compounds metals show two types of linkages (valences), primary and secondary.
2. The primary valences are normally ionisable and are satisfied by negative ions.
3. The secondary valences are non-ionisable. These are satisfied by neutral molecules or negative ions. The secondary valence is equal to the coordination number and is fixed for a metal.
4. The ions/groups bound by the secondary linkages to the metal have characteristic spatial arrangements corresponding to different coordination numbers. In modern formulations, such spatial arrangements are called coordination polyhedra. The species within the square bracket are coordination entities or complexes and the ions outside the square bracket are called counter ions.
Werner further postulated that octahedral, tetrahedral and square planar geometrical shapes are more common in coordination compounds of transition metals. Thus, [Co(NH3)6]3+, [CoCl(NH3)5]2+ and [CoCl2(NH3)4]+ are octahedral entities, while [Ni(CO)4] and [PtCl4]2− are tetrahedral and square planar, respectively.

Example 9.1
On the basis of the following observations made with aqueous solutions, assign secondary valences to metals in the following compounds:

|Formula||Moles of AgCl precipitated per mole of the compound with excess AgNO3|
|(i) PdCl2.4NH3||2|
|(ii) NiCl2.6H2O||2|
|(iii) PtCl4.2HCl||0|
|(iv) CoCl3.4NH3||1|
|(v) PtCl2.2NH3||0|

Solution:
(i) Secondary 4 (ii) Secondary 6 (iii) Secondary 6 (iv) Secondary 6 (v) Secondary 4

Difference between a double salt and a complex: Both double salts and complexes are formed by the combination of two or more stable compounds in stoichiometric ratio. However, they differ in the fact that double salts, such as carnallite, KCl.MgCl2.6H2O, Mohr's salt, FeSO4.(NH4)2SO4.6H2O, and potash alum, KAl(SO4)2.12H2O, dissociate completely into simple ions when dissolved in water, whereas complex ions, such as [Fe(CN)6]4− of K4[Fe(CN)6], do not dissociate into Fe2+ and CN− ions.

Werner was born on December 12, 1866, in Mülhouse, a small community in the French province of Alsace. His study of chemistry began in Karlsruhe (Germany) and continued in Zurich (Switzerland), where, in his doctoral thesis in 1890, he explained the difference in properties of certain nitrogen-containing organic substances on the basis of isomerism. He extended van't Hoff's theory of the tetrahedral carbon atom and modified it for nitrogen. Werner showed optical and electrical differences between complex compounds based on physical measurements. In fact, Werner was the first to discover optical activity in certain coordination compounds. At the age of 29, he became a full professor at the Technische Hochschule in Zurich in 1895. Alfred Werner was a chemist and educationist. His accomplishments included the development of the theory of coordination compounds. This theory, in which Werner proposed revolutionary ideas about how atoms and molecules are linked together, was formulated in a span of only three years, from 1890 to 1893. The remainder of his career was spent gathering the experimental support required to validate his new ideas. Werner became the first Swiss chemist to win the Nobel Prize, in 1913, for his work on the linkage of atoms and the coordination theory.

9.2 Definitions of Some Important Terms Pertaining to Coordination Compounds

(a) Coordination entity
A coordination entity constitutes a central metal atom or ion bonded to a fixed number of ions or molecules. For example, [CoCl3(NH3)3] is a coordination entity in which the cobalt ion is surrounded by three ammonia molecules and three chloride ions. Other examples are [Ni(CO)4], [PtCl2(NH3)2], [Fe(CN)6]4−, [Co(NH3)6]3+.

(b) Central atom/ion
In a coordination entity, the atom/ion to which a fixed number of ions/groups are bound in a definite geometrical arrangement around it is called the central atom or ion. For example, the central atoms/ions in the coordination entities [NiCl2(H2O)4], [CoCl(NH3)5]2+ and [Fe(CN)6]3– are Ni2+, Co3+ and Fe3+, respectively. These central atoms/ions are also referred to as Lewis acids.

(c) Ligands
The ions or molecules bound to the central atom/ion in the coordination entity are called ligands.
These may be simple ions such as Cl−, small molecules such as H2O or NH3, larger molecules such as H2NCH2CH2NH2 or N(CH2CH2NH2)3, or even macromolecules such as proteins.

When a ligand is bound to a metal ion through a single donor atom, as with Cl−, H2O or NH3, the ligand is said to be unidentate. When a ligand can bind through two donor atoms, as in H2NCH2CH2NH2 (ethane-1,2-diamine) or C2O42− (oxalate), the ligand is said to be didentate, and when several donor atoms are present in a single ligand, as in N(CH2CH2NH2)3, the ligand is said to be polydentate. Ethylenediaminetetraacetate ion (EDTA) is an important hexadentate ligand. It can bind through two nitrogen and four oxygen atoms to a central metal ion.

When a di- or polydentate ligand uses its two or more donor atoms to bind a single metal ion, it is said to be a chelate ligand. The number of such ligating groups is called the denticity of the ligand. Such complexes, called chelate complexes, tend to be more stable than similar complexes containing unidentate ligands (for reasons see Section 9.8). A ligand which can ligate through two different atoms is called an ambidentate ligand. Examples of such ligands are the NO2− and SCN− ions. The NO2− ion can coordinate either through nitrogen or through oxygen to a central metal atom/ion. Similarly, the SCN− ion can coordinate through the sulphur or the nitrogen atom.

(d) Coordination number
The coordination number (CN) of a metal ion in a complex can be defined as the number of ligand donor atoms to which the metal is directly bonded. For example, in the complex ions [PtCl6]2– and [Ni(NH3)4]2+, the coordination numbers of Pt and Ni are 6 and 4 respectively. Similarly, in the complex ions [Fe(C2O4)3]3– and [Co(en)3]3+, the coordination number of both Fe and Co is 6, because C2O42– and en (ethane-1,2-diamine) are didentate ligands. It is important to note here that the coordination number of the central atom/ion is determined only by the number of sigma bonds formed by the ligand with the central atom/ion. Pi bonds, if formed between the ligand and the central atom/ion, are not counted for this purpose.

(e) Coordination sphere
The central atom/ion and the ligands attached to it are enclosed in square brackets and collectively termed the coordination sphere. The ionisable groups are written outside the brackets and are called counter ions. For example, in the complex K4[Fe(CN)6], the coordination sphere is [Fe(CN)6]4– and the counter ion is K+.

(f) Coordination polyhedron
The spatial arrangement of the ligand atoms which are directly attached to the central atom/ion defines a coordination polyhedron about the central atom. The most common coordination polyhedra are octahedral, square planar and tetrahedral. For example, [Co(NH3)6]3+ is octahedral, [Ni(CO)4] is tetrahedral and [PtCl4]2− is square planar. Fig. 9.1 shows the shapes of different coordination polyhedra.

(g) Oxidation number of central atom
The oxidation number of the central atom in a complex is defined as the charge it would carry if all the ligands were removed along with the electron pairs that are shared with the central atom. The oxidation number is represented by a Roman numeral in parentheses following the name of the coordination entity. For example, the oxidation number of copper in [Cu(CN)4]3– is +1 and it is written as Cu(I).
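The charge bookkeeping in (g) is easy to mechanise. A small illustrative sketch (the helper name is ours): the oxidation number is the overall charge of the entity minus the sum of the ligand charges.

```python
# Oxidation number of the central atom = overall charge of the coordination
# entity minus the sum of the ligand charges.
def oxidation_number(entity_charge, ligand_charges):
    return entity_charge - sum(ligand_charges)

# [Cu(CN)4]3-  : four CN- ligands, overall charge -3  ->  Cu(I)
print(oxidation_number(-3, [-1] * 4))    # 1
# [Fe(CN)6]4-  : six CN- ligands, overall charge -4   ->  Fe(II)
print(oxidation_number(-4, [-1] * 6))    # 2
# [Co(NH3)6]3+ : six neutral NH3 ligands, charge +3   ->  Co(III)
print(oxidation_number(+3, [0] * 6))     # 3
```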
(h) Homoleptic and heteroleptic complexes
Complexes in which a metal is bound to only one kind of donor group, e.g., [Co(NH3)6]3+, are known as homoleptic. Complexes in which a metal is bound to more than one kind of donor group, e.g., [Co(NH3)4Cl2]+, are known as heteroleptic.

9.3 Nomenclature of Coordination Compounds

Nomenclature is important in coordination chemistry because of the need for an unambiguous method of describing formulas and writing systematic names, particularly when dealing with isomers. The formulas and names adopted for coordination entities are based on the recommendations of the International Union of Pure and Applied Chemistry (IUPAC).

9.3.1 Formulas of Mononuclear Coordination Entities

The formula of a compound is a shorthand tool used to provide basic information about the constitution of the compound in a concise and convenient manner. Mononuclear coordination entities contain a single central metal atom. The following rules are applied while writing the formulas:

(i) The central atom is listed first.
(ii) The ligands are then listed in alphabetical order. The placement of a ligand in the list does not depend on its charge.
(iii) Polydentate ligands are also listed alphabetically. In the case of an abbreviated ligand, the first letter of the abbreviation is used to determine the position of the ligand in the alphabetical order.
(iv) The formula for the entire coordination entity, whether charged or not, is enclosed in square brackets. When ligands are polyatomic, their formulas are enclosed in parentheses. Ligand abbreviations are also enclosed in parentheses.
(v) There should be no space between the ligands and the metal within a coordination sphere.
(vi) When the formula of a charged coordination entity is to be written without that of the counter ion, the charge is indicated outside the square brackets as a right superscript with the number before the sign. For example, [Co(CN)6]3−, [Cr(H2O)6]3+, etc.
(vii) The charge of the cation(s) is balanced by the charge of the anion(s).

9.3.2 Naming of Mononuclear Coordination Compounds

The names of coordination compounds are derived by following the principles of additive nomenclature. Thus, the groups that surround the central atom must be identified in the name. They are listed as prefixes to the name of the central atom, along with any appropriate multipliers. The following rules are used when naming coordination compounds:

(i) The cation is named first in both positively and negatively charged coordination entities.
(ii) The ligands are named in alphabetical order before the name of the central atom/ion. (This procedure is reversed from writing the formula.)
(iii) Names of the anionic ligands end in -o; those of neutral and cationic ligands are the same as the ligand name, except aqua for H2O, ammine for NH3, carbonyl for CO and nitrosyl for NO. These are placed within enclosing marks ( ).
(iv) Prefixes mono, di, tri, etc., are used to indicate the number of the individual ligands in the coordination entity. When the names of the ligands include a numerical prefix, the terms bis, tris and tetrakis are used, the ligand to which they refer being placed in parentheses. For example, [NiCl2(PPh3)2] is named dichlorobis(triphenylphosphine)nickel(II).
(v) The oxidation state of the metal in the cation, anion or neutral coordination entity is indicated by a Roman numeral in parentheses.
(vi) If the complex ion is a cation, the metal is named the same as the element. For example, Co in a complex cation is called cobalt and Pt is called platinum. If the complex ion is an anion, the name of the metal ends with the suffix -ate. For example, Co in the complex anion [Co(SCN)4]2− is called cobaltate. For some metals, the Latin names are used in the complex anions, e.g., ferrate for Fe.
(vii) The neutral complex molecule is named similarly to the complex cation.
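Rule (iv) above is the one that most often trips students up. Here is a tiny sketch of the two prefix families; the dictionaries are ours and cover only the multipliers named in the rule.

```python
# 'di/tri/...' for simple ligand names; 'bis/tris/tetrakis' when the ligand
# name itself already contains a numerical prefix (e.g. ethane-1,2-diamine).
simple = {1: "", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}
composite = {2: "bis", 3: "tris", 4: "tetrakis"}

def multiplied(n, ligand, has_internal_prefix=False):
    if has_internal_prefix and n > 1:
        return f"{composite[n]}({ligand})"
    return f"{simple[n]}{ligand}"

print(multiplied(2, "chlorido"))                      # dichlorido
print(multiplied(2, "triphenylphosphine", True))      # bis(triphenylphosphine)
print(multiplied(3, "ethane-1,2-diamine", True))      # tris(ethane-1,2-diamine)
```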
The following examples illustrate the nomenclature for coordination compounds.

1. [Cr(NH3)3(H2O)3]Cl3 is named as: triamminetriaquachromium(III) chloride
Explanation: The complex ion, inside the square brackets, is a cation. The ammine ligands are named before the aqua ligands according to alphabetical order. Since there are three chloride ions in the compound, the charge on the complex ion must be +3 (since the compound is electrically neutral). From the charge on the complex ion and the charge on the ligands, we can calculate the oxidation number of the metal. In this example, all the ligands are neutral molecules. Therefore, the oxidation number of chromium must be the same as the charge of the complex ion, +3.

2. [Co(H2NCH2CH2NH2)3]2(SO4)3 is named as: tris(ethane-1,2-diamine)cobalt(III) sulphate
Explanation: The sulphate is the counter anion in this molecule. Since it takes three sulphates to bond with two complex cations, the charge on each complex cation must be +3. Further, ethane-1,2-diamine is a neutral molecule, so the oxidation number of cobalt in the complex ion must be +3. Remember that you never have to indicate the number of cations and anions in the name of an ionic compound.

3. [Ag(NH3)2][Ag(CN)2] is named as: diamminesilver(I) dicyanoargentate(I)

Example 9.2
Write the formulas for the following coordination compounds:
(i) Tetraammineaquachloridocobalt(III) chloride
(ii) Potassium tetrahydroxozincate(II)
(iii) Potassium trioxalatoaluminate(III)

Example 9.3
Write the IUPAC names of the following coordination compounds:
(ii) Potassium trioxalatochromate(III)
(iii) Dichloridobis(ethane-1,2-diamine)cobalt(III) chloride
(iv) Pentaamminecarbonatocobalt(III) chloride
(v) Mercury tetrathiocyanatocobaltate(III)

9.1 Write the formulas for the following coordination compounds:
(i) Tetraamminediaquacobalt(III) chloride
(ii) Potassium tetracyanonickelate(II)
(iii) Tris(ethane-1,2-diamine)chromium(III) chloride
(v) Dichloridobis(ethane-1,2-diamine)platinum(IV) nitrate
(vi) Iron(III) hexacyanoferrate(II)
9.2 Write the IUPAC names of the following coordination compounds:

9.4 Isomerism in Coordination Compounds

Isomers are two or more compounds that have the same chemical formula but a different arrangement of atoms. Because of the different arrangement of atoms, they differ in one or more physical or chemical properties. Two principal types of isomerism are known among coordination compounds, each of which can be further subdivided:

(a) Stereoisomerism
(i) Geometrical isomerism
(ii) Optical isomerism

(b) Structural isomerism
(i) Linkage isomerism
(ii) Coordination isomerism
(iii) Ionisation isomerism
(iv) Solvate isomerism

Stereoisomers have the same chemical formula and chemical bonds, but different spatial arrangements. Structural isomers have different bonds. A detailed account of these isomers is given below.

9.4.1 Geometric Isomerism

This type of isomerism arises in heteroleptic complexes due to different possible geometric arrangements of the ligands. Important examples of this behaviour are found with coordination numbers 4 and 6. In a square planar complex of formula [MX2L2] (X and L are unidentate), the two ligands X may be arranged adjacent to each other in a cis isomer, or opposite to each other in a trans isomer, as depicted in Fig. 9.2.
Another square planar complex, of the type [MABXL] (where A, B, X and L are unidentate ligands), shows three isomers: two cis and one trans. You may attempt to draw these structures (a brute-force count of these isomers is sketched at the end of this section). Such isomerism is not possible for a tetrahedral geometry, but similar behaviour is possible in octahedral complexes of formula [MX2L4], in which the two ligands X may be oriented cis or trans to each other (Fig. 9.3).

Another type of geometrical isomerism occurs in octahedral coordination entities of the type [Ma3b3], like [Co(NH3)3(NO2)3]. If three donor atoms of the same ligands occupy adjacent positions at the corners of an octahedral face, we have the facial (fac) isomer. When the positions are around the meridian of the octahedron, we get the meridional (mer) isomer (Fig. 9.5).

Why is geometrical isomerism not possible in tetrahedral complexes having two different types of unidentate ligands coordinated with the central metal ion? Tetrahedral complexes do not show geometrical isomerism because the relative positions of the unidentate ligands attached to the central metal atom are the same with respect to each other.

9.4.2 Optical Isomerism

Optical isomers are mirror images that cannot be superimposed on one another. These are called enantiomers. The molecules or ions that cannot be superimposed are called chiral. The two forms are called dextro (d) and laevo (l), depending upon the direction in which they rotate the plane of polarised light in a polarimeter (d rotates to the right, l to the left). Optical isomerism is common in octahedral complexes involving didentate ligands (Fig. 9.6). In a coordination entity of the type [PtCl2(en)2]2+, only the cis-isomer shows optical activity (Fig. 9.7).

Out of the following two coordination entities, which is chiral (optically active)?
(a) cis-[CrCl2(ox)2]3− (b) trans-[CrCl2(ox)2]3−
Solution: Out of the two, (a) cis-[CrCl2(ox)2]3− is chiral (optically active).

9.4.3 Linkage Isomerism

Linkage isomerism arises in a coordination compound containing an ambidentate ligand. A simple example is provided by complexes containing the thiocyanate ligand, NCS–, which may bind through the nitrogen to give M–NCS or through sulphur to give M–SCN. Jørgensen discovered such behaviour in the complex [Co(NH3)5(NO2)]Cl2, which is obtained as the red form, in which the nitrite ligand is bound through oxygen (–ONO), and as the yellow form, in which the nitrite ligand is bound through nitrogen (–NO2).

9.4.4 Coordination Isomerism

This type of isomerism arises from the interchange of ligands between the cationic and anionic entities of different metal ions present in a complex. An example is provided by [Co(NH3)6][Cr(CN)6], in which the NH3 ligands are bound to Co3+ and the CN– ligands to Cr3+. In its coordination isomer, [Cr(NH3)6][Co(CN)6], the NH3 ligands are bound to Cr3+ and the CN– ligands to Co3+.

9.4.5 Ionisation Isomerism

This form of isomerism arises when the counter ion in a complex salt is itself a potential ligand and can displace a ligand, which can then become the counter ion. An example is provided by the ionisation isomers [Co(NH3)5SO4]Br and [Co(NH3)5Br]SO4.

9.4.6 Solvate Isomerism

This form of isomerism is known as 'hydrate isomerism' in the case where water is involved as the solvent. It is similar to ionisation isomerism. Solvate isomers differ in whether or not a solvent molecule is directly bonded to the metal ion or merely present as a free solvent molecule in the crystal lattice. An example is provided by the aqua complex [Cr(H2O)6]Cl3 (violet) and its solvate isomer [Cr(H2O)5Cl]Cl2.H2O (grey-green).
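As promised above, the claim that a square planar [MABXL] complex has exactly three geometrical isomers can be checked by brute force: place the four ligands on the corners of a square and identify arrangements related by the square's eight rotations and reflections. This sketch is only a counting aid, not part of the original text.

```python
from itertools import permutations

# Positions 0-3 label the corners of the square in cyclic order.
rotations = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]
reflections = [(1, 0, 3, 2), (3, 2, 1, 0), (0, 3, 2, 1), (2, 1, 0, 3)]
symmetries = rotations + reflections          # the 8 symmetries of a square

def canonical(arrangement):
    """Smallest relabelling of an arrangement under all square symmetries."""
    return min(tuple(arrangement[i] for i in s) for s in symmetries)

distinct = {canonical(p) for p in permutations("ABXL")}
print(len(distinct))                          # 3 -- two cis and one trans
```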
9.3 Indicate the types of isomerism exhibited by the following complexes and draw the structures for these isomers:
9.4 Give evidence that [Co(NH3)5Cl]SO4 and [Co(NH3)5SO4]Cl are ionisation isomers.

9.5 Bonding in Coordination Compounds

Werner was the first to describe the bonding features in coordination compounds. But his theory could not answer basic questions such as:
(i) Why do only certain elements possess the remarkable property of forming coordination compounds?
(ii) Why do the bonds in coordination compounds have directional properties?
(iii) Why do coordination compounds have characteristic magnetic and optical properties?

Many approaches have been put forth to explain the nature of bonding in coordination compounds, viz. Valence Bond Theory (VBT), Crystal Field Theory (CFT), Ligand Field Theory (LFT) and Molecular Orbital Theory (MOT). We shall focus our attention on an elementary treatment of the application of VBT and CFT to coordination compounds.

9.5.1 Valence Bond Theory

According to this theory, the metal atom or ion, under the influence of ligands, can use its (n-1)d, ns, np or ns, np, nd orbitals for hybridisation to yield a set of equivalent orbitals of definite geometry such as octahedral, tetrahedral, square planar and so on (Table 9.2). These hybridised orbitals are allowed to overlap with ligand orbitals that can donate electron pairs for bonding. This is illustrated by the following examples.

|Coordination number||Type of hybridisation||Distribution of hybrid orbitals in space|
|4||sp3||tetrahedral|
|4||dsp2||square planar|
|5||sp3d||trigonal bipyramidal|
|6||d2sp3||octahedral|
|6||sp3d2||octahedral|

It is usually possible to predict the geometry of a complex from a knowledge of its magnetic behaviour on the basis of the valence bond theory. In the diamagnetic octahedral complex [Co(NH3)6]3+, the cobalt ion is in the +3 oxidation state and has the electronic configuration 3d6. The hybridisation scheme is as shown in the diagram. Six pairs of electrons, one from each NH3 molecule, occupy the six hybrid orbitals. Thus, the complex has octahedral geometry and is diamagnetic because of the absence of unpaired electrons. Since inner d orbitals (3d) are used in the hybridisation, the complex [Co(NH3)6]3+ is called an inner orbital or low spin or spin paired complex. The paramagnetic octahedral complex [CoF6]3− uses outer orbitals (4d) in the hybridisation (sp3d2). It is thus called an outer orbital or high spin or spin free complex.

In tetrahedral complexes, one s and three p orbitals are hybridised to form four equivalent orbitals oriented tetrahedrally. This is illustrated below for [NiCl4]2−. Here nickel is in the +2 oxidation state and the ion has the electronic configuration 3d8. The hybridisation scheme is as shown in the diagram. Each Cl− ion donates a pair of electrons. The compound is paramagnetic since it contains two unpaired electrons. Similarly, [Ni(CO)4] has tetrahedral geometry but is diamagnetic, since nickel is in the zero oxidation state and contains no unpaired electrons.

In square planar complexes, the hybridisation involved is dsp2. An example is [Ni(CN)4]2–. Here nickel is in the +2 oxidation state and has the electronic configuration 3d8. The hybridisation scheme is as shown in the diagram. Each of the hybridised orbitals receives a pair of electrons from a cyanide ion. The compound is diamagnetic, as evident from the absence of unpaired electrons.

It is important to note that the hybrid orbitals do not actually exist. In fact, hybridisation is a mathematical manipulation of the wave equations for the atomic orbitals involved.
9.5.2 Magnetic Properties of Coordination Compounds

The magnetic moment of coordination compounds can be measured by magnetic susceptibility experiments. The results can be used to obtain information about the structures adopted by metal complexes.

A critical study of the magnetic data of coordination compounds of metals of the first transition series reveals some complications. For metal ions with up to three electrons in the d orbitals, like Ti3+ (d1), V3+ (d2) and Cr3+ (d3), two vacant d orbitals are available for octahedral hybridisation with the 4s and 4p orbitals. The magnetic behaviour of these free ions and their coordination entities is similar. When more than three 3d electrons are present, the required pair of 3d orbitals for octahedral hybridisation is not directly available (as a consequence of Hund's rule). Thus, for the d4 (Cr2+, Mn3+), d5 (Mn2+, Fe3+) and d6 (Fe2+, Co3+) cases, a vacant pair of d orbitals results only by pairing of 3d electrons, which leaves two, one and zero unpaired electrons, respectively.

The magnetic data agree with maximum spin pairing in many cases, especially with coordination compounds containing d6 ions. However, with species containing d4 and d5 ions there are complications. [Mn(CN)6]3– has a magnetic moment corresponding to two unpaired electrons, while [MnCl6]3– has a paramagnetic moment corresponding to four unpaired electrons. [Fe(CN)6]3– has a magnetic moment corresponding to a single unpaired electron, while [FeF6]3– has a paramagnetic moment corresponding to five unpaired electrons. [CoF6]3– is paramagnetic with four unpaired electrons, while [Co(C2O4)3]3− is diamagnetic. This apparent anomaly is explained by valence bond theory in terms of the formation of inner orbital and outer orbital coordination entities. [Mn(CN)6]3–, [Fe(CN)6]3– and [Co(C2O4)3]3– are inner orbital complexes involving d2sp3 hybridisation; the former two complexes are paramagnetic and the latter diamagnetic. On the other hand, [MnCl6]3–, [FeF6]3– and [CoF6]3– are outer orbital complexes involving sp3d2 hybridisation and are paramagnetic, corresponding to four, five and four unpaired electrons.

The spin-only magnetic moment of [MnBr4]2– is 5.9 BM. Predict the geometry of the complex ion.
Solution: Since the coordination number of the Mn2+ ion in the complex ion is 4, it will be either tetrahedral (sp3 hybridisation) or square planar (dsp2 hybridisation). Since the magnetic moment of 5.9 BM corresponds to the presence of five unpaired electrons in the d orbitals, the complex ion should be tetrahedral rather than square planar.
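The spin-only moments quoted in this section all come from the standard formula μ = √(n(n+2)) BM, where n is the number of unpaired electrons (the formula itself is not spelled out in the text above); a quick check in Python:

```python
from math import sqrt

def spin_only_moment(n_unpaired):
    """Spin-only magnetic moment in Bohr magnetons: mu = sqrt(n(n+2))."""
    return sqrt(n_unpaired * (n_unpaired + 2))

for n in range(6):
    print(n, f"{spin_only_moment(n):.2f} BM")
# n = 5 gives 5.92 BM, matching the [MnBr4]2- example above;
# n = 1 gives 1.73 BM, as for [Fe(CN)6]3-.
```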
9.5.3 Limitations of Valence Bond Theory

While the VB theory, to a large extent, explains the formation, structures and magnetic behaviour of coordination compounds, it suffers from the following shortcomings:
(i) It involves a number of assumptions.
(ii) It does not give a quantitative interpretation of magnetic data.
(iii) It does not explain the colour exhibited by coordination compounds.
(iv) It does not give a quantitative interpretation of the thermodynamic or kinetic stabilities of coordination compounds.
(v) It does not make exact predictions regarding the tetrahedral and square planar structures of 4-coordinate complexes.
(vi) It does not distinguish between weak and strong ligands.

9.5.4 Crystal Field Theory

The crystal field theory (CFT) is an electrostatic model which considers the metal-ligand bond to be ionic, arising purely from electrostatic interactions between the metal ion and the ligand. Ligands are treated as point charges in the case of anions, or as dipoles in the case of neutral molecules. The five d orbitals in an isolated gaseous metal atom/ion have the same energy, i.e., they are degenerate. This degeneracy is maintained if a spherically symmetrical field of negative charges surrounds the metal atom/ion. However, when this negative field is due to ligands (either anions or the negative ends of dipolar molecules like NH3 and H2O) in a complex, it becomes asymmetrical and the degeneracy of the d orbitals is lifted. This results in splitting of the d orbitals. The pattern of splitting depends upon the nature of the crystal field. Let us examine this splitting in different crystal fields.

(a) Crystal field splitting in octahedral coordination entities

In an octahedral coordination entity with six ligands surrounding the metal atom/ion, there will be repulsion between the electrons in the metal d orbitals and the electrons (or negative charges) of the ligands. Such repulsion is greater when the metal d orbital is directed towards the ligand than when it is directed away from the ligand. Thus, the d(x2−y2) and d(z2) orbitals, which point towards the axes along the direction of the ligands, will experience more repulsion and be raised in energy, while the dxy, dyz and dxz orbitals, which are directed between the axes, will be lowered in energy relative to the average energy in the spherical crystal field. The degeneracy of the d orbitals is thus removed by ligand electron-metal electron repulsions in the octahedral complex, yielding three orbitals of lower energy, the t2g set, and two orbitals of higher energy, the eg set. This splitting of the degenerate levels due to the presence of ligands in a definite geometry is termed crystal field splitting, and the energy separation is denoted Δo (the subscript o is for octahedral) (Fig. 9.8). Thus, the energy of the two eg orbitals will increase by (3/5)Δo and that of the three t2g orbitals will decrease by (2/5)Δo.

The crystal field splitting, Δo, depends upon the field produced by the ligand and the charge on the metal ion. Some ligands are able to produce strong fields, in which case the splitting will be large, whereas others produce weak fields and consequently result in small splitting of the d orbitals. In general, ligands can be arranged in a series in the order of increasing field strength as given below:

I– < Br– < SCN– < Cl– < S2– < F– < OH– < C2O42– < H2O < NCS– < edta4– < NH3 < en < CN– < CO

Such a series is termed the spectrochemical series. It is an experimentally determined series, based on the absorption of light by complexes with different ligands.

Let us assign electrons to the d orbitals of the metal ion in octahedral coordination entities. Obviously, in a d1 system the single d electron occupies one of the lower energy t2g orbitals. In d2 and d3 coordination entities, the d electrons occupy the t2g orbitals singly, in accordance with Hund's rule. For d4 ions, two possible patterns of electron distribution arise: the fourth electron could either enter the t2g level and pair with an existing electron, or it could avoid paying the price of the pairing energy by occupying the eg level. Which of these possibilities occurs depends on the relative magnitudes of the crystal field splitting, Δo, and the pairing energy, P (P represents the energy required for electron pairing in a single orbital). The two options are:

(i) If Δo < P, the fourth electron enters one of the eg orbitals, giving the configuration t2g3 eg1. Ligands for which Δo < P are known as weak field ligands and form high spin complexes.
(ii) If Δo > P, it becomes more energetically favourable for the fourth electron to occupy a t2g orbital, giving the configuration t2g4 eg0. Ligands which produce this effect are known as strong field ligands and form low spin complexes.

Calculations show that d4 to d7 coordination entities are more stable for strong field than for weak field cases.
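The (3/5)Δo and (2/5)Δo shifts above give a simple crystal field stabilisation energy (CFSE) bookkeeping, sketched here in units of Δo. The sign convention (stabilisation negative) is a common one, and the helper is ours, not part of the text:

```python
# CFSE in an octahedral field, in units of Delta_o:
# each t2g electron contributes -2/5, each eg electron +3/5.
def cfse_octahedral(n_t2g, n_eg):
    return -0.4 * n_t2g + 0.6 * n_eg

print(cfse_octahedral(1, 0))   # d1, e.g. Ti3+:      -0.4 Delta_o
print(cfse_octahedral(3, 0))   # d3, e.g. Cr3+:      -1.2 Delta_o
print(cfse_octahedral(3, 1))   # high-spin d4 (weak field):  -0.6 Delta_o
print(cfse_octahedral(4, 0))   # low-spin d4 (strong field): -1.6 Delta_o,
                               # at the cost of one pairing energy P
```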
(b) Crystal field splitting in tetrahedral coordination entities

In tetrahedral coordination entity formation, the d orbital splitting (Fig. 9.9) is inverted and is smaller than the octahedral field splitting. For the same metal, the same ligands and the same metal-ligand distances, it can be shown that Δt = (4/9)Δo. Consequently, the orbital splitting energies are not sufficiently large to force pairing and, therefore, low spin configurations are rarely observed.

9.5.5 Colour in Coordination Compounds

In the previous Unit, we learnt that one of the most distinctive properties of transition metal complexes is their wide range of colours. This means that some of the visible spectrum is being removed from white light as it passes through the sample, so the light that emerges is no longer white. The colour of the complex is complementary to that which is absorbed. The complementary colour is the colour generated from the wavelengths left over; if green light is absorbed by the complex, it appears red. Table 9.3 gives the relationship between the wavelengths absorbed and the colours observed.

The colour in coordination compounds can be readily explained in terms of crystal field theory. Consider, for example, the complex [Ti(H2O)6]3+, which is violet in colour. This is an octahedral complex where the single electron (Ti3+ is a 3d1 system) in the metal d orbital is in the t2g level in the ground state of the complex. The next higher state available for the electron is the empty eg level. If light corresponding to the energy of the yellow-green region is absorbed by the complex, it would excite the electron from the t2g level to the eg level (t2g1 eg0 → t2g0 eg1). Consequently, the complex appears violet in colour (Fig. 9.10). The crystal field theory attributes the colour of coordination compounds to d-d transitions of the electron.

It is important to note that in the absence of ligands, crystal field splitting does not occur and hence the substance is colourless. For example, removal of water from [Ti(H2O)6]Cl3 on heating renders it colourless. Similarly, anhydrous CuSO4 is white, but CuSO4.5H2O is blue in colour.

The influence of the ligand on the colour of a complex may be illustrated by considering the [Ni(H2O)6]2+ complex, which forms when nickel(II) chloride is dissolved in water. If the didentate ligand ethane-1,2-diamine (en) is progressively added in the molar ratios en:Ni of 1:1, 2:1 and 3:1, the following series of reactions and their associated colour changes occur:

[Ni(H2O)6]2+ (aq) + en (aq) = [Ni(H2O)4(en)]2+ (aq) + 2H2O
(green) (pale blue)
[Ni(H2O)4(en)]2+ (aq) + en (aq) = [Ni(H2O)2(en)2]2+ (aq) + 2H2O
[Ni(H2O)2(en)2]2+ (aq) + en (aq) = [Ni(en)3]2+ (aq) + 2H2O

This sequence is shown in Fig. 9.11.
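The link between Δo and colour is just unit conversion. The sketch below assumes, for illustration only, that the [Ti(H2O)6]3+ absorption maximum sits near 500 nm in the yellow-green region; that exact value is not given in the text.

```python
# Convert an absorbed wavelength to a crystal field splitting,
# assuming the absorption maximum corresponds directly to Delta_o.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
N_A = 6.022e23     # Avogadro constant, 1/mol

wavelength = 500e-9                      # assumed absorption maximum, m
delta_o_per_ion = h * c / wavelength     # energy per ion, J

print(f"{1 / (wavelength * 100):.0f} cm^-1")             # ~20000 cm^-1
print(f"{delta_o_per_ion * N_A / 1000:.0f} kJ/mol")      # ~240 kJ/mol
```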
The colours produced by electronic transitions within the d orbitals of a transition metal ion occur frequently in everyday life. Ruby [Fig. 9.12(a)] is aluminium oxide (Al2O3) containing about 0.5-1% Cr3+ ions (d3), which are randomly distributed in positions normally occupied by Al3+. We may view these chromium(III) species as octahedral chromium(III) complexes incorporated into the alumina lattice; d-d transitions at these centres give rise to the colour. In emerald [Fig. 9.12(b)], Cr3+ ions occupy octahedral sites in the mineral beryl (Be3Al2Si6O18). The absorption bands seen in ruby shift to longer wavelengths, namely yellow-red and blue, causing emerald to transmit light in the green region.

9.5.6 Limitations of Crystal Field Theory

The crystal field model is successful in explaining the formation, structures, colour and magnetic properties of coordination compounds to a large extent. However, from the assumption that the ligands are point charges, it follows that anionic ligands should exert the greatest splitting effect; the anionic ligands are actually found at the low end of the spectrochemical series. Further, it does not take into account the covalent character of the bonding between the ligand and the central atom. These are some of the weaknesses of CFT, which are addressed by ligand field theory (LFT) and molecular orbital theory, both of which are beyond the scope of the present study.

9.5 Explain, on the basis of valence bond theory, why the [Ni(CN)4]2− ion with square planar structure is diamagnetic while the [NiCl4]2− ion with tetrahedral geometry is paramagnetic.
9.6 [NiCl4]2− is paramagnetic while [Ni(CO)4] is diamagnetic, though both are tetrahedral. Why?
9.7 [Fe(H2O)6]3+ is strongly paramagnetic whereas [Fe(CN)6]3− is weakly paramagnetic. Explain.
9.8 Explain why [Co(NH3)6]3+ is an inner orbital complex whereas [Ni(NH3)6]2+ is an outer orbital complex.
9.9 Predict the number of unpaired electrons in the square planar [Pt(CN)4]2− ion.
9.10 The hexaaquamanganese(II) ion contains five unpaired electrons, while the hexacyano ion contains only one unpaired electron. Explain using crystal field theory.

9.6 Bonding in Metal Carbonyls

The homoleptic carbonyls (compounds containing carbonyl ligands only) are formed by most of the transition metals. These carbonyls have simple, well defined structures. Tetracarbonylnickel(0) is tetrahedral, pentacarbonyliron(0) is trigonal bipyramidal, while hexacarbonylchromium(0) is octahedral. Decacarbonyldimanganese(0) is made up of two square pyramidal Mn(CO)5 units joined by a Mn–Mn bond. Octacarbonyldicobalt(0) has a Co–Co bond bridged by two CO groups (Fig. 9.13).

The metal-carbon bond in metal carbonyls possesses both σ and π character. The M–C σ bond is formed by the donation of a lone pair of electrons on the carbonyl carbon into a vacant orbital of the metal. The M–C π bond is formed by the donation of a pair of electrons from a filled d orbital of the metal into the vacant antibonding π* orbital of carbon monoxide. The metal-to-ligand bonding creates a synergic effect which strengthens the bond between CO and the metal (Fig. 9.14).

9.7 Stability of Coordination Compounds

The stability of a complex in solution refers to the degree of association between the two species involved in the state of equilibrium. The magnitude of the (stability or formation) equilibrium constant for the association quantitatively expresses the stability. Thus, if we have a reaction of the type

M + 4L ⇌ ML4

then the larger the stability constant, the higher the proportion of ML4 that exists in solution. Free metal ions rarely exist in solution, so M will usually be surrounded by solvent molecules, which compete with the ligand molecules, L, and are successively replaced by them.
For simplicity, we generally ignore these solvent molecules and write the four stability constants as follows:

M + L ⇌ ML      K1 = [ML]/([M][L])
ML + L ⇌ ML2    K2 = [ML2]/([ML][L])
ML2 + L ⇌ ML3   K3 = [ML3]/([ML2][L])
ML3 + L ⇌ ML4   K4 = [ML4]/([ML3][L])

where K1, K2, etc., are referred to as stepwise stability constants. Alternatively, we can write the overall stability constant thus:

M + 4L ⇌ ML4    β4 = [ML4]/([M][L]4)

The stepwise and overall stability constants are therefore related as follows:

β4 = K1 × K2 × K3 × K4

or, more generally,

βn = K1 × K2 × K3 × … × Kn

If we take as an example the steps involved in the formation of the cuprammonium ion, we have the following:

Cu2+ + NH3 ⇌ [Cu(NH3)]2+          K1 = [[Cu(NH3)]2+]/([Cu2+][NH3])
[Cu(NH3)]2+ + NH3 ⇌ [Cu(NH3)2]2+  K2 = [[Cu(NH3)2]2+]/([[Cu(NH3)]2+][NH3])

and so on, where K1, K2, … are the stepwise stability constants. The overall stability constant is

β4 = [[Cu(NH3)4]2+]/([Cu2+][NH3]4)

The addition of the four ammine groups to copper shows a pattern found for most formation constants: the successive stability constants decrease. In this case, the four constants are:

log K1 = 4.0, log K2 = 3.2, log K3 = 2.7, log K4 = 2.0, or log β4 = 11.9

The instability constant or dissociation constant of a coordination compound is defined as the reciprocal of the formation constant.

9.11 Calculate the overall complex dissociation equilibrium constant for the [Cu(NH3)4]2+ ion, given that β4 for this complex is 2.1 × 1013.
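Both numerical statements above are one-liners to verify: log β4 is the sum of the stepwise log K values, and the instability constant is the reciprocal of β4 (the 2.1 × 10^13 figure is the one quoted in intext question 9.11).

```python
# Cuprammonium stepwise constants quoted in the text.
log_K = [4.0, 3.2, 2.7, 2.0]
log_beta4 = sum(log_K)
print(log_beta4)                  # 11.9
print(f"{10 ** log_beta4:.1e}")   # beta4 ~ 7.9e11

# Intext question 9.11: dissociation constant = 1 / beta4.
beta4 = 2.1e13
print(f"{1 / beta4:.1e}")         # 4.8e-14, i.e. ~4.7e-14 as in the answer key
```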
9.8 Importance and Applications of Coordination Compounds

Coordination compounds are of great importance. These compounds are widely present in the mineral, plant and animal worlds and are known to play many important functions in the areas of analytical chemistry, metallurgy, biological systems, industry and medicine. These are described below:

• Coordination compounds find use in many qualitative and quantitative chemical analyses. The familiar colour reactions given by metal ions with a number of ligands (especially chelating ligands), as a result of the formation of coordination entities, form the basis for their detection and estimation by classical and instrumental methods of analysis. Examples of such reagents include EDTA, DMG (dimethylglyoxime), α-nitroso-β-naphthol, cupron, etc.
• The hardness of water is estimated by simple titration with Na2EDTA. The Ca2+ and Mg2+ ions form stable complexes with EDTA. The selective estimation of these ions can be done owing to the difference in the stability constants of the calcium and magnesium complexes.
• Some important extraction processes of metals, like those of silver and gold, make use of complex formation. Gold, for example, combines with cyanide in the presence of oxygen and water to form the coordination entity [Au(CN)2]− in aqueous solution. Gold can be separated in metallic form from this solution by the addition of zinc (Unit 6).
• Similarly, purification of metals can be achieved through the formation and subsequent decomposition of their coordination compounds. For example, impure nickel is converted to [Ni(CO)4], which is decomposed to yield pure nickel.
• Coordination compounds are of great importance in biological systems. The pigment responsible for photosynthesis, chlorophyll, is a coordination compound of magnesium. Haemoglobin, the red pigment of blood which acts as an oxygen carrier, is a coordination compound of iron. Vitamin B12, cyanocobalamine, the anti-pernicious anaemia factor, is a coordination compound of cobalt. Among the other compounds of biological importance with coordinated metal ions are enzymes like carboxypeptidase A and carbonic anhydrase (catalysts of biological systems).
• Coordination compounds are used as catalysts for many industrial processes. An example is the rhodium complex [(Ph3P)3RhCl], Wilkinson's catalyst, which is used for the hydrogenation of alkenes.
• Articles can be electroplated with silver and gold much more smoothly and evenly from solutions of the complexes [Ag(CN)2]– and [Au(CN)2]− than from solutions of the simple metal ions.
• In black and white photography, the developed film is fixed by washing with hypo solution, which dissolves the undecomposed AgBr to form the complex ion [Ag(S2O3)2]3−.
• There is growing interest in the use of chelate therapy in medicinal chemistry. An example is the treatment of problems caused by the presence of metals in toxic proportions in plant/animal systems. Thus, excess copper and iron are removed by the chelating ligands D-penicillamine and desferrioxamine B via the formation of coordination compounds. EDTA is used in the treatment of lead poisoning. Some coordination compounds of platinum effectively inhibit the growth of tumours; examples are cis-platin and related compounds.

The chemistry of coordination compounds is an important and challenging area of modern inorganic chemistry. During the last fifty years, advances in this area have provided the development of new concepts and models of bonding and molecular structure, novel breakthroughs in the chemical industry and vital insights into the functioning of critical components of biological systems.

The first systematic attempt at explaining the formation, reactions, structure and bonding of coordination compounds was made by A. Werner. His theory postulated the use of two types of linkages (primary and secondary) by a metal atom/ion in a coordination compound. In the modern language of chemistry, these linkages are recognised as the ionisable (ionic) and non-ionisable (covalent) bonds, respectively. Using the property of isomerism, Werner predicted the geometrical shapes of a large number of coordination entities.

The Valence Bond Theory (VBT) explains, with reasonable success, the formation, magnetic behaviour and geometrical shapes of coordination compounds. It, however, fails to provide a quantitative interpretation of magnetic behaviour and has nothing to say about the optical properties of these compounds.

The Crystal Field Theory (CFT), as applied to coordination compounds, is based on the effect of different crystal fields (provided by the ligands taken as point charges) on the degeneracy of the d orbital energies of the central metal atom/ion. The splitting of the d orbitals provides different electronic arrangements in strong and weak crystal fields. The treatment provides for quantitative estimations of orbital separation energies, magnetic moments and spectral and stability parameters. However, the assumption that ligands constitute point charges creates many theoretical difficulties.

The metal-carbon bond in metal carbonyls possesses both σ and π character. The ligand-to-metal bond is a σ bond and the metal-to-ligand bond is a π bond. This unique synergic bonding provides stability to metal carbonyls.

The stability of coordination compounds is measured in terms of stepwise stability (or formation) constants (K) or the overall stability constant (β). The stabilisation of a coordination compound due to chelation is called the chelate effect.
The stability of coordination compounds is related to Gibbs energy, enthalpy and entropy terms. Coordination compounds are of great importance. These compounds provide critical insights into the functioning and structures of vital components of biological systems. Coordination compounds also find extensive applications in metallurgical processes and in analytical and medicinal chemistry.

9.1 Explain the bonding in coordination compounds in terms of Werner's postulates.
9.2 FeSO4 solution mixed with (NH4)2SO4 solution in 1:1 molar ratio gives the test of the Fe2+ ion, but CuSO4 solution mixed with aqueous ammonia in 1:4 molar ratio does not give the test of the Cu2+ ion. Explain why.
9.3 Explain, with two examples each, the following: coordination entity, ligand, coordination number, coordination polyhedron, homoleptic and heteroleptic.
9.4 What is meant by unidentate, didentate and ambidentate ligands? Give two examples of each.
9.5 Specify the oxidation numbers of the metals in the following coordination entities:
9.6 Using IUPAC norms, write the formulas for the following:
(ii) Potassium tetrachloridopalladate(II)
(iv) Potassium tetracyanonickelate(II)
(vi) Hexaamminecobalt(III) sulphate
(vii) Potassium tri(oxalato)chromate(III)
9.7 Using IUPAC norms, write the systematic names of the following:
9.8 List the various types of isomerism possible for coordination compounds, giving an example of each.
9.9 How many geometrical isomers are possible in the following coordination entities?
9.10 Draw the structures of the optical isomers of:
9.11 Draw all the isomers (geometrical and optical) of:
9.12 Write all the geometrical isomers of [Pt(NH3)(Br)(Cl)(py)] and state how many of these will exhibit optical isomerism.
9.13 Aqueous copper sulphate solution (blue in colour) gives: (i) a green precipitate with aqueous potassium fluoride and (ii) a bright green solution with aqueous potassium chloride. Explain these experimental results.
9.14 What is the coordination entity formed when excess aqueous KCN is added to an aqueous solution of copper sulphate? Why is it that no precipitate of copper sulphide is obtained when H2S(g) is passed through this solution?
9.15 Discuss the nature of bonding in the following coordination entities on the basis of valence bond theory:
9.16 Draw a figure to show the splitting of d orbitals in an octahedral crystal field.
9.17 What is the spectrochemical series? Explain the difference between a weak field ligand and a strong field ligand.
9.18 What is crystal field splitting energy? How does the magnitude of Δo decide the actual configuration of d orbitals in a coordination entity?
9.19 [Cr(NH3)6]3+ is paramagnetic while [Ni(CN)4]2− is diamagnetic. Explain why.
9.20 A solution of [Ni(H2O)6]2+ is green but a solution of [Ni(CN)4]2− is colourless. Explain.
9.21 [Fe(CN)6]4− and [Fe(H2O)6]2+ are of different colours in dilute solutions. Why?
9.22 Discuss the nature of bonding in metal carbonyls.
9.23 Give the oxidation state, d orbital occupation and coordination number of the central metal ion in the following complexes:
9.24 Write down the IUPAC name for each of the following complexes and indicate the oxidation state, electronic configuration and coordination number. Also give the stereochemistry and magnetic moment of the complex:
9.25 What is meant by the stability of a coordination compound in solution? State the factors which govern the stability of complexes.
9.26 What is meant by the chelate effect? Give an example.
9.27 Discuss briefly, giving an example in each case, the role of coordination compounds in: (i) biological systems, (ii) medicinal chemistry, (iii) analytical chemistry and (iv) extraction/metallurgy of metals.
9.28 How many ions are produced from the complex Co(NH3)6Cl2 in solution?
9.29 Amongst the following ions, which one has the highest magnetic moment value?
9.30 The oxidation number of cobalt in K[Co(CO)4] is
9.31 Amongst the following, the most stable complex is
9.32 What will be the correct order for the wavelengths of absorption in the visible region for the following: [Ni(NO2)6]4−, [Ni(NH3)6]2+, [Ni(H2O)6]2+?

Answers to Some Intext Questions

9.1 (i) [Co(NH3)4(H2O)2]Cl3
9.2 (i) Hexaamminecobalt(III) chloride
9.3 (i) Both geometrical (cis-, trans-) isomers exist, and the cis form has optical isomers.
(ii) Two optical isomers can exist.
(iii) There are 10 possible isomers. (Hint: geometrical, ionisation and linkage isomers are possible.)
(iv) Geometrical (cis-, trans-) isomers can exist.
9.4 The ionisation isomers dissolve in water to yield different ions and thus react differently with various reagents:
[Co(NH3)5Br]SO4 + Ba2+ → BaSO4(s)
[Co(NH3)5SO4]Br + Ba2+ → no reaction
[Co(NH3)5Br]SO4 + Ag+ → no reaction
[Co(NH3)5SO4]Br + Ag+ → AgBr(s)
9.6 In [Ni(CO)4], Ni is in the zero oxidation state, whereas in [NiCl4]2− it is in the +2 oxidation state. In the presence of the CO ligand, the unpaired d electrons of Ni pair up, but Cl−, being a weak ligand, is unable to pair up the unpaired electrons.
9.7 In the presence of CN− (a strong ligand), the 3d electrons pair up, leaving only one unpaired electron. The hybridisation is d2sp3, forming an inner orbital complex. In the presence of H2O (a weak ligand), the 3d electrons do not pair up. The hybridisation is sp3d2, forming an outer orbital complex containing five unpaired electrons; it is strongly paramagnetic.
9.8 In the presence of NH3, the 3d electrons pair up, leaving two d orbitals empty to be involved in d2sp3 hybridisation, forming an inner orbital complex in the case of [Co(NH3)6]3+. In [Ni(NH3)6]2+, Ni is in the +2 oxidation state and has a d8 configuration; the hybridisation involved is sp3d2, forming an outer orbital complex.
9.9 For a square planar shape, the hybridisation is dsp2. Hence the unpaired electrons in the 5d orbitals pair up to make one d orbital empty for dsp2 hybridisation. Thus there is no unpaired electron.
9.11 The overall dissociation constant is the reciprocal of the overall stability constant, i.e. 1/β4 = 4.7 × 10−14.

I. Multiple Choice Questions (Type-I)

1. Which of the following complexes formed by Cu2+ ions is most stable?
2. The colour of coordination compounds depends on the crystal field splitting. What will be the correct order of absorption of wavelength of light in the visible region for the complexes [Co(NH3)6]3+, [Co(CN)6]3– and [Co(H2O)6]3+?
(i) [Co(CN)6]3– > [Co(NH3)6]3+ > [Co(H2O)6]3+
(ii) [Co(NH3)6]3+ > [Co(H2O)6]3+ > [Co(CN)6]3–
(iii) [Co(H2O)6]3+ > [Co(NH3)6]3+ > [Co(CN)6]3–
(iv) [Co(CN)6]3– > [Co(NH3)6]3+ > [Co(H2O)6]3+
3. When 0.1 mol CoCl3(NH3)5 is treated with excess AgNO3, 0.2 mol of AgCl is obtained. The conductivity of the solution will correspond to
(i) 1:3 electrolyte
(ii) 1:2 electrolyte
(iii) 1:1 electrolyte
(iv) 3:1 electrolyte
4. When 1 mol CrCl3⋅6H2O is treated with excess AgNO3, 3 mol of AgCl are obtained. The formula of the complex is:
5. The correct IUPAC name of [Pt(NH3)2Cl2] is
(i) Diamminedichloridoplatinum(II)
(ii) Diamminedichloridoplatinum(IV)
(iii) Diamminedichloridoplatinum(0)
(iv) Dichloridodiammineplatinum(IV)
6.
The stabilisation of coordination compounds due to chelation is called the chelate effect. Which of the following is the most stable complex species?
7. Indicate the complex ion which shows geometrical isomerism.
8. The CFSE for octahedral [CoCl6]4– is 18,000 cm–1. The CFSE for tetrahedral [CoCl4]2– will be
(i) 18,000 cm–1
(ii) 16,000 cm–1
(iii) 8,000 cm–1
(iv) 20,000 cm–1
9. Due to the presence of ambidentate ligands, coordination compounds show isomerism. Palladium complexes of the type [Pd(C6H5)2(SCN)2] and [Pd(C6H5)2(NCS)2] are
(i) linkage isomers
(ii) coordination isomers
(iii) ionisation isomers
(iv) geometrical isomers
10. The compounds [Co(SO4)(NH3)5]Br and [Co(SO4)(NH3)5]Cl represent
(i) linkage isomerism
(ii) ionisation isomerism
(iii) coordination isomerism
(iv) no isomerism
11. A chelating agent has two or more donor atoms to bind to a single metal ion. Which of the following is not a chelating agent?
12. Which of the following species is not expected to be a ligand?
13. What kind of isomerism exists between [Cr(H2O)6]Cl3 (violet) and [Cr(H2O)5Cl]Cl2⋅H2O (greyish-green)?
(i) linkage isomerism
(ii) solvate isomerism
(iii) ionisation isomerism
(iv) coordination isomerism
14. The IUPAC name of [Pt(NH3)2Cl(NO2)] is:
(i) Platinum diaminechloronitrite
(ii) Chloronitrito-N-ammineplatinum(II)
(iii) Diamminechloridonitrito-N-platinum(II)
(iv) Diamminechloronitrito-N-platinate(II)

II. Multiple Choice Questions (Type-II)

Note: In the following questions two or more options may be correct.

15. The atomic numbers of Mn, Fe and Co are 25, 26 and 27 respectively. Which of the following inner orbital octahedral complex ions are diamagnetic?
16. The atomic numbers of Mn, Fe, Co and Ni are 25, 26, 27 and 28 respectively. Which of the following outer orbital octahedral complexes have the same number of unpaired electrons?
17. Which of the following options are correct for the [Fe(CN)6]3– ion?
(i) d2sp3 hybridisation
(ii) sp3d2 hybridisation
18. An aqueous pink solution of cobalt(II) chloride changes to deep blue on addition of excess HCl. This is because ____________.
(i) [Co(H2O)6]2+ is transformed into [CoCl6]4–
(ii) [Co(H2O)6]2+ is transformed into [CoCl4]2–
(iii) tetrahedral complexes have smaller crystal field splitting than octahedral complexes
(iv) tetrahedral complexes have larger crystal field splitting than octahedral complexes
19. Which of the following complexes are homoleptic?
(ii) [Co(NH3)4Cl2]+
20. Which of the following complexes are heteroleptic?
(ii) [Fe(NH3)4Cl2]+
21. Identify the optically active compounds from the following:
(ii) trans-[Co(en)2Cl2]+
(iii) cis-[Co(en)2Cl2]+
(iv) [Cr(NH3)5Cl]
22. Identify the correct statements for the behaviour of ethane-1,2-diamine as a ligand.
(i) It is a neutral ligand.
(ii) It is a didentate ligand.
(iii) It is a chelating ligand.
(iv) It is a unidentate ligand.
23. Which of the following complexes show linkage isomerism?
(i) [Co(NH3)5(NO2)]2+

III. Short Answer Type

24. Arrange the following complexes in increasing order of conductivity of their solutions: [Co(NH3)3Cl3], [Co(NH3)4Cl2]Cl, [Co(NH3)6]Cl3, [Cr(NH3)5Cl]Cl2
25. A coordination compound CrCl3⋅4H2O precipitates silver chloride when treated with silver nitrate. The molar conductance of its solution corresponds to a total of two ions. Write the structural formula of the compound and name it.
26. A complex of the type [M(AA)2X2]n+ is known to be optically active. What does this indicate about the structure of the complex? Give one example of such a complex.
27.
The magnetic moment of [MnCl4]2– is 5.92 BM. Explain, giving reasons.
28. On the basis of crystal field theory, explain why Co(III) forms a paramagnetic octahedral complex with weak field ligands whereas it forms a diamagnetic octahedral complex with strong field ligands.
29. Why are low spin tetrahedral complexes not formed?
30. Give the electronic configurations of the following complexes on the basis of crystal field splitting theory: [CoF6]3–, [Fe(CN)6]4– and [Cu(NH3)6]2+.
31. Explain why [Fe(H2O)6]3+ has a magnetic moment value of 5.92 BM whereas [Fe(CN)6]3– has a value of only 1.74 BM.
32. Arrange the following complex ions in increasing order of crystal field splitting energy (Δo): [Cr(Cl)6]3–, [Cr(CN)6]3–, [Cr(NH3)6]3+.
33. Why do compounds having similar geometry have different magnetic moments?
34. CuSO4.5H2O is blue in colour while CuSO4 is colourless. Why?
35. Name the type of isomerism that occurs when ambidentate ligands are attached to the central metal ion. Give two examples of ambidentate ligands.

IV. Matching Type

Note: In the following questions match the items given in Columns I and II.

36. Match the complex ions given in Column I with the colours given in Column II and assign the correct code:
|Column I (Complex ion)||Column II (Colour)|
|D.||[Ni(H2O)4(en)]2+ (aq)||4.||Yellowish orange|
(i) A (1) B (2) C (4) D (5)
(ii) A (4) B (3) C (2) D (1)
(iii) A (3) B (2) C (4) D (1)
(iv) A (4) B (1) C (2) D (3)
37. Match the coordination compounds given in Column I with the central metal atoms given in Column II and assign the correct code:
|Column I (Coordination Compound)||Column II (Central metal atom)|
(i) A (5) B (4) C (1) D (2)
(ii) A (3) B (4) C (5) D (1)
(iii) A (4) B (3) C (2) D (1)
(iv) A (3) B (4) C (1) D (2)
38. Match the complex ions given in Column I with the hybridisation and number of unpaired electrons given in Column II and assign the correct code:
|Column I (Complex ion)||Column II (Hybridisation, number of unpaired electrons)|
(i) A (3) B (1) C (5) D (2)
(ii) A (4) B (3) C (2) D (1)
(iii) A (3) B (2) C (4) D (1)
(iv) A (4) B (1) C (2) D (3)
39. Match the complex species given in Column I with the possible isomerism given in Column II and assign the correct code:
|Column I (Complex species)||Column II (Isomerism)|
(i) A (1) B (2) C (4) D (5)
(ii) A (4) B (3) C (2) D (1)
(iii) A (4) B (1) C (5) D (3)
(iv) A (4) B (1) C (2) D (3)
40. Match the compounds given in Column I with the oxidation state of cobalt present in them (given in Column II) and assign the correct code:
|Column I (Compound)||Column II (Oxidation state of Co)|
(i) A (1) B (2) C (4) D (5)
(ii) A (4) B (3) C (2) D (1)
(iii) A (5) B (1) C (4) D (2)
(iv) A (4) B (1) C (2) D (3)

V. Assertion and Reason Type

Note: In the following questions a statement of assertion followed by a statement of reason is given. Choose the correct answer out of the following choices.
(i) Assertion and reason are both true; reason is the correct explanation of assertion.
(ii) Assertion and reason are both true, but reason is not the correct explanation of assertion.
(iii) Assertion is true, reason is false.
(iv) Assertion is false, reason is true.

41. Assertion: Toxic metal ions are removed by chelating ligands.
Reason: Chelate complexes tend to be more stable.
42. Assertion: [Cr(H2O)6]Cl2 and [Fe(H2O)6]Cl2 are reducing in nature.
Reason: Unpaired electrons are present in their d orbitals.
43. Assertion: Linkage isomerism arises in coordination compounds containing ambidentate ligands.
Reason: An ambidentate ligand has two different donor atoms.
44.
Assertion: Complexes of the MX6 and MX5L types (X and L are unidentate) do not show geometrical isomerism.
Reason: Geometrical isomerism is not shown by complexes of coordination number 6.
45. Assertion: The [Fe(CN)6]3– ion shows a magnetic moment corresponding to two unpaired electrons.
Reason: Because it has d2sp3-type hybridisation.

VI. Long Answer Type

46. Using crystal field theory, draw the energy level diagram, write the electronic configuration of the central metal atom/ion and determine the magnetic moment value for the following:
(i) [CoF6]3–, [Co(H2O)6]2+, [Co(CN)6]3–
(ii) [FeF6]3–, [Fe(H2O)6]2+, [Fe(CN)6]4–
47. Using valence bond theory, explain the following in relation to the complexes given below: [Mn(CN)6]3–, [Co(NH3)6]3+, [Cr(H2O)6]3+, [FeCl6]4–
(i) Type of hybridisation.
(ii) Inner or outer orbital complex.
(iii) Magnetic behaviour.
(iv) Spin-only magnetic moment value.
48. CoSO4Cl.5NH3 exists in two isomeric forms 'A' and 'B'. Isomer 'A' reacts with AgNO3 to give a white precipitate, but does not react with BaCl2. Isomer 'B' gives a white precipitate with BaCl2 but does not react with AgNO3. Answer the following questions.
(i) Identify 'A' and 'B' and write their structural formulas.
(ii) Name the type of isomerism involved.
(iii) Give the IUPAC names of 'A' and 'B'.
49. What is the relationship between the observed colour of a complex and the wavelength of light absorbed by the complex?
50. Why are different colours observed in octahedral and tetrahedral complexes for the same metal and same ligands?

Answers

I. Multiple Choice Questions (Type-I)
1. (ii) 2. (iii) 3. (ii) 4. (iv) 5. (i) 6. (iii) 7. (i) 8. (iii) 9. (i) 10. (iv) 11. (i) 12. (ii) 13. (ii) 14. (iii)

II. Multiple Choice Questions (Type-II)
15. (i), (iii) 16. (i), (iii) 17. (i), (iii) 18. (ii), (iii) 19. (i), (iii) 20. (ii), (iv) 21. (i), (iii) 22. (i), (ii), (iii) 23. (i), (iii)

III. Short Answer Type
24. [Co(NH3)3Cl3] < [Co(NH3)4Cl2]Cl < [Cr(NH3)5Cl]Cl2 < [Co(NH3)6]Cl3
25. [Cr(H2O)4Cl2]Cl (tetraaquadichloridochromium(III) chloride)
26. An optically active complex of the type [M(AA)2X2]n+ indicates a cis octahedral structure, e.g. cis-[Pt(en)2Cl2]2+ or cis-[Cr(en)2Cl2]+.
27. The magnetic moment of 5.92 BM corresponds to the presence of five unpaired electrons in the d orbitals of the Mn2+ ion. As a result, the hybridisation involved is sp3 rather than dsp2. Thus, the tetrahedral [MnCl4]2– complex shows a magnetic moment value of 5.92 BM.
28. With weak field ligands, Δo < P; the electronic configuration of Co(III) will be t2g4 eg2, which has four unpaired electrons and is paramagnetic. With strong field ligands, Δo > P; the electronic configuration will be t2g6 eg0, which has no unpaired electrons and is diamagnetic.
29. Because for tetrahedral complexes the crystal field splitting energy is lower than the pairing energy.
30. [CoF6]3–: Co3+ (d6), t2g4 eg2; [Fe(CN)6]4–: Fe2+ (d6), t2g6 eg0; [Cu(NH3)6]2+: Cu2+ (d9), t2g6 eg3.
31. [Fe(CN)6]3– involves d2sp3 hybridisation with one unpaired electron, and [Fe(H2O)6]3+ involves sp3d2 hybridisation with five unpaired electrons. This difference is due to the presence of the strong ligand CN– and the weak ligand H2O in these complexes.
32. Crystal field splitting energy increases in the order [Cr(Cl)6]3– < [Cr(NH3)6]3+ < [Cr(CN)6]3–.
33. It is due to the presence of weak and strong ligands in complexes: if the CFSE is high, the complex will show a low value of magnetic moment and vice versa, e.g. of [CoF6]3– and [Co(NH3)6]3+, the former is paramagnetic and the latter is diamagnetic.
34.
In CuSO4.5H2O, water acts as a ligand and causes crystal field splitting, so d–d transitions are possible and the compound shows colour. In anhydrous CuSO4, due to the absence of water (ligand), there is no crystal field splitting and hence no colour.
35. Linkage isomerism
IV. Matching Type
36. (ii) 37. (i) 38. (ii) 39. (iv) 40. (i)
V. Assertion and Reason Type
41. (i) 42. (ii) 43. (i) 44. (ii) 45. (iv)
VI. Long Answer Type
46. (Energy level diagrams omitted.)
(i) [CoF6]3–: Co3+ (d6), weak field, t2g4 eg2, number of unpaired electrons = 4. [Co(H2O)6]2+: Co2+ (d7), number of unpaired electrons = 3. [Co(CN)6]3–: Co3+ (d6), strong field, t2g6 eg0, no unpaired electrons, diamagnetic.
(ii) [FeF6]3–: Fe3+ (d5), number of unpaired electrons = 5. [Fe(H2O)6]2+: Fe2+ (d6), number of unpaired electrons = 4. [Fe(CN)6]4–: Fe2+ = 3d6; since CN– is a strong field ligand all the electrons get paired, so there are no unpaired electrons and the complex is diamagnetic.
47. [Mn(CN)6]3–: Mn3+ = 3d4; (i) d2sp3 hybridisation, (ii) inner orbital complex, (iii) paramagnetic (two unpaired electrons), (iv) 2.83 BM.
[Co(NH3)6]3+: Co3+ = 3d6; (i) d2sp3 hybridisation, (ii) inner orbital complex, (iii) diamagnetic, (iv) 0 BM.
[Cr(H2O)6]3+: Cr3+ = 3d3; (i) d2sp3 hybridisation, (ii) inner orbital complex, (iii) paramagnetic (three unpaired electrons), (iv) 3.87 BM.
[FeCl6]4–: Fe2+ = 3d6; (i) sp3d2 hybridisation, (ii) outer orbital complex, (iii) paramagnetic (four unpaired electrons), (iv) 4.9 BM.
48. (i) A – [Co(NH3)5SO4]Cl, B – [Co(NH3)5Cl]SO4
(ii) Ionisation isomerism
(iii) (A) Pentaamminesulphatocobalt(III) chloride; (B) Pentaamminechlorocobalt(III) sulphate.
49. When white light falls on the complex, some part of it is absorbed. The higher the crystal field splitting, the lower the wavelength absorbed by the complex. The observed colour of the complex is the complementary colour generated from the wavelengths left over.
50. Δt = (4/9) Δo, so a higher wavelength is absorbed in an octahedral complex than in a tetrahedral complex for the same metal and ligands.
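Several of the numerical answers above (5.92 BM, 3.87 BM, 4.9 BM) come from the spin-only formula μ = √(n(n+2)) BM, where n is the number of unpaired electrons, and answer 49 is at heart a statement about E = hc/λ. A minimal Python sketch of both calculations follows; the Δo value used at the end is an assumed, illustrative number, not one taken from the text.

import math

def spin_only_moment(n: int) -> float:
    """Spin-only magnetic moment in Bohr magnetons: mu = sqrt(n(n+2))."""
    return math.sqrt(n * (n + 2))

for n in range(6):
    print(f"{n} unpaired electron(s): {spin_only_moment(n):.2f} BM")
# 1 -> 1.73 BM (~1.74 for [Fe(CN)6]3-), 3 -> 3.87 BM, 4 -> 4.90 BM,
# 5 -> 5.92 BM (as for [MnCl4]2- and [Fe(H2O)6]3+)

# Answer 49, quantitatively: Delta_o = h*c*N_A / lambda, so a larger
# splitting means a shorter absorbed wavelength.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23
delta_o = 240e3                      # J/mol, assumed value for illustration
print(h * c * N_A / delta_o * 1e9)   # ~499 nm absorbed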
Electricity and magnetism: Introduction to the cross product (Cross product 1)

I've been requested to do a video on the cross product, and its special circumstances, because I was at the point on the physics playlist where I had to teach magnetism anyway, so this is as good a time as any to introduce the notion of the cross product.

So what's the cross product? Well, we know about vector addition and vector subtraction, but what happens when you multiply vectors? There are actually two ways to do it: with the dot product or the cross product. And just keep in mind that, really, every operation we've learned is defined by human beings for some other purpose, and there's nothing different about the cross product. I take the time to say that here because the cross product, at least when I first learned it, seemed a little bit unnatural. Anyway, enough talk. Let me show you what it is.

So the cross product of two vectors: let's say I have vector a cross vector b, and the notation is literally like the times sign that you knew before you started taking algebra and using dots and parentheses, so it's literally just an x. So the cross product of vectors a and b is equal to -- and this is going to seem very bizarre at first, but hopefully we can get a little bit of a visual feel of what this means -- the magnitude of vector a times the magnitude of vector b times the sine of the angle between them, the smallest angle between them. And now, this is the kicker: this quantity is not going to be just a scalar quantity. It's not just going to have magnitude. It actually has direction, and that direction we specify by the vector n, the unit vector n. We could put a little cap on it to show that it's a unit vector.

There are a couple of things that are special about this direction that's specified by n. One, n is perpendicular to both of these vectors. It is orthogonal to both of these vectors, so we'll think in a second about what that implies visually. And the other thing is that the direction of this vector is defined by the right hand rule, and we'll see that in a second.

So let's try to think about this visually. And I have to give you an important caveat: you can only take a cross product when we are dealing in three dimensions. Maybe you could define a use for it, or a way to take a cross product, in other dimensions, but it really only has a use in three dimensions, and that's useful, because we live in a three-dimensional world.

So let's see. Let's take some cross products. I think when you see it visually, it will make a little bit more sense, especially once you get used to the right hand rule. So let's say that that's vector b. I don't have to draw a straight line, but it doesn't hurt to. I don't have to draw it neatly. OK, here we go. Let's say that that is vector a, and we want to take the cross product of them. This is vector a. This is b. I'll probably just switch to one color because it's hard to keep switching between them. And then the angle between them is theta.
Now, let's say the length of a -- the magnitude of a -- is equal to 5, and let's say that the magnitude of b is equal to 10. It looks about double that. I'm just making up the numbers on the fly. So what's the cross product? Well, the magnitude part is easy. Let's say this angle is equal to 30 degrees. 30 degrees, or if we wanted to write it in radians -- just because we grow up in a world of degrees, I always find it easier to visualize degrees, but we could think about it in terms of radians as well -- 30 degrees is pi over 6 radians.

But anyway, this is a 30-degree angle, so what will a cross b be? a cross b is going to equal the magnitude of a, the length of this vector, so it's going to be equal to 5, times the length of this b vector, so times 10, times the sine of the angle between them. And, of course, you could've taken the larger, the obtuse angle, and said this was the angle between them, but I said earlier that it was the smaller, the acute, angle between them, up to 90 degrees. So this is going to be sine of 30 degrees times this vector n. And n is a unit vector, so I'll go over what direction it's actually pointing in a second. Let's just figure out its magnitude. So this is equal to 50, and what's sine of 30 degrees? Sine of 30 degrees is 1/2. You could type it in your calculator if you're not sure. So it's 5 times 10 times 1/2 times the unit vector, so that equals 25 times the unit vector.

Now, this is where it gets, depending on your point of view, either interesting or confusing. So what direction is this unit vector pointing in? What I said earlier is, it's perpendicular to both of these. So how can something be perpendicular to both of these? It seems like I can't draw one. Well, that's because right here, where I drew a and b, I'm operating in two dimensions. But if I have a third dimension, if I could go in or out of my writing pad or, from your point of view, your screen, then I have a vector that is perpendicular to both. So imagine a vector that -- I wish I could draw it -- is literally going straight in at this point or straight out at this point. Hopefully, you're seeing it.

Let me show you the notation for that. If I draw a circle with an x in it, that is a vector that's going into the page or into the screen. And if I draw a circle with a dot, that is a vector that's popping out of the screen. And where does that convention come from? It's from an arrowhead, because what does an arrow look like? An arrow, which is our convention for drawing vectors, looks something like this: the tip of an arrow is circular and it comes to a point, so that's the tip, if you look at it head-on, if it was popping out of the video. And what does the tail of an arrow look like? It has fins, right? There would be one fin here and there'd be another fin right there. And so if you took this arrow and you were to go into the page and just see the back of the arrow, it would look like that. So this is a vector that's going into the page and this is a vector that's going out of the page.
So we know that n is perpendicular to both a and b, and the only way you can get a vector that's perpendicular to both of these is for it to be perpendicular, or normal, or orthogonal to the plane that's your computer screen. But how do we know if it's going into the screen or coming out of the screen, this vector n? And this is where the right hand rule comes in -- I know this is a little bit overwhelming; we'll do a bunch of example problems. With the right hand rule, what you do is you take your right hand -- that's why it's called the right hand rule -- and you point your index finger in the direction of the first vector in your cross product, and order matters. So let's do that. You have to take your finger and put it in the direction of the first arrow, which is a, and then you have to take your middle finger and point it in the direction of the second arrow, b. So in this case, your hand would look something like this. I'm going to try to draw it. This is pushing the abilities of my art skills. So that's my right hand. My thumb is going to be coming down, right? That is my right hand that I drew. This is my index finger, and I'm pointing it in the direction of a. Maybe it goes a little bit more in this direction, right? Then I put my middle finger, and I kind of make an L with it -- or you could say it almost looks like you're shooting a gun -- and I point that in the direction of b. Then whichever direction your thumb faces in -- so in this case, your thumb is going into the page, right? Your thumb would be going down if you took your right hand into this configuration. So that tells us that the vector n points into the page.

So the vector n has magnitude 25, and it points into the page, so we could draw it like that with an x. If I were to attempt to draw it in three dimensions, it would look something like this. Vector a. Let me see if I can give some perspective. If this was straight down, if that's vector n, then a could look something like that. Let me draw it in the same color as a. a could look something like that, and then b would look something like that. I'm trying to draw a three-dimensional figure in two dimensions, so it might look a little different, but I think you get the point. Here I drew a and b on the plane. Here I have perspective where I was able to draw n going down.

But this is the definition of a cross product. Now, I'm going to leave it there, just because for some reason, YouTube hasn't been letting me go over the limit as much, and I will do another video where I do several problems, and actually, in the process, I'm going to explain a little bit about magnetism. And we'll take the cross product of several things, and hopefully, you'll get a little bit better intuition. See you soon.
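The worked example in the transcript (|a| = 5, |b| = 10, θ = 30°, so |a × b| = 25) is easy to check numerically. Here is a small NumPy sketch; the component values are made-up vectors consistent with those magnitudes, not anything drawn in the video itself.

import numpy as np

theta = np.radians(30)
a = np.array([5.0, 0.0, 0.0])                           # |a| = 5, along x
b = 10 * np.array([np.cos(theta), np.sin(theta), 0.0])  # |b| = 10, 30 degrees from a

c = np.cross(a, b)
print(c, np.linalg.norm(c))        # [ 0.  0. 25.] 25.0 = |a||b|sin(30 deg)
print(np.dot(c, a), np.dot(c, b))  # 0.0 0.0 -> perpendicular to both inputs

Here the result points along +z (out of the page) because b sits counterclockwise from a; in the video's drawing the right hand rule sends n into the page instead. The direction flips with the orientation of the two vectors, which is exactly what the right hand rule encodes.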
A scalar is a quantity, like mass or temperature, that has only a magnitude. A vector, on the other hand, is a mathematical object that has both magnitude and direction. A line of given length pointing along a given direction, such as an arrow, is the typical representation of a vector. Typical notation to designate a vector is a boldfaced character, a character with an arrow over it, or a character with a line under it. The magnitude of a vector is its length and is normally denoted by |A| or A.

Addition of two vectors is accomplished by laying the vectors head to tail in sequence to create a triangle, such as is shown in the figure. The following rules apply in vector algebra, where P and Q are vectors and a is a scalar: P + Q = Q + P and a(P + Q) = aP + aQ.

A unit vector is a vector of unit length. A unit vector is sometimes denoted by replacing the arrow on a vector with a "^" or just adding a "^" on a boldfaced character. Any vector can be made into a unit vector by dividing it by its length: û = u/|u|. Therefore, any vector can be fully represented by providing its magnitude and a unit vector along its direction.

Base vectors are a set of vectors selected as a base to represent all other vectors. The idea is to construct each vector from the addition of vectors along the base directions. For example, the vector u in the figure can be written as the sum of the three vectors u1, u2, and u3, each along the direction of one of the base vectors e1, e2, and e3, so that u = u1 + u2 + u3. Each one of the vectors u1, u2, and u3 is parallel to one of the base vectors and can be written as a scalar multiple of that base vector. Let u1, u2, and u3 denote these scalar multipliers, so that u1 = u1 e1, u2 = u2 e2, and u3 = u3 e3. The original vector u can now be written as u = u1 e1 + u2 e2 + u3 e3. The scalar multipliers u1, u2, and u3 are known as the components of u in the base described by the base vectors e1, e2, and e3. If the base vectors are unit vectors, then the components represent the lengths, respectively, of the three vectors u1, u2, and u3. If the base vectors are unit vectors and are mutually orthogonal, then the base is known as an orthonormal, Euclidean, or Cartesian base.

A vector can be resolved along any two directions in a plane containing it. The figure shows how the parallelogram rule is used to construct vectors a and b that add up to c. In three dimensions, a vector can be resolved along any three non-coplanar lines. The figure shows how a vector can be resolved along three directions by first finding a vector in the plane of two of the directions and then resolving this new vector along the two directions in the plane.

When vectors are represented in terms of base vectors and components, addition of two vectors results in the addition of the components of the vectors. Therefore, if the two vectors A and B are represented by A = Ax i + Ay j + Az k and B = Bx i + By j + Bz k, then A + B = (Ax + Bx) i + (Ay + By) j + (Az + Bz) k.

The base vectors of a rectangular x-y coordinate system are given by the unit vectors i and j along the x and y directions, respectively. Using the base vectors, one can represent any vector F as F = Fx i + Fy j. Due to the orthogonality of the bases, one has the relations i·i = j·j = 1 and i·j = 0.

The base vectors of a rectangular coordinate system are given by a set of three mutually orthogonal unit vectors denoted by i, j, and k that are along the x, y, and z coordinate directions, respectively, as shown in the figure. The system shown is a right-handed system, since the thumb of the right hand points in the direction of z if the fingers are such that they represent a rotation around the z-axis from x to y.
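A quick numeric illustration of the components, magnitudes, and unit vectors described above (a hypothetical sketch; the component values are invented for the example):

import numpy as np

u = np.array([3.0, 4.0, 0.0])   # components (ux, uy, uz) in a Cartesian base
magnitude = np.linalg.norm(u)   # sqrt(3**2 + 4**2) = 5.0
u_hat = u / magnitude           # unit vector along u: [0.6 0.8 0. ]

v = np.array([1.0, -2.0, 2.0])
print(u + v)                    # addition is component-wise: [4. 2. 2.]
print(np.linalg.norm(u_hat))    # 1.0, as a unit vector must be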
This system can be changed into a left-handed system by reversing the direction of any one of the coordinate lines and its associated base vector.

In a rectangular coordinate system, the components of a vector are the projections of the vector along the x, y, and z directions. For example, in the figure the projections of vector A along the x, y, and z directions are given by Ax, Ay, and Az, respectively. As a result of the Pythagorean theorem, and the orthogonality of the base vectors, the magnitude of a vector in a rectangular coordinate system can be calculated by |A| = √(Ax² + Ay² + Az²).

Direction cosines are defined as cos(α) = Ax/|A|, cos(β) = Ay/|A|, and cos(γ) = Az/|A|, where the angles α, β, and γ are the angles shown in the figure. As shown in the figure, the direction cosines represent the cosines of the angles made between the vector and the three coordinate directions. The three direction cosines are not independent and must satisfy the relation cos²(α) + cos²(β) + cos²(γ) = 1. This results from the fact that Ax² + Ay² + Az² = |A|².

A unit vector can be constructed along a vector using the direction cosines as its components along the x, y, and z directions. For example, the unit vector along the vector A is obtained from Â = A/|A| = cos(α) i + cos(β) j + cos(γ) k.

The vector connecting point A to point B is given by rAB = (xB − xA) i + (yB − yA) j + (zB − zA) k. A unit vector along the line A-B can be obtained from û = rAB/|rAB|. A vector F along the line A-B and of magnitude F can thus be obtained from the relation F = F û.

The dot product is denoted by "·" between two vectors. The dot product of vectors A and B results in a scalar given by the relation A·B = |A||B| cos(θ), where θ is the angle between the two vectors. Order is not important in the dot product, as can be seen from the dot product's definition; as a result one gets A·B = B·A. Since the cosine of 90° is zero, the dot product of two orthogonal vectors is zero. Since the angle between a vector and itself is zero, and the cosine of zero is one, the magnitude of a vector can be written in terms of the dot product using the rule |A| = √(A·A).

When working with vectors represented in a rectangular coordinate system by the components A = Ax i + Ay j + Az k and B = Bx i + By j + Bz k, the dot product can be evaluated from the relation A·B = AxBx + AyBy + AzBz. This can be verified by direct multiplication of the vectors, noting that due to the orthogonality of the base vectors of a rectangular system one has i·i = j·j = k·k = 1 and i·j = j·k = k·i = 0.

Projection of a vector onto a line: the orthogonal projection of a vector along a line is obtained by moving one end of the vector onto the line and dropping a perpendicular onto the line from the other end of the vector. The resulting segment on the line is the vector's orthogonal projection, or simply its projection. The scalar projection of vector A along the unit vector ê is the length of the orthogonal projection of A along a line parallel to ê, and can be evaluated using the dot product: proj = A·ê. The vector projection of A along the unit vector ê simply multiplies the scalar projection by the unit vector to get a vector along ê: (A·ê) ê.
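The dot-product formulas above translate directly into code. In this sketch (again with invented example vectors), note that the direction cosines of A are just the components of its unit vector, and that their squares sum to 1:

import numpy as np

A = np.array([2.0, 3.0, 6.0])   # |A| = 7
B = np.array([1.0, 0.0, 1.0])

dot = np.dot(A, B)              # AxBx + AyBy + AzBz = 8.0
theta = np.degrees(np.arccos(dot / (np.linalg.norm(A) * np.linalg.norm(B))))

dir_cos = A / np.linalg.norm(A)  # direction cosines: (2/7, 3/7, 6/7)
print(np.sum(dir_cos**2))        # 1.0, as required

e = B / np.linalg.norm(B)        # unit vector along B
scalar_proj = np.dot(A, e)       # scalar projection of A on the line of B
vector_proj = scalar_proj * e    # vector projection along B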
The cross product of vectors a and b is a vector perpendicular to both a and b, with a magnitude equal to the area of the parallelogram generated from a and b: |a × b| = |a||b| sin(θ). The direction of the cross product is given by the right-hand rule. The cross product is denoted by a "×" between the vectors. Order is important in the cross product: if the order of operations changes in a cross product, the direction of the resulting vector is reversed. That is, a × b = −(b × a). The cross product also distributes over addition, a × (b + c) = a × b + a × c, and the cross product of any vector with itself is zero.

When working in rectangular coordinate systems, the cross product of vectors a = ax i + ay j + az k and b = bx i + by j + bz k can be evaluated using the determinant rule: a × b = (ay bz − az by) i + (az bx − ax bz) j + (ax by − ay bx) k. One can also use direct multiplication of the base vectors, using the relations i × j = k, j × k = i, k × i = j, and i × i = j × j = k × k = 0.

The triple product of vectors a, b, and c is given by a·(b × c). The value of the triple product is equal to the volume of the parallelepiped constructed from the vectors. This can be seen from the figure, since |b × c| is the area of the base of the parallelepiped and the dot product with a picks out its height. The triple product is unchanged under cyclic permutation of its factors: a·(b × c) = b·(c × a) = c·(a × b).

Consider vectors described in a rectangular coordinate system. The triple product can be evaluated using the determinant of the 3 × 3 matrix whose rows are the components of a, b, and c. The triple vector product satisfies the identity a × (b × c) = b(a·c) − c(a·b).
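A short numerical check of the cross- and triple-product rules above, with arbitrary example vectors:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 0.0])

print(np.cross(a, b))                   # [-3.  6. -3.]
print(np.cross(a, b) + np.cross(b, a))  # [0. 0. 0.]: a x b = -(b x a)

# Triple product = volume of the parallelepiped = 3x3 determinant
print(np.dot(a, np.cross(b, c)))           # 27.0
print(np.linalg.det(np.array([a, b, c])))  # 27.0 as well

# Triple vector product identity: a x (b x c) = b(a.c) - c(a.b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
print(np.allclose(lhs, rhs))               # True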
In this lesson, the velocity of a particle traveling on a circular path will be examined. Two different perspectives for measuring a particle's motion are its angular velocity and its linear velocity. During this lesson, we will assume that our objects are moving in uniform circular motion, that is, they are moving along a circular path at a constant speed which is neither increasing nor decreasing.

The Lesson: The angular velocity of a particle traveling on a circular path is the ratio of the angle traversed to the amount of time it takes to traverse that angle.

Let's Practice:

- Consider the Earth, which rotates on its axis once every 24 hours. This complete circle is 2π radians. Therefore, the angular velocity of the Earth's rotation is 2π/24 = π/12 ≈ 0.26 radians per hour.
- A second example is that of a ceiling fan. If the fan rotates 30 times per minute, the angular velocity is 30(2π) = 60π ≈ 188.5 radians per minute.

Angular velocity is a measure of the angular displacement per unit time. Notice that the angular velocity of the fan is much larger than the angular velocity of the Earth. However, the Earth has a much larger radius than a ceiling fan, and therefore a point on the surface of the Earth is moving much faster than the tip of a fan blade, despite the fan's much larger angular velocity. To see this, we will calculate the linear velocity of a point on the surface of the Earth and of a point on the tip of a fan blade.

- The radius of the Earth is approximately 4000 miles. The Earth is rotating at a rate of 360°/24 = 15° per hour. The distance traveled in one hour on the surface of the Earth is therefore (15/360)(2π · 4000) ≈ 1047 miles, where we use 2πr to calculate the circumference of the Earth and 15/360 as the fraction of the circumference traversed in one hour. The linear velocity is about 1047 miles per hour! We only keep from flying off the surface of the Earth because of gravity.
- To calculate the linear velocity of the fan blade, we note that the blade rotates 30 times in one minute, or 30 × 60 = 1800 times per hour. Assuming the radius of the fan is 2 feet, the circumference of the circle swept out by the blade tip is 2π(2) = 4π feet. Multiplying this by 1800 gives 7200π ≈ 22,619 feet per hour, which is only about 4.3 miles per hour. The vast difference in the lengths of the respective radii explains the difference.

We can generalize the calculations made for the Earth and the ceiling fan so that we have formulas which will work for the motion of any particle on a circular path.

- The angular velocity is the ratio of the total angular measurement through which a particle rotates to the given unit of time. If we use w to stand for angular velocity, we have w = θ/t. Reviewing the motion of the Earth, recall that the Earth has an angular velocity of π/12 radians per hour, and the linear velocity of a point on the Earth's surface was calculated by multiplying this angular velocity by the radius of the Earth: v = 4000(π/12) ≈ 1047 miles per hour. Using this as a guide, we define linear velocity, v, to be v = rw, where w is angular velocity in radians per unit time and r is the radius.
- A Ferris wheel rotates 3 times each minute. The passengers sit in seats that are 25 feet from the center of the wheel. What is the angular velocity of the wheel in degrees per minute and radians per minute? What is the linear velocity of the passengers in the seats? Three revolutions per minute is 3(360°) = 1080° per minute, or 3(2π) = 6π radians per minute. The linear velocity is calculated from v = rw. This gives us v = 25(6π) = 150π ≈ 471.2 feet per minute. We can change this to miles per hour by multiplying as follows: (150π feet/minute)(60 minutes/hour)(1 mile/5280 feet). We can simplify this result by "canceling" the minutes and feet to get approximately 5.4 miles per hour.
- An object is rotating on a circular path at 4 revolutions per minute.
The linear velocity of the object is 400 feet per minute. What is the radius of the circle and what is the angular velocity of the rotating platform? We first find the angular velocity: 4 revolutions per minute is 4(2π) = 8π radians per minute. Second, we use v = rw. This gives us 400 = r(8π), so r = 400/(8π) ≈ 15.9 feet.

- An object moves along a circular path of radius r. What is the effect on the linear velocity if the radius of this circle is doubled? The linear velocity is v = rw. If the linear velocity is recalculated using 2r as the new radius, we have v = (2r)w = 2(rw). Therefore if the radius is doubled, the linear velocity is also doubled.
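The two formulas of the lesson, w = θ/t and v = rw, cover every example above. A small Python sketch reproducing the lesson's numbers:

import math

def angular_velocity(rev_per_unit_time: float) -> float:
    """Angular velocity in radians per unit time for a given rotation rate."""
    return rev_per_unit_time * 2 * math.pi

def linear_velocity(radius: float, w: float) -> float:
    """Linear velocity on a circular path: v = r * w."""
    return radius * w

# Ferris wheel: 3 rev/min, seats 25 ft from the center
w = angular_velocity(3)           # 6*pi ~ 18.85 rad/min
v = linear_velocity(25, w)        # 150*pi ~ 471.2 ft/min
print(v * 60 / 5280)              # about 5.4 miles per hour

# Inverse problem: 4 rev/min with v = 400 ft/min gives the radius
print(400 / angular_velocity(4))  # about 15.9 feet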
Although the Goddard Space Flight Center received its official designation on the first of May 1959, Goddard's roots actually date back far beyond that. In a sense, they date back almost as far as civilization itself - for people have been gazing into the night sky and wondering about its secrets for thousands of years. In the fourth century B.C., Aristotle created a model of the universe that astronomers relied on for more than a millennium. His assumption that the universe revolved around the Earth proved to be incorrect, but his effort was no different than that of modern scientists trying to solve the riddles of black holes or dark matter.1

The roots of Goddard's work in rocket development and atmospheric research date back several centuries, as well. The first reported use of rocket technology was in the year 1232, when the Chin Tartars developed a "fire arrow" to fend off a Mongol assault on the city of Kai-feng-fu. In 1749, Scotsman Alexander Wilson was sending thermometers aloft on kites to measure upper-air temperatures. One hundred and fifty years later, meteorologists were beginning to accurately map the properties of the atmosphere using kites and balloons.2 Robert H. Goddard, for whom the Goddard Space Flight Center is named, received his first patents for a multi-stage rocket and liquid rocket propellants in 1914, and his famous paper on "A Method of Reaching Extreme Altitudes" was published in 1919. But it would not be until the close of World War II that all these long-standing interests and efforts would come together to create the foundation for modern space science and, eventually, the Goddard Space Flight Center.3

A certain amount of rocket research was being conducted in the United States even during the war. But the Germans had made far greater advancements in rocket technology. Before the end of the war, German scientists had developed a large, operational ballistic rocket weapon known as the "V-2." When the war came to a close, the U.S. military brought a number of these rockets back to the United States to learn more about their handling and operation. The Army planned to fire the V-2s at the White Sands Proving Ground in New Mexico. The Army's interest was in furthering the design of ballistic missiles. But the military recognized the research opportunity the rocket firings presented and offered to let interested groups instrument the rockets for high-altitude scientific research.4

The V-2 program helped spark the development of other rockets, and research with "sounding rockets," as these small upper atmosphere rockets were called, expanded greatly over the next few years. The results from these rocket firings also began to gain the attention of the international scientific community. In 1951, the International Council of Scientific Unions had suggested organizing a third "International Polar Year" in 1957. The first two such events had been held in 1882 and 1932 and focused on accurately locating meridians (longitudinal lines) of the Earth. A third event was proposed after an interval of only 25 years because so many rapid advances had been made in technology and instrumentation since the beginning of WWII. Scientists in the 1950s could look at many more aspects of the Earth and the atmosphere than their predecessors could even a decade earlier. In 1952, the proposed event was approved by the Council and renamed the International Geophysical Year (IGY) to reflect this expanded focus on studying the whole Earth and its immediate surroundings.5 The U.S.
scientists quickly agreed to incorporate rocket soundings as part of their contribution to the IGY. But a loftier goal soon emerged. In October 1954, the International Council's IGY committee issued a formal challenge to participating countries to attempt to launch a satellite as part of the IGY. In July 1955, President Dwight D. Eisenhower picked up the gauntlet. The United States, he announced, would launch "small, unmanned Earth-circling satellites as part of the U.S. participation in the IGY."6 In September 1956, the Soviet Union announced that it, too, would launch a satellite the following year. The race was on.

Sputnik, Vanguard, and the Birth of NASA

The U.S. satellite project was to be a joint effort of the National Academy of Sciences (NAS), the National Science Foundation (NSF) and the Department of Defense (DOD). The NAS was in charge of selecting the experiments for the satellite, the NSF would provide funding, and the Defense Department would provide the launch vehicle. Sparked by the V-2 launch program, the Naval Research Lab (NRL) already had begun work on a rocket called the Viking, and the NRL proposed to mate the Viking with a smaller "Aerobee," a rocket that had evolved from one JPL had first tested in 1945 and that was used extensively for sounding rocket research. The Viking would be the first stage, the Aerobee would be the second stage, and another small rocket would serve as the third stage. The proposal was approved and dubbed "Project Vanguard."7

Yet despite these efforts, the Americans would not be the first into space. On 4 October 1957, the Russians launched Sputnik I - and changed the world forever. The launch of Sputnik was disappointing to U.S. scientists, who had hoped to reach space first. But following good scientific etiquette, they swallowed their pride and gave credit to the Soviets for their impressive accomplishment. The rest of the U.S., however, had a very different reaction. Coming as it did at the height of the Cold War, the launch of Sputnik sent an astounding wave of shock and fear across the country. The Russians appeared to have proven themselves technologically more advanced. Aside from a loss of prestige and the possible economic consequences of falling behind the Russians in technological ability, the launch raised questions of national security, as well. If the Soviets could conquer space, what new threats could they pose? The situation was not helped by a second successful Sputnik launch a month later or the embarrassing, catastrophic failure of a Vanguard rocket two seconds after launch in early December.

Space suddenly became a national priority. Congress began ramping up to deal with the "crisis." President Eisenhower created the post of Science Adviser to the President and asked his Science Advisory Committee to develop a national policy on space. That policy would lead to the National Aeronautics and Space Act of 1958, which created the National Aeronautics and Space Administration.8 In the months following the launch of Sputnik, numerous proposals were put forth about how the development of space capability should be organized. But in the end, President Eisenhower decided that the best way to pursue a civilian space program with speed and efficiency was to put its leadership under a strengthened and redesignated National Advisory Committee for Aeronautics (NACA).
Proposed legislation for the creation of this new agency was sent to Congress on 2 April 1958 and signed into law on 29 July 1958.9 The Space Act outlined a tremendously ambitious list of objectives for the new agency.

While the administrative and political debate over a new space agency was being conducted, work continued on the IGY satellite project. The Vanguard rocket project had been approved not because the Viking and Aerobee were the only rocket programs underway, but because the military did not want to divert any of its intercontinental ballistic missile (ICBM) efforts to the civilian IGY project. But the launch of Sputnik and the subsequent Vanguard failure changed that situation. Getting a satellite into orbit was now a top national priority. In November 1957, the Army Ballistic Missile Agency was given permission to attempt the launch of a satellite using a proven Jupiter C missile from the Redstone Arsenal in Huntsville, Alabama. The United States finally achieved successful space flight on 31 January 1958, when a Jupiter rocket successfully launched a small cylinder named Explorer I into orbit.11

In retrospect, it's interesting to speculate how history might have been different had the Army's Jupiter missile been chosen as the satellite launch vehicle from the outset, rather than the Vanguard. The United States might well have beaten the Soviets into space. But without the public fear and outcry at losing our technological edge, there might well not have been the public support for the creation of NASA and its extensive space program.12

Meanwhile, the Vanguard program still continued, although it was struggling. A third rocket broke apart in flight just five days after the successful Explorer I launch. Finally, on 17 March 1958, a Vanguard rocket successfully launched Vanguard I - a six-inch sphere weighing only four pounds - into orbit. The Explorer I and Vanguard I satellites proved we could reach space. The next task was to create an organization that could manage our effort to explore it - an effort that would become one of the most enormous and expensive endeavors of the 20th century.13

Origins of the Goddard Space Flight Center

As planning began for the new space agency in the summer of 1958, it quickly became clear that a research center devoted to the space effort would have to be added to the existing NACA aeronautical research centers. The space program was going to involve big contracts and complicated projects, and the founding fathers of NASA wanted to make sure there was enough in-house expertise to manage the projects and contracts effectively. Even before the Space Act was signed into law, Hugh Dryden, who became the Deputy Administrator of NASA, began looking for a location for the new space center. Dryden approached a friend of his in the Department of Agriculture about obtaining a tract of government land near the Beltsville Agricultural Research Center in Maryland. Dr. John W. Townsend, who became the first head of the space science division at Goddard and, later, one of the Center's directors, was involved in the negotiations for the property. The process, as he recalls, was rather short. He (the Department of Agriculture representative) said, "Are you all good guys?" I said "Yes." He said, "Will you keep down the development?" I said, "Yes." He spreads out a map and says, "How much do you want?" And that was that. We had our place.14

On 1 August 1958, Maryland's Senator J.
Glenn Beall announced that the new "Outer Space Agency" would establish its laboratory and plant in Greenbelt, Maryland. But while the new research center was, in fact, built in Greenbelt, Senator Beall's press release shows how naive even decision-makers were about how huge the space effort would become. Beall confidently asserted that the research center would employ 650 people, and that "all research work in connection with outer space programs will be conducted at the Greenbelt installation."15

The initial cadre of personnel for the new space center - and NASA itself, for that matter - was assembled through a blanket transfer authority granted to NASA to insure the agency had the resources it needed to do its job. One of the first steps was the transfer of the entire Project Vanguard mission and staff from the NRL to the new space agency, a move that was actually included in the executive order that officially opened the doors of NASA on 1 October 1958.16 The 157 people in the Vanguard project became one of the first groups incorporated into what was then being called the "Beltsville Space Center." In December 1958, 47 additional scientists from the NRL's sounding rocket branch also transferred to NASA, including branch head John Townsend. Fifteen additional scientists, including Dr. Robert Jastrow, also transferred to the new space center from the NRL's theoretical division. The Space Task Group at the Langley Research Center, responsible for the manned space flight effort that would become Project Mercury, was initially put under administrative control of the Beltsville center, as well, although the group's 250 employees remained at Langley. A propulsion-oriented space task group from the Lewis Research Center was also put under the control of the new space center. The space center's initial cadre was completed in April 1959 with the transfer of a group working on the TIROS meteorological satellite for the Army Signal Corps Research and Development Laboratory in Ft. Monmouth, New Jersey.17

The Beltsville Space Center was officially designated as a NASA research center on 15 January 1959, after the initial personnel transfers had been completed. On 1 May 1959, the Beltsville facility was renamed the Goddard Space Flight Center, in honor of Dr. Robert H. Goddard.18

Although Goddard existed administratively by May of 1959, it still did not exist in any physical sense. Construction finally began on the first building at the Beltsville Space Center in April 1959,19 but it would be some time before the facilities there were ready to be occupied. In the meantime, the Center's employees were scattered around the country. The Lewis and Langley task groups were still at those research centers. The NRL scientists were working out of temporary quarters in two abandoned warehouses next to the Naval Lab facilities. Additional administrative personnel were housed in space at the Naval Receiving Station and at NASA's temporary headquarters in the old Cosmos Club Building, also known as the Dolly Madison House on H Street in Washington, D.C. Robert Jastrow's theoretical division was housed above the Mazor Furniture Store in Silver Spring, Maryland.20 The different groups may have been one organization on paper but, in reality, operations were fairly segmented. The Center did not even have an official director and would not have one until September 1959. Until then, working relationships and facilities were both somewhat improvised.
Not surprisingly, the working conditions in those early days were also less than ideal. Offices were cramped cubicles and desks were sometimes made of packing crates. Laboratory facilities were equally rough. One of the early engineers remembers using chunks of dry ice in makeshift "cold boxes" to cool circuitry panels and components. The boxes were effective, but researchers had to make sure they didn't breathe too deeply or keep their heads in the boxes too long, because the process also formed toxic carbonic acid fumes. But there was a kind of raw enthusiasm for the work - a pioneering challenge with few rules and seemingly limitless potential - that more than made up for the rudimentary facilities. It helped that many of the scientists also came from a background in sounding rockets. Sounding rocket research, especially in the early days, was a field that demanded a lot of flexibility and ingenuity. Because their work had begun long before the post-Sputnik flood of funding, these scientists were accustomed to very basic, low-budget operations. Comfort may not have been at a premium in Goddard's early days, but scientists who had braved the frigid North Atlantic to fire rockoons (rockets carried to high altitude by helium balloons before being fired) had certainly seen a lot worse.21 As 1959 progressed, Goddard continued to grow. By June, the new research center had 391 employees in the Washington area and, by the end of 1959, its personnel numbered 579.22 As the personnel grew, so did the physical facilities at the Greenbelt, Maryland, site. By September 1959, the first building was ready to be occupied. The plan for Goddard's physical facilities was to create a campus-like atmosphere that would accommodate the many different jobs the Center was to perform. The buildings were numbered in order of construction, and there was a general plan to put laboratories and computer facilities on one side, utility buildings in the center of the campus, and offices on the other side. Most of the buildings were one, two, or three-story structures that blended inconspicuously into the landscape. The one exception was Building 8, which was built to house the manned space flight program personnel. Robert Gilruth, who was in charge of the program, supposedly wanted a tall structure, so the building was designed with six stories. The original plan to incorporate the manned space flight program at Goddard also resulted in the construction of a special bay tall enough to house Mercury capsules as part of the test and evaluation facility in Building 5. By 1961, however, this aspect of NASA's program had been moved to the new space center in Houston, Texas. So Building 8 was used to house administration offices, instead.23 Even as formal facilities developed, it still took something of a pioneer's spirit to work at Goddard during the early days. The Center was built in a swampy, wooded area, and wood planks often had to be stretched across large sections of mud between parking areas and offices. And on more than one occasion, displaced local snakes found their way into employees' cars, leading to distinctive screams coming from the parking lot at the end of the day.24 Improvisation and flexibility were critical skills to have in the scientific and engineering work that was done, as well. Space was a new endeavor, and there were few guidelines as to how to proceed - either in terms of what should be done or how that goal should be accomplished. 
At the very beginning, there was no established procedure to decide which experiments should be pursued, and there was a shortage of space scientists who were interested or ready to work with satellites. As a result, the first scientists recruited or transferred to Goddard had a lot of freedom to make their own decisions about what ought to be done. In 1959, NASA Headquarters announced that it would select the satellite experiments, but a shortage of qualified scientists at that level resulted in Goddard scientists initially taking part in the evaluation process. Experiments from outside scientists were incorporated into virtually all the satellite projects, but there were soon more scientists and proposals than there were flight opportunities. The outside scientific community began to complain that Goddard scientists had an unfair advantage. It took a while to sort out, but by 1961 NASA had developed a procedure that is still the foundation of how experiments are selected today. Headquarters issues Announcements of Flight Opportunities (AOs), and scientists from around the country can submit proposals for experiments for the upcoming project. The proposals are evaluated by sub-committees organized by NASA Headquarters. The committees are made up of scientists from both NASA and the outside scientific community, but members do not evaluate proposals that might compete with their own work. These groups also conduct long-range mission planning, along with the National Academy of Sciences' Space Science Board.25 The final selection of experiments for satellite missions is made by a steering committee of NASA scientists. Because of the possibility of conflicts of interest, the selection board took care to ensure fairness in selecting space science research.26

Yet in the early days of Goddard, uncertainty about how to choose which experiments to pursue was only part of the challenge. The work itself required a flexible, pragmatic approach. Nobody had built satellites before, so there was no established support industry. Scientists drew upon their sounding rocket experience and learned as they went. Often, they learned lessons the hard way. Early summaries of satellite launches and results are peppered with notes such as "two experiment booms failed to deploy properly, however...," "Satellite's tracking beacon failed...," and, all too often, "liftoff appeared normal, but orbit was not achieved."27 Launch vehicles were clearly the weakest link in the early days, causing much frustration for space scientists. In 1959, only four of NASA's ten scientific satellite launches succeeded.28

In this environment of experimentation with regard to equipment as well as cosmic phenomena, Goddard scientists and engineers were constantly inventing new instruments, systems, and components, and they often had to fly something to see if it would really work. This talent for innovation became one of the strengths of Goddard, leading to the development of everything from an artificial sun to help test satellites, to modular and serviceable spacecraft, to solid state recorder technology and microchip technology for space applications. This entrepreneurial environment also spawned a distinct style and culture that would come to characterize Goddard's operations throughout its developmental years. It was a very pragmatic approach that stressed direct, solution-focused communication with the line personnel doing the work and avoided formal paperwork unless absolutely necessary.
One early radio astronomy satellite, for example, required a complex system to keep it pointed in the right direction and an antenna array that was taller than the Empire State Building. After heated debate as to how the satellite should be built, the project manager approved one engineer's design and asked him to document it for him. On the launch day, when asked for the still-missing documentation, the engineer ripped off a corner of a piece of notebook paper, scribbled his recommendation, and handed it to the project manager. As one of the early scientists said, the Center's philosophy was "Don't talk about it, don't write about it - do it!"29

Dedicating the new space center

This innovative and pragmatic approach to operations permeated the entire staff of the young space center, a trait that proved very useful in everything from spacecraft design to Goddard's formal dedication ceremonies. Construction of the facilities at Goddard progressed through 1959 and 1960. By the spring of 1961, NASA decided the work was far enough along to organize formal dedication ceremonies. But while there were several buildings that were finished and occupied, the Center was still lacking a few elements necessary for a dedication. A week before the ceremonies, the Secret Service came out to survey the site, because it was thought President Kennedy might attend. They told Goddard's director of administration, Mike Vaccaro, that he had to have a fence surrounding the Center. It rained for a solid week before the dedication, but Vaccaro managed to find a contractor who worked a crew 24 hours a day in the rain and mud to cut down trees and put in a chain link fence. After all that, the President did not attend the ceremonies.

But someone then decided that a dedication couldn't take place without a flagpole to mark the Center's entrance. Vaccaro had three days to find a flagpole - a seemingly impossible deadline to meet while still complying with government procurement regulations. One of his staff said there was a school being closed down that had a flagpole outside it, so Vaccaro spoke to the school board and then created a specification that described that flagpole so precisely that the school was the only bid that fit the bill. He then sent some of his staff over to dig up the flagpole and move it over to the Center's entrance gate - where it still stands today.

There was also the problem of a bust statue. The dedication ceremony was supposed to include the unveiling of a bronze bust of Robert H. Goddard. But the sculptor commissioned to create the bust got behind schedule, and all he had done by the dedication date was a clay model. Vaccaro sent one of his employees to bring the clay sculpture to the Center for the ceremonies, anyway. To make things worse, the taxi bringing the bust back to the Center stopped short at one point, causing the bust to fall to the floor of the cab. The bust survived pretty much intact, but its nose broke off. Undaunted, Vaccaro and his employees pieced the nose back together and simply spray-painted the clay bust bronze, finishing with so little time to spare that the paint was still wet when the bust was finally unveiled.30 But the ceremonies went beautifully, the Goddard Space Flight Center was given its formal send-off, and the Center could settle back down to the work of getting satellites into orbit.

The Early Years

In the view of those who were present at the time, the 1960s were a kind of golden age for Goddard.
There was an entrepreneurial enthusiasm among its employees, and NASA was too new and still too small to have much in the way of bureaucracy, paperwork, or red tape. The scientists were being given the opportunity to be the first into a new territory. Sounding rockets and satellites weren't just making little refinements of already known phenomena and theories - they were exploring the space around Earth for the first time. Practically everything the scientists did was something that had never been done before, and they were discovering significant and new surprises and phenomena on almost every flight. Because of the impetus behind the Mercury, Gemini and Apollo space programs, space scientists also suddenly found themselves with a level of funding they had never had available before. Although there were many frustrations associated with learning how to operate in space and develop reliable technology that could survive its rigors, support for that effort was almost limitless. The Apollo program was "the rising tide that lifted all boats," as one Goddard manager put it. There was also a sense of mission, importance and purpose that has been difficult to duplicate since. We were going to space and we were going to be first to the Moon, and our national security, prestige, and pride were seen as dependent on how well we did the job.31

The Goddard Institute for Space Studies

In this kind of environment, both the space program and Goddard grew quickly. Even before Goddard completed its formal dedication ceremonies, plans were laid for the establishment of a separate Goddard Institute for Space Studies in New York City. Two of the big concerns in the early days of the space program were attracting top scientists to work with the new agency and insuring there would be space-skilled researchers coming out of the universities. Early in NASA's development, the agency set aside money for both research and facilities grants to universities to help create strong space science departments.32 But one of Goddard's early managers thought the link should be personal as well as financial. Dr. Robert Jastrow had transferred to Goddard to head up the theoretical division in the fall of 1958. He argued that if Goddard wanted to attract the top theoretical physicists from academia to work with the space program, it had to have a location more convenient to leading universities. By late 1960, he had convinced managers at Goddard and Headquarters to allow him to set up a separate Goddard institute in New York.

The Goddard Institute for Space Studies (GISS) provided a gathering point for theoretical physicists and space scientists in the area. But the institute offered them another carrot, as well - some of the most powerful computers in existence at the time. The computers were a tremendous asset in crunching the impossibly big numbers involved in problems of theoretical physics and orbital projections. Over the years, the Goddard Institute organized conferences and symposia and offered research fellowships to graduate students in the area. It also kept its place at the forefront of computer technology. In 1975, the first fourth generation computer to be put into use anywhere in the United States was installed at the Goddard Institute in New York.33

Goddard's international ties and projects were expanding quickly, as well. In part the growth was natural, because Goddard and the space program itself grew out of an international scientific effort - the International Geophysical Year.
Scientists also tended to see their community as global rather than national, which made international projects much easier to organize. Furthermore, the need for a world-wide network of ground stations to track the IGY satellites forced the early space scientists and engineers to develop working relationships with international partners even before NASA existed. These efforts were enhanced both by the Space Act that created NASA, which specified international cooperation as a priority for the new agency, and by the simple fact that there was significant interest among other countries in doing space research.

Early NASA managers quickly set down a very simple policy about international projects that still guides the international efforts NASA undertakes. There were only two main rules. The first was that there would be no exchange of funds between NASA and international partners: each side would contribute part of the project. The second was that the results would be made available to the whole international community. The result was a number of highly successful international satellites created by joint teams who worked together extremely well - sometimes so well that it seemed that they all came from a single country.34

In April 1962, NASA launched Ariel I - a joint effort between Goddard and the United Kingdom and the first international satellite. Researchers in the U.K. developed the instruments for the satellite, and Goddard managed development of the satellite and the overall project. Ariel was followed five months later by Alouette I, a cooperative venture between NASA and Canada. Although Alouette was the second international satellite, it was the first satellite in NASA's international space research program that was developed entirely by another country.35 These early satellites were followed by others. Over the years, Goddard's international ties grew stronger through additional cooperative scientific satellite projects and the development of ground station networks. Today, international cooperation is a critical component of both NASA's scientific satellite and human space flight programs.

The work Goddard conducted throughout the 1960s was focused on basics: conquering the technical challenges of even getting into space, figuring out how to get satellites to work reliably once they got there, and starting to take basic measurements of what existed beyond the Earth's atmosphere. The first few satellites focused on taking in situ measurements of forces and particles that existed in the immediate vicinity of Earth, but the research quickly expanded to astronomy, weather satellites, and communication satellites. Indeed, one of the initial groups that was transferred to form Goddard was a group from the Army Signal Corps that was already working on development of a weather satellite called the Television Infrared Observation Satellite (TIROS). The first TIROS satellite was launched in April 1960. Four months later, the first communications satellite was launched into a successful orbit. The original charter for NASA limited its research to passive communications satellites, leaving active communications technology to the Department of Defense. So the first communications satellite was an inflatable mylar sphere called "Echo," which simply bounced communications signals back to the ground.
The limitation against active communications satellite research was soon lifted, however, and civilian prototypes of communications satellites with active transmitters were in orbit by early 1963.36

As the 1960s progressed, the size of satellites grew along with the funding for the space program. The early satellites were simple vehicles with one or two main experiments. Although small satellites continued to be built and launched, the mid-1960s saw the evolution of a new Observatory class of satellites, as well - spacecraft weighing as much as one thousand pounds, with multiple instruments and experiments. In part, the bigger satellites reflected advances in launch vehicles that allowed bigger payloads to get into orbit. But they also paralleled the rapidly expanding sights, funding, and goals of the space program.

The research conducted with satellites also expanded during the 1960s. Astronomy satellites were a little more complex to design, because they had to have the ability to remain pointed at one spot for a length of time. Astronomers also were not as motivated as their space physics colleagues to undertake the challenge of space-based research, because many astronomy experiments could be conducted from ground observatories. Nonetheless, space offered the opportunity to look at objects in regions of the electromagnetic spectrum obscured by the Earth's atmosphere. The ability to launch larger satellites brought that opportunity within reach as it opened the door to space-based astronomy telescopes. Goddard launched its first Orbiting Astronomical Observatory (OAO) in 1966. That satellite failed, but another OAO launched two years later was very successful. These OAO satellites laid the groundwork for Goddard's many astronomical satellites that followed, including the Hubble Space Telescope. Goddard scientists also were involved in instrumenting some of the planetary probes that were already being developed in the 1960s, such as the Pioneer probes into interplanetary space and the Ranger probes to the Moon.

The other main effort underway at Goddard in the 1960s involved the development of tracking and communication facilities and capabilities for both the scientific satellites and the manned space flight program. Goddard became the hub of the massive, international tracking and communications wheel that involved aircraft, supertankers converted into mobile communications units, and a wide diversity of ground stations. This system provided NASA with a kind of "Internet" that stretched not only around the world, but into space, as well. Every communication to or from any spacecraft came through this network. A duplicate mission control center was also built at Goddard in case the computers at the main control room at the Johnson Space Center in Houston, Texas failed for any reason.

Whether it was in tracking, data, satellite engineering, or space science research, the 1960s were a heady time to work for NASA. The nation was behind the effort, funding was flowing from Congress faster than scientists and engineers could spend it, and there was an intoxicating feeling of exploration. Almost everything Goddard was doing had never been done before. Space was the new frontier, and the people at Goddard knew they were pioneers in the endeavor of the century. This is not to say that there were no difficulties, frustrations, problems, or disappointments in the 1960s. Tensions between the Center and NASA Headquarters increased as NASA projects got bigger.
Goddard's first director, Harry Goett, came to Goddard from the former NACA Ames Research Center. He was a fierce defender of his people and believed vehemently in the independence of field centers. Unfortunately, Goddard was not only almost in Headquarters' back yard, it was also under a much more intense spotlight because of its focus on space.

The issues between Goddard and NASA Headquarters were not unique to Goddard, or even to NASA. Tension exists almost inherently between the Headquarters and field installations of any institution or corporation. While both components are necessary to solve the myriad of big-picture and hands-on problems the organization faces, their different tasks and perspectives often put Headquarters and field personnel in conflict with each other. In order to run interference for field offices and conduct long-range planning, funding, or legislative battles, Headquarters personnel need information and a certain amount of control over what happens elsewhere in the organization. Yet to field personnel who are shielded from these large-scale threats and pressures, this oversight and control is often seen as unwelcome interference.

In the case of NASA, Headquarters had constant pressure from Congress to know what was going on, and it had a justifiable concern about managing budgets and projects that were truly astronomical. To allow senior management to keep tabs on different projects and to maintain a constant information flow from the Centers to Headquarters, NASA designated program managers at Headquarters who would oversee the agency's various long-term, continuing endeavors, such as astronomy. Those program managers would oversee the shorter-term individual projects, such as a single astronomy satellite, that were being managed by Goddard or the other NASA field centers.37

These program managers were something of a sore spot for Goett and the Goddard managers, who felt they knew well enough how to manage their work and, like typical field office managers, sometimes saw this oversight as unwelcome interference. Managers at other NASA Centers shared this opinion, but the tension was probably higher at Goddard because it was so close to Headquarters. Program managers wanted to sit in on meetings, and Goett wanted his project managers and scientists left alone. Tensions over authority and management escalated between Goett and Headquarters until Goett was finally replaced in 1965.38

The increasing attention paid to the space program had other consequences, as well. If it created more support and funding for the work, it also put projects in the eye of a public that didn't necessarily understand that failure was an integral part of the scientific process. The public reaction to early launch failures, especially the embarrassing Vanguard explosion in December 1957, made it very clear to the NASA engineers and scientists that failure, in any guise, was unacceptable. This situation intensified after the Apollo I fire in 1967 that cost the lives of three astronauts. With each failure, oversight and review processes got more detailed and complex, and the pressure to succeed intensified. As a result, Goddard's engineers quickly developed a policy of intricate oversight of contractors and detailed testing of components and satellites. Private industry has become more adept at building satellites, and NASA is now reviewing this policy with the view that it may increase costs unnecessarily and duplicate manpower and effort.
In the future, satellites may be built more independently by private companies under performance-based contracts with NASA. But in the early days, close working relationships with contractors and detailed oversight of satellite building were two of the critical elements that led to Goddard's success.

The Post-Apollo Era

The ending of the Apollo program brought a new era to NASA, and to Goddard, as well. The drive to the Moon had unified NASA and garnered tremendous support for space efforts from Congress and the country in general. But once that goal was achieved, NASA's role, mission and funding became a little less clear. In some ways, Goddard's focus on scientific missions and a diversity of projects helped protect it from some of the cutbacks that accompanied the end of the Apollo program in 1972. But there were still two Reductions in Force (RIFs)39 at Goddard after the final Apollo 17 mission that hurt the high morale and enthusiasm that had characterized the Center throughout its first decade.

Yet despite the cutbacks, the work at Goddard was still expanding into new areas. Even as the Apollo program wound down, NASA was developing a new launch vehicle that would become known as the Space Shuttle. The primary advantage of the Shuttle was seen as its reusable nature. But an engineer at Goddard named Frank Cepollina saw another distinct opportunity with the Shuttle. With its large cargo bay and regular missions into low Earth orbit, he believed the Shuttle could be used as a floating workshop to retrieve and service satellites in orbit.

Goddard had already pioneered the concept of modular spacecraft design with its Orbiting Geophysical Observatory (OGO) satellites in the 1960s. But in 1974, Cepollina took that concept one step further by proposing a Multi-mission Modular Spacecraft (MMS) with easily replaceable, standardized modules that would support a wide variety of different instruments. The modular approach would not only reduce manufacturing costs, it would also make it possible to repair the satellite on station, because repairing it would be a fairly straightforward matter of removing and replacing various modules.

The first modular satellite was called the "Solar Max" spacecraft. It was designed to look at solar phenomena during a peak solar activity time and was launched in 1980. About a year after launch it developed problems and, in 1984, it became the first satellite to be repaired in space by Shuttle astronauts. The servicing allowed the satellite to gather additional valuable scientific data. But perhaps the biggest benefit of the Solar Max repair mission was the experience it gave NASA in servicing satellites. That experience would prove invaluable a few years later when flaws discovered in the Hubble Space Telescope forced NASA to undertake a massive and difficult repair effort to save the expensive and high-visibility Hubble mission.40

Goddard made significant strides in space science in the years following Apollo, developing projects that would begin to explore new wavelengths and farther distances in the galaxy and the universe. The International Ultraviolet Explorer (IUE), launched in 1978, has proven to be one of the most successful and productive satellites ever put into orbit. It continued operating for almost 19 years - 14 years beyond its expected life span - and generated more data and scientific papers than any other satellite to date. Goddard's astronomy work also expanded into the high-energy astronomy field in the 1970s.
The first Small Astronomy Satellite, which mapped X-ray sources across the sky, was launched in 1970. A gamma-ray satellite followed in 1972. Goddard also had instruments on the High Energy Astronomical Observatory (HEAO) satellites, which were managed by the Marshall Space Flight Center.41

The HEAO satellites also marked the start of a competition between Marshall and Goddard that would intensify with the development of the Hubble Space Telescope. When the HEAO satellites were being planned in the late 1960s and early 1970s, Goddard had a lot of different projects underway. Senior managers at the Marshall Space Flight Center, however, were eagerly looking for new work projects to keep the center busy and alive. Marshall's main project had been the development of the Saturn rocket for the Apollo program and, with the close of the Apollo era, questions began to come up about whether Marshall was even needed anymore. When the HEAO project came up, the response of Goddard's senior management was that the Center was too busy to take on the project unless the Center was allowed to hire more civil servants to do the work. Marshall, on the other hand, enthusiastically promised to make the project a high priority and assured Headquarters that it already had the staff on board to manage it. In truth, Marshall had a little bit of experience with building structures for astronomy, having developed the Apollo Telescope Mount for Skylab, and the Center had shown an interest in doing high-energy research. When it got the HEAO project, however, Marshall still had an extremely limited space science capability. From a strictly scientific standpoint, Goddard would have been the logical center to run the project. But the combination of the available work force at Marshall and the enthusiasm and support that Center showed for the project led NASA Headquarters to choose Marshall over Goddard to manage the HEAO satellites.

The loss of HEAO to Marshall was a bitter pill for some of Goddard's scientists to swallow. Goddard had all but owned the scientific satellite effort at NASA for more than a decade and felt a great deal of pride and investment in the expertise it had developed in the field. It was an adjustment to have to start sharing that pie. What made the HEAO loss particularly bitter in retrospect, however, was that it gave Marshall experience in telescope development - experience that factored heavily in Headquarters' decision to award the development of the Hubble Space Telescope to Marshall, as well.

There were other reasons for giving the Hubble telescope to Marshall - including concern among some in the external scientific community that Goddard scientists still had too much of an inside edge on satellite research projects. Goddard was going to manage development of Hubble's scientific instruments and operation of the telescope once it was in orbit. If Goddard managed the development of the telescope as well, its scientists would know more about all aspects of this extremely powerful new tool than any of the external scientists. By giving the telescope project to Marshall to develop, that perceived edge was softened a bit. Indeed, Hubble was perceived to be such a tremendously powerful tool for research that the outside community did not even want to rely on NASA Headquarters to decide which astronomers should be given time on the telescope.
At the insistence of the general astronomical community, an independent Space Telescope Science Institute was set up to evaluate and select proposals from astronomers wanting to conduct research with the Hubble. The important point, however, was that the telescope project was approved. It would become the largest astronomical telescope ever put into space - a lens into mysteries and wonders of the universe no one on Earth had ever been able to see before.42

The field of space-based Earth science, which in a sense had begun with the first TIROS launch in 1960, also continued to evolve in the post-Apollo era. The first of a second generation of weather satellites was launched in 1970 and, in 1972, the first Earth Resources Technology Satellite (ERTS) was put into orbit. By looking at the reflected radiation of the Earth's land masses with high resolution in different wavelengths, the ERTS instruments could provide information about the composition, use and health of the land and vegetation in different areas. The ERTS satellite became the basis of the Landsat satellites that still provide remote images of Earth today. Other satellites developed in the 1970s began to look more closely at the Earth's atmosphere and oceans, as well. The Nimbus-7 satellite, for example, carried new instruments that, among other things, could measure the levels of ozone in the atmosphere and phytoplankton in the ocean.

As instruments and satellites that could explore the Earth's resources and processes evolved, however, Earth scientists found themselves caught in the middle of an often politically charged tug-of-war between science and application. Launching satellites to look at phenomena or gather astronomical or physics data in space typically has been viewed as a strictly scientific endeavor whose value lies in the more esoteric goal of expanding knowledge. Satellites that have looked back on Earth, however, have always been more closely linked with practical applications of their data - a fact that has both advantages and disadvantages for the scientists involved.

When Goddard began, all of the scientific satellites were organized under the "Space Sciences and Applications" directorate. Although the Center was working on developing weather and communications satellites, the technology and high resolution instruments needed for more specific resource management tasks did not yet exist. In addition, it was the height of the space race, and science and space exploration for its own sake had a broad base of support in Congress and in society at large. In the post-Apollo era, however, NASA found itself needing to justify its expenditures, which led to a greater emphasis on proving the practical benefits of space. At NASA headquarters, a separate "applications" office was created to focus on satellite projects that had, or could have, commercial applications. In an effort to focus efforts on more "applications" research (communications, meteorology, oceanography and remote imaging of land masses) as well as scientific studies, Goddard's senior management decided to split out "applications" functions into a new directorate at the Center, as well.

In many ways, the distinction between science and application is a fine one. Often, the data collected is the same - the difference lies only in how it is analyzed or used. A satellite that maps snow cover over time, for example, can be used to better understand whether snow cover is changing as a result of global climate system changes.
But that same information is also extremely useful in predicting snow melt runoff, which is closely linked with water resource management. A satellite that looks at the upper atmosphere will collect data that can help scientists understand the dynamics of chemical processes in that region. That same information, however, can also be used to determine how much damage pollutants are causing or whether we are, in fact, depleting our ozone layer. For this reason, Earth scientists can be more affected by shifting national priorities than their space science counterparts.43

The problem is the inseparable policy implications of information pertaining to our own planet. If we discover that the atmosphere of Mars is changing, nobody feels any great need to do anything about it. If we discover that pollutants in the air are destroying our own atmosphere, however, it creates a great deal of pressure to do something to remedy the situation. Scientists can argue that information is neutral - that it can show less damage than environmentalists claim as well as more severe dangers than we anticipated. But the fact remains that, either way, the data from Earth science research can have political implications that impact the support those efforts receive. The applicability of data on the ozone layer, atmospheric pollution and environmental damage may have prompted additional funding support at times when environmental issues were a priority. But the political and social implications of this data also may have made Earth science programs more susceptible to attack and funding cuts when less sympathetic forces were in power.44

Yet despite whatever policy issues complicate Earth science research, advances in technology throughout the 1970s certainly made it possible to learn more about the Earth and get a better perspective on the interactions between ocean, land mass and atmospheric processes than we ever had before.

The Space Shuttle Era

As NASA moved into the 1980s, the focus that drove many of the agency's other efforts was the introduction of the Space Shuttle. In addition to the sheer dollars and manpower it took to develop the new spacecraft, the Shuttle created new support issues and had a significant impact on how scientific satellites were designed and built. In the Apollo era, the spacecraft travelled away from the Earth, so a ground network of tracking stations could keep the astronauts in sight and in touch with mission controllers at almost all times. The Shuttle, however, was designed to stay in near-Earth orbit. This meant that the craft would be in range of any given ground station for only a short period of time. This was the case with most scientific satellites, but real-time communication was not as critical when there were no human lives at stake. Satellites simply used tape recorders to record their data and transmitted it down in batches when they passed over various ground stations. Shuttle astronauts, on the other hand, needed to be in continual communication with mission control.

Goddard had gained a lot of experience in communication satellites in the early days of the Center and had done some research with geosynchronous communication satellite technology in the 1970s that offered a possible solution to the problem. A network of three geosynchronous satellites, parked in high orbits 22,300 miles above the Earth, could keep any lower Earth-orbiting satellite - including the Space Shuttle - in sight at all times.
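That 22,300-mile figure can be checked with a few lines of arithmetic. The worked example below is my own illustration, not part of the original history; the constants are standard textbook values. A satellite whose orbital period matches one sidereal day must sit at the radius given by Kepler's third law:

```python
import math

# Standard physical constants (not figures from the text above).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m
T = 86164.1          # one sidereal day, s

# Kepler's third law: T^2 = 4*pi^2 * a^3 / (G*M), solved for the radius a.
a = (G * M_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_miles = (a - R_EARTH) / 1609.34
print(f"{altitude_miles:,.0f} miles")   # roughly 22,240 miles above the surface
```

The small difference from the rounded 22,300-mile figure quoted in the text is simply a matter of rounding conventions.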
In addition to its benefits to the Shuttle program, the system could save NASA money over time by eliminating the need for the worldwide network of ground stations that tracked scientific satellites. The biggest problem with such a system was its development costs. NASA budgets were tight in the late 1970s and did not have room for a big budget item like the proposed Tracking and Data Relay Satellite System (TDRSS). So the agency worked out an arrangement to lease time on the satellites from a contractor who agreed to build the spacecraft at its own cost. Unfortunately, the agreement offered NASA little control or leverage with the contractor, and the project ran into massive cost and schedule overruns. It was a learning experience for NASA, and not one managers recall fondly. Finally, Goddard renegotiated the contract and took control of the TDRSS project. The first TDRSS satellite was finally launched from the Space Shuttle in April 1983. The second TDRSS was lost with the Shuttle "Challenger" in 1986, but the system finally became operational in 1989.

The TDRSS project also required the building of a new ground station to communicate with the satellites and process their data. The location best suited for maximum coverage of the satellites was at the White Sands Missile Range in New Mexico. So in 1978, Goddard began building the TDRSS White Sands Ground Terminal (WSGT). The first station became operational in 1983, and a complete back-up facility, called the Second TDRSS Ground Terminal (STGT), became operational in 1994. The second station was built because the White Sands complex is the sole ground link for the TDRSS, and the possibility of losing contact with the Shuttle was unacceptable. The second site ensures that there will always be a working communications and data link for the TDRSS satellites.45

The edict that TDRSS would also become the system for all scientific satellite tracking and data transmission did not please everyone, because it meant every satellite had to be designed with the somewhat cumbersome TDRSS antennas. But the Shuttle's impact on space science missions went far beyond tracking systems or antenna design. Part of the justification of the Shuttle was that it could replace the expendable launch vehicles (rockets) used by NASA and the military to get satellites into orbit. As a result, the stockpile of smaller launch rockets was not replenished, and satellites had to be designed to fit in the Shuttle bay instead.

There were some distinct advantages to using the Space Shuttle as a satellite launch vehicle. Limitations on size and weight - critical factors with the smaller launch vehicles - became much less stringent, opening the door for much bigger satellites. Goddard's Compton Gamma Ray Observatory, for example, weighed more than 17 tons. The Space Shuttle also opened up the possibility of having astronauts service satellites in space.46 On the other hand, using the Shuttle as the sole launch vehicle complicated the design of satellites, because they now had to undergo significantly more stringent safety checks to make sure their systems posed no threat to the astronauts who would travel into space with the cargo. But the biggest disadvantage of relying exclusively on the Shuttle hit home with savage impact in January 1986 when the Shuttle "Challenger" exploded right after lift-off.
The Shuttle fleet was grounded for almost three years and, because the Shuttle was supposed to eliminate the need for them, there were few remaining expendable launch rockets. Even if there had been a large number of rockets available, few of the satellites that had been designed for the spacious cargo bay of the Shuttle would fit within the smaller weight and size limitations of other launch vehicles. Most satellites simply had to wait for the Shuttle fleet to start flying again.

The 1980s brought some administrative changes to Goddard, as well. NASA's Wallops Island, Virginia flight facility had been created as an "Auxiliary Flight Research Station" associated with the NACA's Langley Aeronautical Laboratory in 1945.47 Its remote location on the Atlantic coast of Virginia made it a perfect site for testing aircraft models and launching small rockets. As the space program evolved, Wallops became one of the mainstays of NASA's sounding rocket program and operated numerous aircraft for scientific research purposes, as well. It also launched some of the National Science Foundation's smaller research balloons and provided tracking and other launch support services for NASA and the Department of Defense. Yet although its work expanded over the years, Wallops' small size, lower-budget projects, and remote location allowed it to retain the pragmatic, informal, entrepreneurial style that had characterized Goddard and much of NASA itself in the early days of the space program. People who worked at Wallops typically came from the local area, and there was a sense of family, loyalty, and fierce independence that characterized the facility.

As one of NASA's smaller research stations, however, Wallops was in a less protected political position than some of its larger and higher profile counterparts. In the early 1980s, a proposal emerged to close the Wallops Station as a way of reducing NASA's operating costs. In an effort to save the facility, NASA managers decided instead to incorporate Wallops into the Goddard Space Flight Center. Goddard was a logical choice because Wallops was already closely linked with Goddard on many of its projects. The aircraft at Wallops were sometimes used to help develop instruments that later went on Goddard satellites. Goddard also had a sounding rocket division that relied on Wallops for launch, range, tracking and data support. As time went on, Wallops had begun to develop some of the smaller, simpler sounding rocket payloads, as well. By the late 1970s, NASA headquarters was even considering transferring Goddard's entire sounding rocket program to Wallops.

In 1982, Wallops Island Station became the Wallops Island Flight Facility, managed under the "Suborbital Projects and Operations" directorate at Goddard.48 At the same time, the remaining sounding rocket projects at Goddard-Greenbelt were transferred down to Wallops. The personnel at Goddard who had been working on sounding rockets had to refocus their talents. So they turned their entrepreneurial efforts to the next generation of small-budget, hands-on projects - special payloads for the Space Shuttle.49 As the 1980s progressed, Goddard began putting together a variety of small payloads to take up spare room in the Shuttle cargo bay. They ranged from $10,000 "Get Away Special" (GAS) experiments that even schoolchildren could develop to multi-million dollar Spartan satellites that the Shuttle astronauts release overboard at the start of a mission and pick up again before returning to Earth.
The Post-Challenger Era: A New Dawn

All of NASA was rocked on the morning of 28 January 1986 when the Shuttle "Challenger" exploded 73 seconds after launch. While many insiders at NASA were dismayed at what appeared to have been a preventable tragedy, they were not, as a whole, surprised that the Shuttle had had an accident. These were people who had witnessed numerous rockets with cherished experiments explode or fail during the launch process. They had lived through the Orbiting Solar Observatory accident, the Apollo 1 fire, and the Apollo 13 crisis. They knew how volatile rocket technology was and how much of a research effort the Shuttle was, regardless of how much it was touted as a routine transportation system for space. These were veteran explorers who knew that for all the excitement and wonder space offered, it was a dangerous and unforgiving realm. Even twenty-five years after first reaching orbit, we were still beginners, getting into space by virtue of brute force. There was nothing routine about it.

It was an understanding of just how risky the Shuttle technology was that drove a number of people within NASA to argue against eliminating the other, expendable launch vehicles. The Air Force was also concerned about relying on the Shuttle for all its launch needs. The Shuttle accident, however, settled the case. A new policy supporting a "mixed fleet" of launch vehicles was created, and expendable launch vehicles went back into production.50

Unfortunately, a dearth of launch vehicles was not the only impact the Challenger accident had on NASA or Goddard. The tragedy shattered NASA's public image, leading to intense public scrutiny of its operations and a general loss of confidence in its ability to conduct missions safely and successfully. Some within NASA wondered if the agency would even survive. To make things worse, the Challenger accident was followed four months later by the loss of a Delta rocket carrying a new weather satellite into orbit, and a year later by the loss of an Atlas-Centaur rocket carrying a Department of Defense satellite. While these were not NASA projects, the agency received the criticism and the consequent public image of a Federal entity that could not execute its tasks. Launches all but came to a halt for almost two years, and even the scientific satellite projects found themselves burdened with more safety checks and oversight processes. The Shuttle resumed launches in 1989, but NASA took another hit in 1990 when it launched the much-touted Hubble Space Telescope, only to discover that the telescope had a serious flaw in its main mirror. As the last decade of the century began, NASA needed some big successes to regain the nation's confidence in the agency's competence and value. Goddard would help provide those victories.

One of Goddard's biggest strengths was always its expertise in spacecraft construction. Most of the incredibly successful Explorer class of satellites, for example, were built in-house at Goddard. But the size and complexity of space science projects at Goddard - and even the Center's Explorer satellites - had grown dramatically over the years. From the early Explorer spacecraft, which could be designed, built and launched in one to three years, development and launch cycles had grown until they stretched 10 years or more. Aside from the cost of these large projects, they entailed much more risk for the scientists involved.
If a satellite took 15 years from inception to launch, its scientists had to devote a major portion of their careers to the project. If it failed, the cost to their careers would be enormous. In part, the growth in size and complexity of satellites was one born of necessity. To get sharp images of distant stars, the Hubble Space Telescope had to be big enough to collect large amounts of light. In the more cost-conscious era following Apollo, where new satellite starts began to dwindle every year, the pressure also increased to put as many things as possible on every new satellite that was approved.

But in 1989, Tom Huber, Goddard's director of engineering, began advocating for Goddard to begin building a new line of smaller satellites. In a sense, these "Small Explorers," or SMEX satellites, would be a return to Goddard's roots in innovative, small and quickly produced spacecraft. But because technology had progressed, they could incorporate options such as fiber optic technology, standard interfaces, solid state recorders, more advanced computers that fit more power and memory into less space, and miniature gyros and star trackers. Some of these innovations, such as the solid state recorders and advanced microchip technology for space applications, had even been developed in-house at Goddard. As a result, these small satellites could be even more capable than some of the larger projects Goddard had built in the past. The goal of the SMEX satellites was to cost less than $30 million and take less than three years to develop. The program has proved highly successful, launching five satellites since 1992, and is continuing to develop advanced technology to enable the design of even more capable, inexpensive spacecraft.51

In late 1989, Goddard launched the Cosmic Background Explorer (COBE) satellite aboard a Delta launch rocket. Originally scheduled for launch aboard the Space Shuttle, the COBE satellite, which was built in-house at Goddard, had been totally redesigned in less than 36 months after the Challenger accident to fit the nose cone of a Delta rocket. Using complex instruments, COBE went in search of evidence to test the "Big Bang" theory of how the universe began - and found it. Famed cosmologist Stephen Hawking called the NASA-University COBE team's discovery "the discovery of the century, if not of all time."52 The COBE satellite had perhaps solved one of the most fascinating mysteries in existence - the origins of the universe in which we live. It had taken 15 years to develop, but the COBE satellite offered the public proof that NASA could take on a difficult mission, complete it successfully, and produce something of value in the process.

Goddard reached out into another difficult region of the universe when it launched the Compton Gamma Ray Observatory in 1991. The Compton was the second of NASA's planned "Four Great Observatories" that would explore the universe in various regions of the electromagnetic spectrum. The Hubble Space Telescope was to cover the visible and ultraviolet regions, the Compton was to explore the gamma ray region, and two additional observatories were to investigate phenomena in X-ray and infrared wavelengths. At over 17 tons, the Compton was the largest satellite ever launched into orbit, and its task was to explore some of the highest-energy and most perplexing phenomena in the cosmos.
Three years later, Goddard found itself taking on an even more difficult challenge when the Center undertook the first Hubble servicing mission - better known as the Hubble repair mission. The odds of successfully developing and implementing a fix for the telescope, which had a flaw not in one instrument but in its central mirror, were estimated at no better than 50%. But because of Goddard's earlier successful pioneering efforts with serviceable satellites, the Hubble had been designed to be serviced in space. This capability, and Goddard's previous experience repairing the Solar Max satellite, provided the critical components that made the Hubble repair possible. Fired with the same enthusiasm and sense of crisis that had fueled the Apollo program, the Goddard team assigned to manage the project, working with a hand-picked Shuttle crew from Houston's Johnson Space Center, succeeded beyond expectation. The success of such a difficult mission earned the team a Collier Trophy - the nation's highest award for the greatest aeronautical achievement in any given year.53

Even as Goddard launched the Compton Observatory and the Hubble Space Telescope to explore new regions of the universe, NASA announced the start of a massive new initiative to explore the planet we call home. Dubbed "Mission to Planet Earth" when it was introduced in 1990, the effort was expected to spend thirty billion dollars over at least 15 years in order to take a long-term, systems-oriented look at the health of the planet. In some ways, the program was a natural outgrowth of increasing environmental concerns over the years and the improved ability of satellites to analyze the atmosphere and oceans of our planet. But it received a big boost when a hole in the ozone layer was discovered in 1985. That discovery, as one researcher put it, "dramatized that the planet was at risk, and the potential relevance of NASA satellite technology to understanding that risk." In the wake of the Challenger disaster, Mission to Planet Earth was also seen as one of the top "leadership initiatives" that could help NASA recover from the tragedy and regain the support of the American public.54

Although numerous NASA centers would participate in the MTPE effort, the program office was located at Goddard. It was a natural choice, because Goddard was the main Earth Science center in the agency anyway. Earth Science was broken out of the Space and Earth Sciences directorate, and its research began to take on a new sense of relevance in the public eye. As with earlier Earth science efforts, however, the political and social implications of this data also have made the program more susceptible to shifting national priorities than its space science counterparts. In the past eight years, the program has been scaled back repeatedly. Its budget is now down to seven billion dollars, and the name of the program has been changed to Earth Science Enterprise.55 There are numerous reasons for the cutback of the program. But it can be argued that we find money for the items that are high national priorities. And one factor in the changing fortunes of the Mission to Planet Earth program is undeniably the shifting agendas that affect NASA funding.
Nevertheless, the more moderate Earth Science Enterprise program will still give scientists their first real opportunity to study the planet's various oceanographic and atmospheric processes as an integrated system instead of individual components - a critical step toward understanding exactly how our planet operates and how our actions impact its health.

In short, Goddard's work in the early 1990s helped bring NASA out of the dark post-Challenger era and helped create a new energy, enthusiasm and curiosity about both planet Earth and other bodies in the universe. We now had the technology to reach back to the very beginning of time and the outer reaches of the universe. The Hubble servicing mission made possible the beautiful images of far-away galaxies, stars, nebulae and planets that now flow into publications on a regular basis. These images have not only provided valuable clues to scientific questions about the cosmos, they have also fired the imaginations of both children and adults, generating a new enthusiasm for space exploration and finding out more about the galaxy and universe we call home. At the same time, we had the technology to begin to piece together answers about where El Nino weather patterns came from, how our oceans and atmosphere work together to create and control our climate, and how endangered our environment really is. These advances provided critical support for NASA at a time when many things about the agency, and the Goddard Space Flight Center, were changing.

Better, Faster, Cheaper

As we head into the twenty-first century, the world is changing at a rapid pace. The electronic superhighways of computers and communications are making the world a smaller place, but the marketplace a more global one. Concerns about the United States' competitiveness are growing as international competition increases. The crisis-driven days of the space race are also over, and cost now is a serious concern when Congress looks at whether or not additional space projects should be funded. This need to be more cost-efficient is driving changes both within Goddard itself and in its relationships with outside industry.

Goddard recently underwent a major administrative reorganization in the hopes of making better use of its engineers' time. Instead of being scattered around the Center, its almost 2,000 engineers are being organized almost entirely into either a new Applied Engineering and Technology (AET) directorate or a new Systems, Technology, and Advanced Concepts (STAAC) directorate. In essence, AET will provide the hands-on engineering support for whatever projects are underway at the Center, and STAAC will work on advanced concepts and systems engineering for future projects.

Again, this change in matrix structure within Goddard is not a new concept. The Center has gone back and forth a couple of times between putting engineers with scientists on project teams and trying to follow a stricter discipline-oriented organization. The advantages of a project-based organization are that the engineers get to really focus on one job at a time and build synergistic relationships with the scientists with whom they are working. These relationships often lead to innovative ideas or concepts that the individual engineers or scientists might not have come to on their own.
The disadvantage of this structure, which is a greater concern in times of tighter budgets, is that even if those engineers have excess time during lulls in the project, it can't easily be taken advantage of by anyone else in the Center. Their talent is tied up in one place, which can also lead to territorial "fiefdoms" instead of a more ideal Center-wide cooperation.56 At the present time, the changes are administrative only. The engineers are still being co-located with their scientist colleagues. How or if that changes in the future remains to be seen, as does the success of the reorganization in general. After all, the impact of any administrative change is determined more by how it is implemented than how it looks on paper, and the success of that can only be determined once the change has been made.57

Another issue facing Goddard is the recurring question of who should be building the spacecraft. One of the strengths of Goddard has always been its in-house ability to design and build both spacecraft and instruments. The Center's founders created this in-house capability for two reasons. First, there was little in the way of a commercial spacecraft industry at the time Goddard was started. Second, although most of the satellites actually would be built by contractors, the founders of NASA believed that the agency had to have hands-on knowledge of building spacecraft in order to manage those contracts effectively. Over the years, the commercial spacecraft industry has grown and matured tremendously, leading to periodic discussions as to whether NASA should leave the spacecraft building jobs entirely to the private sector. After all, there is general agreement that the government, in the form of NASA, should not do what industry is capable of doing.

In truth, however, the issue isn't quite that simple. In the late 1970s, Goddard's senior management all but stopped in-house satellite building at the Center, focusing the engineers' efforts on instrument building, instead. The rationale was that industry was capable of building satellites and NASA should be working on developing advanced technology sensors and instruments. Yet even aside from the argument that keeping in-house competence was necessary to effectively manage contracts with industry, there were flaws to this rationale. For one thing, building satellites in-house had a significant indirect effect on the employees at Goddard. The ability to help design spacecraft helped attract bright young engineers to the Center, which is always an important concern in a field where industry jobs generally pay better than NASA positions. Furthermore, knowing that some of the spacecraft sitting on top of launch vehicles had been built in-house gave Goddard employees a sense of pride and involvement in the space program that instrument building alone could not create. Taking away that element caused a huge drop in the Center's morale. Indeed, when Tom Young became the Center's director in 1980, one of his first moves was to restore the building of in-house satellites in the hopes of rebuilding morale.58

The commercial space industry has matured even further in the past 20 years, and the question about whether Goddard still should be building in-house satellites has been raised again in recent years. In the end, the answer is probably "Yes". The question lies more in the type and number of satellite projects the Center should undertake.
The goal is for Goddard to pursue one or two in-house projects that involve advanced spacecraft technology and to contract out projects that involve more proven spacecraft concepts. At the same time, Goddard is taking advantage of the expertise now present in the commercial satellite industry by introducing a new "Rapid Spacecraft Procurement Initiative," with the goal of reducing the development time and cost of new spacecraft. By "pre-qualifying" certain standard spacecraft designs from various commercial satellite contractors, Goddard hopes to make it possible for some experiments to be integrated into a spacecraft and launched within as short a time frame as a year. Not every experiment can be fit into a standard spacecraft design, but there are certainly some which could benefit from this quick-turnaround system. The contracts developed by Goddard for this initiative are now being used not only by other NASA Centers, but by the Air Force, as well.59

A more complex issue is how involved NASA should be in even managing the spacecraft built by industry. Historically, Goddard has employed a very thorough and detailed oversight policy with the contracts it manages. One of the reasons the Center developed this careful, conservative policy was to avoid failure in the high-profile, high-dollar realm of NASA. As a result, the concern of NASA engineers tends to be to make sure the job is done right, regardless of the cost. While industry engineers have the same interest in excellence and success, they sometimes have greater pressure to watch the bottom line. Goddard managers quote numerous examples of times contractors only agreed to conduct additional pre-launch tests after Goddard engineers managing the contract insisted on it. They also recall various instances where Goddard finally sent its own engineers to a contractor's factory to personally supervise projects that were in trouble. Industry, on the other hand, can argue that Goddard's way of building satellites is not necessarily the only right way and that this double-oversight slows down innovation and greatly increases the cost of building satellites. And in an era of decreasing federal budgets, deciding how much oversight is good or enough becomes an especially sticky issue.

Currently, the trend seems to be toward a more hands-off, performance-based contract relationship with industry. Industry simply delivers a successful satellite or doesn't get paid. Some argue that a potential disadvantage to this approach is that it could rob industry engineers of the advice and experience Goddard might be able to offer. Goddard's scientists and engineers have a tremendous corporate memory and have learned many lessons the hard way. So sharing that expertise might prove more cost-effective in the long run than the bottom line salary and labor allocation figures of a more hands-off system might suggest.

In the end, there is truth in what all parties say. It's hard to say what the "right" answer is because, for all our progress in the world of space, we are still feeling our way and learning from our mistakes as we keep reaching out to try new things. The exact nature and scope of NASA's mission has been the subject of frequent debate since the end of the Apollo program. But NASA certainly has an edict to do those things that, for reasons of cost, risk, or lack of commercial market value, industry cannot or will not undertake. In the early 1960s, the unknowns and risk of failure were far too high and the potential profit far too uncertain for industry to fund the development of anything but communication satellites.

Today, that situation is changing. In some cases, a commercial market for the data is developing. In others, the operations once considered too risky for anyone but NASA to perform are now considered routine enough to contract out to private companies, which are also much more capable than they once were. Some tracking and data functions that were a part of the Goddard Space Flight Center since its inception, for example, were recently moved down to the Marshall Space Flight Center, where they will be managed by a private company under contract to a supervising Space Operations Management Office at the Johnson Space Center.60

NASA is also starting to relinquish its hold on the launching of rockets itself. In years past, all launches were conducted at government facilities for reasons of both safety and international politics. But that is beginning to change. The state of Virginia is already in the process of building a commercial space port at the Wallops Island facility in partnership with private industry. The payloads and launch vehicles using the space port will be developed privately, and the consortium will contract with NASA to provide launch range, radar, telemetry, tracking and safety analysis services.61 NASA has also used a privately developed, airplane-launched rocket called the "Pegasus" to send a number of small satellites into space. It should be noted, however, that the Pegasus vehicle went through a series of developmental problems before it became a reliable system.

The same is true of the SeaWiFS satellite, which is currently providing very useful data on ocean color but which was developed under a very different type of contract than most scientific satellites. The SeaWiFS spacecraft was developed independently by the Orbital Sciences Corporation, and NASA pays only for the data it uses. While the satellite is now generating very good data, it ran into many developmental difficulties and delays that caused both NASA and the contractor a lot of aggravation. On the one hand, because NASA paid the majority of the money up front, there was less incentive for the contractor to keep on schedule. On the other hand, the up-front, fixed-price lease meant that the contractor absorbed the costs of the problems and delays when they occurred.62

Fixed-price contracts work well in many arenas. The complication with scientific satellites is that these spacecraft are not generally proven designs. It's difficult to foresee ahead of time what problems are going to arise in a research project that's breaking new ground. Indeed, there are a lot of uncertainties amidst the tremendous atmosphere of change facing Goddard, NASA and the world at large, and it remains to be seen how they will all work out. Most likely, it will take a number of missteps and failures before the right mix and/or approach is found. The process will also undoubtedly entail the same pendulum swings between different approaches that have characterized Goddard throughout its history. And since external circumstances and goals are constantly changing, there may never be one "correct" mix or answer found. In the end, our efforts in space are still an exploration into the unknown. On the cutting edge of technology and knowledge, change is the only constant - in theories of the universe as well as technology, priorities, and operating techniques.
Once upon a time, Goddard's biggest challenge was overcoming the technical obstacles to operating in space. Today, Goddard's challenge is to find the flexibility to keep up with a rapidly changing world without losing the magic that has made the Center so successful over the past forty years. The new frontier for Goddard is now much broader than just space itself. The Center has to be open to reinventing itself, infusing new methods and a renewed sense of entrepreneurial innovation and teamwork into its operations while continuing to push boundaries in technology development, space and Earth exploration for the benefit of the human race. It has to be flexible enough to work as part of broader NASA, university, industry, and international teams in a more global and cost-constrained space industry and world. It has to find a way to reach forward into new areas of research, commercial operations, and more efficient procedures without losing the balance between cost and results, science and engineering, basic research and applications, inside and outside efforts. And, most importantly, Goddard has to accomplish all of these things while preserving the most valuable strength it has - the people who make it all possible.
http://history.nasa.gov/SP-4312/ch2.htm
In mathematics, an equation is an expression of the form A = B, where A and B are expressions containing one or several variables called unknowns. An equation looks like an equality, but has a very different meaning: an equality is a mathematical statement asserting that the left-hand side and the right-hand side of the equals sign (=) are the same or represent the same mathematical object; for example, 2 + 2 = 4 is an equality. An equation, on the other hand, is not a statement but a problem consisting of finding the values, called solutions, that, when substituted for the unknowns, transform the equation into an equality. For example, 2 is the unique solution of the equation x + 2 = 4, in which the unknown is x.

"Equation" may also refer to a relation between some variables that is expressed by the equality of some expressions of their values. For example, the equation of the unit circle is x² + y² = 1, which means that a point belongs to the circle if and only if its coordinates are related by this equation. Most physical laws are expressed by equations. One of the most popular is Einstein's equation E = mc².

The = symbol was invented by Robert Recorde (1510–1558), who considered that nothing could be more equal than parallel straight lines with the same length.

Centuries ago, the word "equation" frequently meant what we now usually call "correction" or "adjustment". This meaning is still occasionally found, especially in names which were originally given long ago. The "equation of time", for example, is a correction that must be applied to the reading of a sundial in order to obtain mean time, as would be shown by a clock.

Parameters and unknowns

Equations often contain variables other than the unknowns. These other variables, which are supposed to be known, are usually called constants, coefficients or parameters. Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax² + bx + c = 0. The process of finding the solutions, or, in the case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.

A system of equations is a set of simultaneous equations, usually in several unknowns, for which the common solutions are sought. Thus a solution to the system is a set of one value for each unknown, which is a solution to each equation in the system. For example, the system has the unique solution x = -1, y = 1.

Analogous illustration

Each side of the balance corresponds to each side of the equation. Different quantities can be placed on each side: if they are equal, the balance corresponds to an equality (equation); if not, then an inequality. In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights; each of x, y, z has a different weight. Addition corresponds to adding weight, subtraction corresponds to removing weight from what is already placed on. The total weight on each side is the same.
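The notions introduced so far - a unique solution, solving in terms of parameters, and the common solution of a system - can be made concrete with a computer algebra system. This is an illustrative sketch assuming the sympy library; the system of two equations at the end is my own example, not the one referred to in the text above.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = sp.symbols('a b c')

# The unique solution of x + 2 = 4.
print(sp.solve(sp.Eq(x + 2, 4), x))                  # [2]

# Solving the general quadratic a*x**2 + b*x + c = 0 for the unknown x
# expresses the solutions in terms of the parameters a, b, c
# (sympy returns the two branches of the quadratic formula).
print(sp.solve(sp.Eq(a*x**2 + b*x + c, 0), x))

# A system of simultaneous equations in two unknowns; its common
# solution is x = -1, y = 1.
print(sp.solve([sp.Eq(x + y, 0), sp.Eq(x - y, -2)], [x, y]))  # {x: -1, y: 1}
```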
Types of equations

Equations can be classified according to the types of operations and quantities involved. Important types include:

- An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree.
- A Diophantine equation is an equation where the unknowns are required to be integers.
- A transcendental equation is an equation involving a transcendental function of its unknowns.
- A parametric equation is an equation for which the solutions are sought as functions of some other variables, called parameters, appearing in the equations.
- A functional equation is an equation in which the unknowns are functions rather than simple quantities.
- A differential equation is a functional equation involving derivatives of the unknown functions.
- An integral equation is a functional equation involving the antiderivatives of the unknown functions.
- An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions.
- A difference equation is an equation where the unknown is a function f which occurs in the equation through f(x), f(x-1), ..., f(x-k), for some whole number k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation.

An identity is a statement resembling an equation which is true for all possible values of the variable(s) it contains. Many identities are known, especially in trigonometry. Probably the best known example is sin²θ + cos²θ = 1, which is true for all values of θ.

In the process of solving an equation, it is often useful to combine it with an identity to produce an equation which is more easily soluble. For example, to solve the equation 3 sin θ cos θ = 1, where θ is known to be between zero and 45 degrees, use the identity sin 2θ = 2 sin θ cos θ, so the above equation becomes (3/2) sin 2θ = 1, giving θ = (1/2) arcsin(2/3), which comes to about 20.9 degrees.

Two equations or two systems of equations are equivalent if they have the same set of solutions. The following operations transform an equation or a system into an equivalent one:

- Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
- Multiplying or dividing both sides of an equation by a non-zero constant.
- Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
- For systems: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.

If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions, called extraneous solutions. If the function is not defined everywhere (like 1/x, which is not defined for x = 0), some solutions may be lost. Thus, caution must be exercised when applying such a transformation to an equation. For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (that is, applying the function s → s² to both sides of the equation) changes the equation into x² = 1, which not only has the previous solution but also introduces the extraneous solution x = -1.

The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
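A short sketch (again assuming the sympy library; the variable names are mine) makes the extraneous-solution warning concrete and double-checks the 20.9-degree answer worked out above.

```python
import sympy as sp

x = sp.symbols('x')

# The equation x = 1 has exactly one solution...
print(sp.solve(sp.Eq(x, 1), x))        # [1]

# ...but squaring both sides gives x**2 = 1, which keeps that solution
# and introduces the extraneous solution -1.
print(sp.solve(sp.Eq(x**2, 1), x))     # [-1, 1]

# The trigonometric example: theta = (1/2)*arcsin(2/3), converted to degrees.
theta = sp.asin(sp.Rational(2, 3)) / 2
print((theta * 180 / sp.pi).evalf(5))  # 20.905
```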
http://en.wikipedia.org/wiki/Equation
A binary star is actually a name for a star system made up of two stars that orbit a common center of mass. The brightest star of a binary pair is called the primary, while the other star is called the companion, or secondary. In astrophysics, binary stars easily allow for the calculation of the mass of the individual stars: their mutual orbit can be used to precisely calculate each star's mass through Newtonian calculations. Through this, the radius and density of the individual stars can be calculated indirectly. The data collected from various binary stars also allow the calculation of mass for similar single stars through extrapolation. It is estimated that a third of the stars in our galaxy are binary and multiple star systems.
Binary stars are classified by the method of observation used to discover them. These include eclipsing, visual, spectroscopic, and astrometric binaries.
An eclipsing binary is a binary system where the orbital plane is close enough to the line of sight of an observer that the individual stars will eclipse each other. Eclipsing binaries are also classified as variable stars, stars that have regular and periodic changes in apparent magnitude. This variation in brightness may come from a pair of binary stars as one passes in front of the other as seen from the point of the observer. If the two stars are of different sizes, the larger star will block the other in a total eclipse, while the smaller star will partially dim the larger one via an annular eclipse. This change in brightness over regular periods of time is known as the light curve. The first eclipsing binary discovered was the star system Algol. The star had been known to change periodically in magnitude since ancient times, but in 1881 it was discovered that this was because Algol was not one star but two in close orbit.
A visual binary is a binary pair where the individual stars are visible through a telescope that has the appropriate resolving power. Brighter stars are more difficult to resolve as visual binaries than dimmer ones, due to glare. A binary may also be difficult to resolve visually if the primary (brighter star) is significantly more luminous than its companion, effectively washing out the other star. By measuring the position angle of the companion star relative to the primary and the angular distance between the two stars over time, the ellipse, called the apparent ellipse, which is the orbit of the secondary with respect to the primary, can be plotted out. From the measured semi-major axis and orbital period of the binary's orbit, the mass of the stars can be determined. Visual binaries are quite common and in fact make up many of the most well-known and prominent stars in the night sky, such as Castor. Three of the six closest known stars are visual binaries: Alpha Centauri, Luyten 726-8, and Sirius.
If it is impossible to resolve the star as a binary visually, then one method of determining whether or not a star is part of a binary pair is to analyze the light emitted from the star system using a spectrograph; binaries observed this way are known as spectroscopic binaries. When a spectrograph is used to indirectly observe a star, it spreads the light from that star into a full spectrum of colors superimposed with dark absorption lines. If the star observed in this method is suspected to be a binary star, the spectrum that is analyzed is from both stars together.
If a Doppler shift is observed in this spectrum over time, that indicates there is indeed a binary pair. The Doppler effect arises because, as the two stars orbit their common center of mass, one star moves closer to us while its companion moves away. As this happens, the spectral lines of the star moving closer will be blue shifted, while the spectral lines of its companion moving away will be red shifted. As the stars move across our line of sight in their orbit, the locations of the absorption lines of each star will momentarily coincide. As the first star moves away and its companion approaches, the opposite shifting of wavelengths will occur, with the companion star's spectral lines blue shifting as it approaches, while the first star's spectral lines red shift as it moves away in its orbit.
One of the most well-known spectroscopic binaries is the star system Mizar. The star system was already known as a visual binary when, in 1889, it was discovered that Mizar A, the primary star of the visual pair, had its own close companion. This companion was the first star to be found using spectroscopy. Later it was observed that Mizar B also had its own spectroscopic companion.
An astrometric binary is a star system where only one star can be observed but an unseen companion is inferred through a perturbation ("wobble") of the star's proper motion in space. This perturbation is caused by the companion's gravitational influence on the observable primary star. Sirius is the best known example of an astrometric binary: in 1844 Friedrich Bessel observed a wobble in the motion of Sirius A and theorized that the star had an unseen companion. Later, with improving telescope technology, the companion white dwarf was confirmed visually.
An optical binary is a pair of stars that visually appear next to each other from the point of view of an observer using the unaided eye. The reason for this is that the two stars lie nearly along the same line of sight, giving the illusion that they are part of a binary pair. However, in reality the two stars are a great distance from each other and are not gravitationally bound as a single system. A prime example of this is Alpha Capricorni, traditionally seen as a binary with the individual stars referred to as α1 Capricorni and α2 Capricorni; however, the former is 690 light years away from Earth, while the latter is only 109 light years away.
Cataclysmic binaries, sometimes referred to as cataclysmic variable stars or cataclysmic variables, are very close binary pairs that will suddenly and irregularly increase in brightness before returning to their normal magnitude. The two components of a cataclysmic binary consist of a white dwarf primary and an M class secondary (ranging from a main sequence star to a giant). The two stars are sufficiently close that the white dwarf distorts and draws off material from the secondary. This infalling matter, mostly hydrogen, forms an accretion disk around the white dwarf. Instabilities in this accretion disk can lead to what is known as a dwarf nova. There are two types of cataclysmic binaries, non-magnetic and magnetic. The non-magnetic types are by far the most common; these include U Geminorum stars as well as those that are the source of classical and recurrent novae.
Much rarer are the magnetic types, where a powerful magnetic field surrounds the primary white dwarf star, greatly affecting how the material flows from the secondary star, as well as locking the two stars into a synchronous rotation.
The observation of binary stars began with the invention of the telescope, with the first known recordings in the 17th century. Giovanni Battista Riccioli discovered in 1650 that Mizar was actually a binary. Christiaan Huygens found that Theta Orionis was actually three stars in 1656, Robert Hooke made the same observation about Gamma Arietis in 1664, and in 1685 Father Fontenay observed that the star Acrux was really a binary pair. William Herschel was the first to coin the term binary star. He defined the term in 1802 as:
- The union of two stars, that are formed together in one system, by the laws of attraction.
Herschel began his observation of binaries in 1779. The result was a catalog of over 700 double star systems, as recorded in his book Catalogue of 500 new Nebulae ... and Clusters of Stars; with Remarks on the Construction of the Heavens in 1802. By the next year, he had concluded that these double stars must be binary systems. It was not until 1827, though, that an actual orbit of a binary star system was calculated; this was accomplished by Félix Savary for the star Xi Ursae Majoris. Today over 100,000 binary star systems have been cataloged, although the actual orbits of only a few thousand of these are known, and some cataloged stars may be only optical binaries.
Evolution of Binary Stars
Binary pairs have been observed in protostars that have yet to reach the main sequence, suggesting that binaries form in the early stages of star formation. This could be due to the fragmentation of the molecular cloud as the stars first form, allowing for multiple stars to be created in the same system. Because the members of most binary pairs have different masses, one star will evolve off the main sequence before its companion. In this scenario, several different events may happen. The two stars may remain detached if the companion is far enough away and not very massive. The two stars will be semi-detached if the star that has evolved into a giant is close enough to its companion that it exceeds its Roche lobe, losing mass to its companion through accretion. In this situation, much of the giant's mass may transfer to the companion star, actually making the companion the more massive of the two, despite its still being on the main sequence. Algol is the prime example of this. In some situations, the two stars of the binary system are so close that the expanding giant may actually come into atmospheric contact with its companion. This is known as a contact binary. In this situation, the very close companion may cause the atmosphere of the giant to literally "splash away," leaving a naked core. The companion, meanwhile, may spiral toward the core of the former giant from the friction of their atmospheres. The result will be a merged core that becomes a white dwarf.
Type I supernova
If one of the stars in a binary system is a white dwarf that is close enough to its companion that the companion exceeds its Roche lobe, the white dwarf will steadily accrete gas from its companion's atmosphere. This accreted gas builds up and compresses on the surface of the white dwarf, due to the high gravity at its surface. The gas will then steadily heat up as more material accumulates.
The result is that hydrogen fusion may occur on the surface of the dwarf and, through the tremendous release of energy, throw the rest of the collected gas off in a brilliant flash. This extremely bright event is called a nova. The above event will occur as long as the additional mass the white dwarf accreted from its companion doesn't cause the star to exceed the Chandrasekhar limit, which is 1.4 solar masses. If the material buildup on the white dwarf exceeds this limit, the electron degeneracy pressure of the white dwarf itself will no longer be able to hold the star up against gravity. In this scenario a type I supernova will occur that destroys the star. The most famous example of this is the supernova SN 1572, considered one of the foremost events in astronomy. Tycho Brahe observed this event extensively.
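As a rough numerical sketch of the mass determination described above for visual binaries (the function name and the Sirius figures here are illustrative assumptions, not taken from the article), Kepler's third law in solar-system units gives the combined mass directly:

```python
# Kepler's third law for a visual binary, in convenient units:
# with the semi-major axis a in astronomical units and the orbital
# period P in years, the combined mass comes out in solar masses:
#     M1 + M2 = a**3 / P**2
def combined_mass_solar(a_au: float, period_years: float) -> float:
    return a_au**3 / period_years**2

# Values close to those of the Sirius A/B pair (a ~ 19.8 AU, P ~ 50.1 yr)
# give roughly 3 solar masses for the system as a whole.
print(combined_mass_solar(19.8, 50.1))  # ~3.1
```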
http://www.conservapedia.com/Binary_star
Tuesday, May 31, 2011
Among the more important and practical aspects of Basic Thermodynamics, one finds heat conductivity. This is especially useful in the design and construction of buildings to ensure the optimum materials are used, say to make possible staying warm in harsh winters, or staying cool in incendiary summers (which we'll soon see with global warming). A very basic laboratory experiment for the investigation of heat conductivity is shown in Fig. 1. Also included is the corresponding diagrammatical layout showing the component parts, including: different thermometers (which will be at different temperatures t1, t2, t3 and t4), steam inlet and outlet pipes (left side), steam jacket and water jacket.
In the experiment we pass steam through the steam jacket and adjust the flow of water through the water jacket to a small stream. After a while, a steady-state flow (indicated by a constant difference (t2 - t1)) will be achieved, whereupon the flow of water is adjusted to give a difference between thermometers t3 and t4 of about 10 F. One continues observations of the readings of all 4 thermometers until a steady-state condition is reached. Once this is established, we read and record t1, t2, t3 and t4, and catch all the warm water flowing out of the water jacket for a time interval T ~ 10 mins. (The longer the duration of a given trial, the more accurate the results. Needless to say, the thermometers ought to be scrutinized carefully throughout, and if any marked fluctuations occur, a new trial should be started, because otherwise the experimental errors will be too large.) Finally, one determines the mass of water collected, records the time interval T, and the readings of the four thermometers. The distance L between the thermometers t1 and t2 will also be measured, as well as the diameter d of the test rod.
During each test trial, the value of heat Q transferred to the water is determined, which will be estimated by:
Q = k A T(t2 - t1)/L
where k denotes the 'thermal conductivity' of the material (which will be provided), A is the cross-sectional area, L the length, T the duration of the trial, and (t2 - t1) the temperature difference. If a known mass of water (m) passes through the jacket, then the total heat received from the end of the test rod will be:
Q = mc(t4 - t3)
where c is the specific heat capacity of water. Of course, the experiment can also be performed with the objective of determining k, the thermal conductivity. If this is the case one will make use of the relationship:
k A T(t2 - t1)/L = mc(t4 - t3)
so that, on solving for k:
k = mc(t4 - t3)L / [A T(t2 - t1)]
In Fig. 2, a simple diagram is shown which describes the basic principle of heat conductivity. The temperature gradient is defined as (T2 - T1)/x, and the heat passing through per second, per unit area, is Q/t = k(T2 - T1)/x. That is, the product of the thermal conductivity and the temperature gradient.
Find the quantity of heat Q transferred through 2 square meters of a brick wall 12 cm thick in 1 hour, if the temperature on one side is 8 C and the temperature is 28 C on the other (k = 0.13 W/mK). Then:
Q = kAt[T2 - T1]/x
Q = (0.13 W/mC)(2 m^2)(3600 s)[20 C/0.12 m] = 156,000 Joules
1) A student performs the heat conductivity experiment as shown in Fig. 1, and determines the thermal conductivity of copper to be 390 W/mC. If he then measures the thermometer differences (t4 - t3) = 5 C and (t2 - t1) = 2 C, using 0.5 kg of warmed water, and his copper test rod for the experiment is 0.5 m long, what would its cross-sectional area A have to be? (Take the specific heat capacity of water = 4200 J/kg K.) See the sketch below for a quick numerical check of the worked example above and of this problem.
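Here is a minimal Python sketch of the two formulas in play. Note that the trial duration T is not stated in problem 1, so the T ~ 10 min figure from the experimental write-up is assumed here:

```python
# Minimal sketch of the two conduction formulas from this post.

def heat_conducted(k, area, seconds, dT, thickness):
    """Q = k * A * t * (T2 - T1) / x, in joules."""
    return k * area * seconds * dT / thickness

# Worked brick-wall example: k = 0.13 W/m·C, 2 m^2, 1 hour, 20 C across 12 cm.
print(heat_conducted(0.13, 2.0, 3600, 20, 0.12))  # 156000.0 J

# Problem 1: solve k*A*T*(t2 - t1)/L = m*c*(t4 - t3) for the rod's
# cross-sectional area A. The trial duration T is not given in the
# problem, so T ~ 10 min = 600 s is assumed, as in the write-up.
m, c, dt_water, L, k_cu, dt_rod, T = 0.5, 4200, 5, 0.5, 390, 2, 600
A = m * c * dt_water * L / (k_cu * T * dt_rod)
print(A)  # ~0.0112 m^2
```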
For problem 1, also obtain the % of error in the student's result by looking up the actual thermal conductivity of copper.
2) A plate of copper 0.4 cm thick has a temperature difference of 60 C between its faces. Find: a) the temperature gradient, and b) the quantity of heat that flows through each square centimeter of one face each minute.
3) How many calories per minute will be conducted through a window glass 80 cm x 100 cm and 2 mm thick if the temperature difference between the two sides is 20 C?
4) A group of 4 astronauts lands on Mars with solar radiation collection material of total area 2000 m^2. If the efficiency of the material is 30%, and the ambient night-time temperature on Mars for their base location (Isidis Planitia) is -40 C (10 C day time), will they have adequate collecting material if the solar constant on Mars is 620 W/m^2? (Assume insulating material with a thermal conductivity of 0.08 W/mC, and a need to keep the inside area of their domicile at least at 10 C, requiring solar radiant energy collected of at least 1,200 W per minute for an area of 10 m x 10 m.) Estimate the thickness of insulating material they're likely to need in order to make it work. Comment on whether this expedition is even feasible given the limits of their materials, and that no more than 100 m^3 of insulating material can be taken.
Two items appearing in the news in the past week disclose that the party of Lincoln - once proud and standing on principle and the nation's welfare - now stands for nothing but crass self-interest and ideology. The items and the GOP reaction show that if this vile party ever gains control of the levers of power again, we are all for the high jump, whether most Americans recognize that or not!
The first item concerns the vote to be taken this evening in the House of Representatives on whether or not to raise the debt ceiling. Never mind that in all the decades past this was an automatic, routine decision. No one, no party or person of power in his or her right mind wanted to see the U.S. effectively made to appear as if in default. Not even a remote "cosmetic" default, which is what the Reeps are claiming this vote amounts to, since Treasury Secretary Timothy Geithner has extended the actual deadline to Aug. 2 by being resourceful in terms of government spending. But rather than wait, the Goopers want to hold their vote two months in advance, for what....well, political posturing! Certainly not reality!
Their claim is they are the party of "responsibility" because they have proposed massive cuts to Medicare (via substituting a voucher system for actual government payments) while the Dems have proposed nothing similar. But this is a false dichotomy set up only for the benefit of the weak-minded, the gullible, and those not paying attention. Unsaid, especially by the Repukes, is that the Republican budget plan actually INCREASES federal deficits by $5.4 trillion over ten years! It does this by not doing squat about military-defense cuts (current military spending is nearly 5% of GDP, prompting one former defense analyst, Chuck Spinney, to assert it amounts to a "war on Social Security and Medicare") even as it allows the wealthiest 1% to continue receiving their Bush tax cuts - equal to one new Lexus each year, on average! Nowhere in any of the Repuke "solutions" or manifestos is there any faint mention of the one thing that would solve the nation's deficit problems most expeditiously: increasing TAXES!
Indeed, the Repukes have all signed "loyalty oaths" (compliments of anti-tax zealot Grover Norquist) that absolutely resist any plan to raise revenue by taxes. If the country were analogous to an overspending family, with the 'wife' (Repukes) mainly using up credit lines on credit cards, then the comparison would be the wife's refusing to work to earn more $$$ to pay for the credit, and instead vowing to cut the family's food, utility and medical budgets! How long do you think such a wife would last before being chucked out on her ass by a responsible hubby? Yet in our country, the Repukes are treated as if they're the next thing to financial wizards and sober stars!
Of course, part of the blame for this atrocity must go to the Democrats in Congress and the Senate. A more pliable, spineless bunch of weenies and wusses I've not seen in a while. Not only have most now agreed to vote with the Repukes on negating an increase in the debt ceiling, they are actually talking of making or increasing future cuts instead of demanding the Reeps include higher taxes as part of deficit reduction! Indeed, the word from the Denver Post article this morning (p. 5A, 'Republicans and many Dems oppose Bill') is that:
"After the vote fails, the focus will return to a bipartisan group of six congressional leaders who have been in private talks with Vice President Joe Biden to come up with a massive spending cut package and allow the debt ceiling to rise"
Can't the Dem knuckleheads and poseurs understand that, as Norquist himself once put it, "Bipartisanship means Democratic date rape to us Republicans!" and that by playing on this losing wicket they are unwittingly ceding the field and advantage to the Repukes? As opposed to demanding from those R-shit heads that they include TAX HIKES in any deficit reduction package? I mean, this is a no-brainer, or ought to be! There simply can't be a workable package that doesn't include tax increases! Allowing the Reeps to dictate puts all the Dem strategy into the proverbial crapper as well as ceding gravitas on the means for deficit reduction - which the idiotic corporate media will surely get wet dreams over.
How many fucking times must I cite reams of evidence that shows simple spending cuts can't work? I have cited examples from sources (e.g. Financial Times) in so many blogs now, it makes my head spin to recall them! As one Brit economist put it: "Claiming you can solve a deficit problem by using only spending cuts is like saying you can cut off your foot, and you will run faster!" - yet that's the detached reality we are left with because the weak-kneed Dems won't come up with their own solutions. As I wrote earlier, two of the best solutions for making Medicare solvent are allowing it to bargain for the lowest prescription drug prices, like the VA does, and eliminating Medicare Advantage plans - which were set up in 2003 precisely to bleed the standard program into insolvency! Yet not one fucking Dem I've seen or heard has mentioned either of these, leaving me to believe they are kowtowing to some corporate interest or other, most likely Big PhRMA.
As to the other reality detachment, that concerns the denial of climate change - global warming - now part of the mantras of all the GOP's illustrious contenders for the presidency. This, despite the fact that many of them actually had proposed changes to policy before getting into the race.
But now, as with Newt Gingrich, all that matters is that reality be sacrificed rather than upset the mindless Gooper-conservo masses, who only watch FAUX News and read comic books. Nevertheless, at least one Republican has the right take, former NY Rep. Sherwood Boehlert, who noted:
"Never in my life have I been so disappointed in the pretenders to the throne from my party"
Bingo! And this despite the fact the evidence is now overwhelming that the whole ice shelf of Greenland has been so affected by melting it's on the verge of collapse. (See: 'Greenland Poised on a Knife Edge', in The New Scientist, Vol. 209, No. 2794, Jan. 2011, p. 8) The article notes that just the 'break off' at the margins of the sheet, which is ongoing, is adding 300 gigatons of melted ice to the oceans each year. If the whole shelf collapses, it will raise global sea levels by 7 m (about 23 ft). Are any of the GOP's idiot candidates paying attention? Including to the fact that the acidity of the oceans is already 30% higher than at the start of the Industrial Revolution? I doubt it!
Sometimes, when I get this frustrated, I think that humans don't deserve to survive if they are so incapable of stewardship. At least so many. But then I do bear in mind that many are still fighting to make things right, and not just allow the knuckle-dragging buffoons and their lackeys to prevail. I just hope there are more of the former left than the latter, despite a recent poll showing only 49% of Americans now believe effects of global warming have begun, compared with 61% in 2008. Are the knuckle-draggers and dummies winning? Stay tuned!
Monday, May 30, 2011
Two years ago my dad, a World War II vet, passed away from pneumonia. Today, he's remembered not only for his war service (36 months in the Pacific Theater) but also for his steadfast raising of a large family. Dad's military combat was waged on two fronts: against the Japanese Empire in the Philippines and New Guinea on the one hand, and against malaria on the other - with no fewer than five hospitalizations. Even on being discharged from the Army in April of 1945, the after-effects of malaria remained and he'd often come down with severe chills. As we know, the malaria parasites are never finally eliminated but stay in the bloodstream over a lifetime.
As I recall Dad's sacrifices today, I also recall two of the last contacts I had with him, one in April of 2009 (in which he sent his last email from his email machine) and then for Father's Day, on the phone in June, 2009. His email (of April 19) lamented that his youngest son had 'gone off the rails' into hardcore fundamentalist Christianity and that his I-Net church website was trashing others in the family by use of a false self-righteousness. He expressly deplored the attacks he'd seen against my sister, as well as my mother, and to a lesser extent the de facto attacks by implication against him. He regarded ANY attacks against Catholicism as attacks against his own beliefs (especially as he'd converted to Catholicism from being a Southern Baptist). His final hope, at the end of the email, was that my hardcore fundie bro would give it a rest and realize that life is too short to "carry on" waging crusades even in the name of trying to "save" others. He himself seemed to finally realize and appreciate that salvation is a relative thing, and possibly for that reason refused to condemn his fundie son to Catholic Hell for abandoning the religion in which he was raised.
In the end, each will believe as he or she sees fit, and all efforts to undermine, shame, or intimidate others into one's own fold are doomed to failure. All one really accomplishes is alienation, hatred and further isolation. His one wish was that if he did pass away, we'd come together as a family, not pull farther apart via false causes, agendas or beliefs.
In my last contact with him on Father's Day, his voice was rasping and he appeared to sense the end might be near. We talked briefly about my latest book project, on the Kennedy assassination, and I mentioned that I had dedicated it to him. He expressed thanks and said he hoped he might get to see one draft. Alas, he passed before it could be sent to him.
These days, especially around Memorial Day and near his birthday (May 25), I often find myself going back to read his old emails from 2007-2009, which I have kept stored in my 'old' email folder. None of them are very long, except the one from April 19, 2009, when he expressed the hope that a son would soon find his way back to the light and family solidarity. The others were mainly recollections about past events, and current ones. In one, he inquired after my wife Janice's health and gave some advice on car repair after she was involved in a serious car collision in central Colorado (hit broadside by a reckless driver who went through a red light). In others he recounted assorted celebrations, including the most recent - for Xmas 2008 at the Port Charlotte Retirement Center, into which he and Mom had moved three years earlier.
Dad, who provided the center of gravity for the family (always sending out the greeting cards for each and every member), will be sorely missed. But always remembered, especially - on this day - for the extraordinary service he gave to his country.
Sunday, May 29, 2011
It was bad enough this week to see provisions of the "Patriot Act" which were due to expire extended again by a bunch of weak-kneed wussies in Congress, including the illustrious House "Tea Party" contingent - otherwise so noisy about their precious freedoms and defending them. Well, where the f*ck were these loudmouths when those extensions passed? Where were ALL of them, who we ordinarily see yapping about time-honored patriots but who are so cowed by the anti-terror shtick (like so many in the 50s were by Joe McCarthy's anti-commie crusade) that they give it cover and even funding? Don't these dildo-brains understand the security state is already over-extended? Don't they grasp that fundamental REAL rights, like the 4th amendment, are under assault by the Patriot Act?
It makes me wonder if any of our politicos ever read Benjamin Franklin's quote that: "Those who would sacrifice liberty for a temporary security deserve neither liberty nor security". And by the way, pardon me if I blow off anyone who claims "well, times are different now!" No, they are not! I lived through the fifties and early 60s when the most massive REAL threat to freedom was Soviet Russia, which had over 5,000 H-bomb armed ICBMs aimed at us. Never during that whole freaking time, even in the midst of the parlous Cuban Missile Crisis of 1962, were so many civil liberties just chucked like I've seen since 9/11.
Anyway, anyone who still believes all these laws are merely innocuous inconveniences needs to familiarize himself or herself with the case of one Scott Crow of Austin, TX. Crow is an activist, specifically a self-described veteran organizer of anti-corporate demonstrations.
He recently found, on requesting FOIA files from the FBI, that they'd been monitoring his activities for the past three years, including "tracing license plates of cars parked out in front, recording the comings and goings of residents and guests, and in one case speculating about the presence of a strange object on the driveway" (Denver Post, 'Texan's FBI File Reveals Domestic Spying', p. 12A, 5/29). Well, the strange object turned out to be a quilt that was made for an after-school program, according to Crow (ibid.). Crow also found that more than 440 heavily redacted pages were in his FBI file, many under the rubric of Domestic Terrorism.
Of course, this was exactly what many of us worried about when Congress - at least most of them - passed the misnamed "Patriot Act" in 2002 without even reading its many provisions. We fretted that, given the open-ended definition of terrorism, just about any and everything would be included, and that might well extend to domestic protests, especially against corporations. Now we know how valid those concerns were! Crow himself commented on the extensive documents with mixed anger, astonishment and a degree of flattery that so much government energy could be expended on one little guy who lives in a ramshackle home with a wife, two goats, a dozen chickens and a turkey. According to Crow:
"It's just a big farce that the government created such a paper tiger. Al Qaeda and real terrorists are hard to find, but we're easy to find. It's outrageous that they would spend so much money surveilling civil activists....and equating our actions with Al Qaeda"
But recall this might not be that strange after all. It was Rachel Maddow, on her MSNBC show some weeks ago, who first brought to national attention (after OBL's killing) that his intent was never to kill Americans so much as to make us spend ourselves into bankruptcy. Maddow's analysis included evidence of the country's transformation to a national security state, as she cited the vast, increased cost of intelligence, not only for the CIA, but more than a doubling in personnel for the DIA (from 7,500 to 16,500). Additionally, the National Security Agency (NSA) doubled its budget, and the number of security-based organizations went from 20 in 2001 to 37 in 2002, then added 36 the next year, 26 the year after that, then 31 and 32 more, with 20 additional security organizations added in 2007, 2008, and 2009. Not said was that under the Patriot Act, ALL these agencies have compiled a single database that is cross-correlated, and they often work together, including the FBI.
All of this mind-boggling security and military infrastructure came at the cost of pressing domestic needs, including repairing crumbling civil infrastructure (bridges, roads, water and sewer mains, etc. - estimated cost from the Society of Civil Engineers: $1.7 trillion) as well as health care for nearly 50 million currently without it. Even now, certain harpies and miscreants aim their sights at Medicare for the elderly, one of the most miserly programs in terms of benefits compared with similar programs in other nations. And goons like Paul Ryan and his henchmen want to make it smaller yet!
The point is, with such a vast and over-extended, over-active security state, which gets on average $28 billion a year, ways must always be found to justify the subsidies. The ways, evident from Scott Crow's files, include domestic spying. Where the f*ck are the ultra-Patriot Tea Baggers in all this?
Or...are they all A-ok with a massive spy state that also protects corporations from civil protests, portrayed as "domestic terrorism"? According to Mike German, a former FBI agent now at the ACLU:
"You have a bunch of guys and women all over the country sent out to find terrorism. Fortunately, there isn't a lot of terrorism in many communities, so they end up pursuing people who are critical of the government".
This is bollocks, because one of the most fought-for rights of Americans, since Thomas Paine's fiery 'Common Sense', is to be critical of the government. And the day the citizen becomes fearful of doing that is the day we have tyranny returned. For when citizens are fearful of government, that's what one has; when government is fearful of the citizens, one has liberty. Strange that all the Tea-bagging Repukes didn't recall that when they passed the extension for the Patriot Act. Even The Financial Times has noted in an editorial ('Terror and the Law', May 16, p. 8) that it's time to put in place time limits. As they note:
"Congress needs to put time limits on the post 9/11 powers. Failure to do so in that first sweeping authorization was a dereliction of duty....Emergency powers were justified after 9/11 but allowing them in perpetuity is wrong"
We agree, and would also add that if there aren't enough genuine terrorists for the assorted agencies to go after and monitor, and they find they have to pursue innocent Americans exercising their free speech rights, then it's time to exact massive cuts in security funding for all agencies. Or at least, in direct proportion to the actual threat! We also expect the Teepees to get on board with this, or declare themselves ignominious HYPOCRITES!
It's absolutely ludicrous as well as hilarious to behold certain fundies going off on the Quelle or Q source tradition (e.g. as "Satanic"), while they accept a bible (KJV) that has absolutely NO objective validation as any unique, sacred source! One just has to scratch his head in wonderment and awe at the chutzpah it takes to readily ignore the immense deficits in an entire BOOK, while carping at a coherent assemblage of Yeshua's sayings that is claimed to form a commonality of source for at least two of the NT gospels (Matthew and Luke).
As noted in my earlier blog (before the last), textual analysis by all reputable sources recognizes Q as a putative collection of Yeshua's sayings which doesn't exist independently (e.g. as a specific text) but rather can be parsed from the separate gospels, such as Mark. Germane to this Q tradition is how - when one applies textual analysis to the books, gospels - one can unearth the process whereby the orthodox (Pauline) Church worked and reworked the sayings to fit them into one gospel milieu or another. None of this is mysterious, nor does the basis require any "objective proof" (a real howler, since the whole historic process of biblical book selection and compilation has been subjective!), since the disclosure of the Q (which, as I indicated, is more a TRADITION than an explicit text) isn't based on a direct, isolated manuscript but rather is distilled from the comparison of numerous similar passages in differing NT sources.
Thus, one can employ this template to derive a plausible timeline: for example, The Gospel of Mark appears to have committed the sayings to paper about 40 years after the inspiring events, then Matthew and Luke composed their versions some 15-20 years after Mark.
ALL SERIOUS biblical scholars accept this timeline; only hucksters of holiness, pretenders and pseudo-scholars do not! Again, I advise those who wish a genuine scholarly insight, as opposed to pseudo-insight, to avail themselves of Yale University's excellent Introduction to the New Testament course by Prof. Dale B. Martin. Of particular import is his lecture 'The Historical Jesus', which clearly shows the editing in John to make it conform to orthodoxy.
Now, the choice before people is whether to accept the basis and findings of a highly accomplished scholar in the field, teaching at one of the nation's premier Ivy League universities, or the word of someone belonging to an online bible school with an "I-net" church. I think the choice is a no-brainer, but then that is writing as one who's actually taken a real course in textual analysis (including languages used in translation) from a real university!
But let's get back to the blind spot inhering in these Q-tradition carpers, which I also exposed (to do with their King James Bible) in the blog before last. As I showed in that blog, their accepted KJV fails on all three critical tests for authenticity: a) no major re-translations or re-doings, b) no major omissions or deletions, c) a consistency with what the earliest original (e.g. Greek) translations (say in the Greek Septuagint) allowed, with no major contradictions.
As I showed, the current KJV failed criterion (a) by being a compromise translation to try to bridge the gap between the Puritans and the Church of England. Thus, two distinct translations, call them X and Y, were jury-rigged to give some mutant single translation, call it X^Y, belonging to neither. It's something like taking the head of a cow and grafting it onto a decapitated bull and saying you now have an authentic cow-bull. Actually it's more like cow bullshit! Making matters worse, the translators were told to preserve as much as possible of the Bishop's Bible of 1568 (then the official English Bible). The translators were also granted wide latitude in how they specifically formed different translations of the text, in many cases being allowed to use the Geneva Bible and some other versions "when they agree better with the text" in Greek or Hebrew. This "mixing and matching" process is believed by many experts (e.g. Geza Vermes) to have been responsible for many of the more blatant contradictions that have emerged, viz. in answer to the question posed 'Are unsaved sinners eternally tormented?':
(a) YES (Isa 33:14; Mt 13:40-42, 25:41,46; Mk 9:43-48; Jude 6-7; Re 14:10-11)
(b) NO (Eze 18:4; Mt 7:13, 10:28; Lu 13:3,5; John 3:15-16; Ac 3:23; 1Co 15:18; 2Th 2:10; Heb 10:39; 2Pe 3:7,9)
This is a huge divide, and a serious blotch on the integrity of the KJV. Indeed, if such a fundamental question as "eternal torment" can't be properly addressed, how many other shibboleths will one find?
Meanwhile, the current KJV fails test (b) because we know from historical records (kept by the Anglican Church) that what eventually became the "King James Bible" by 1626-30 was in fact NOT the original, but rather 75% to 90% adopted from William Tyndale's English New Testament, published in 1526. This version was actually published in defiance of then English law - so it is amazing so much of it was then incorporated into the original KJV! Tyndale's tack was to render Scripture in the common language of his time to make it accessible even to a humble plow boy.
But this meant ignoring the originally published KJV and resorting to his own translations, basing his ms. on Hebrew and Greek texts. In so doing he'd defied an English law from 1401 that forbade the publication of any English book without Church of England permission. Tyndale got the last laugh, because a year after he was strangled for "heresy" in the Netherlands, King Henry VIII granted a license to a complete "King James Bible" that was more than three-fourths Tyndale's translation from his English New Testament! Thus, the current incarnation of the KJV is not the original translation adopted by the commission of King James I. Thus, the KJV also fails criterion (b).
What about test (c)? This was broken as soon as Tyndale's version was 75% adopted and the correlated parts of the earlier (King James I) ordained sections removed. More to the point, the KJV rendering of Matt. 25:46: "And these shall go away into everlasting punishment: but the righteous into life eternal" disclosed a total inconsistency with the earliest original (e.g. Greek) translations! Here in Matt. 25:46, the Greek for everlasting punishment is "kolasin aionion." Kolasin is a noun in the accusative form, singular voice, feminine gender, and means "punishment, chastening, correction, to cut off as in pruning a tree to bear more fruit." Meanwhile, "aionion" is the adjective form of "aion," in the singular form, and means "pertaining to an eon or age, an indeterminate period of time." But it does not mean eternal! (Critical examination discloses the Bible speaks of five "aions", minimum, and perhaps many more. If there were "aions" in the past, it must logically mean that each one of them has ended!)
Thus, a 'pick as you choose' process for the creation of the KJV, combined with inept and cavalier translations of the Greek Septuagint, obviously allowed huge errors to creep in, and Matt. 25:46 is an enormous one, given it's the sole place that refers to "everlasting punishment". So if this translation is wrong because of a cavalier Greek translation (of kolasin aionion), then everything to do with it goes out the window. Thus, the KJV fails all THREE tests for validity for an authentic bible!
We suggest that before certain "pastors" launch into more tirades against the Q tradition, they examine more completely their own bowdlerized and maimed bible, which is somewhat like a Frankenstein "dog", put together from the excavated carcasses of about ten different doggie cadavers! And not even today's resident geniuses who worship this Frankensteined monstrosity as the "final entity" can recognize which end is the head and which is the heinie! But hey, maybe as a biblical yardstick its "head" doubles as a heinie! Cream of KJV Bible soup, anyone?
The matter of the ultimate origin of life, addressed by the theory of Abiogenesis (which is often erroneously conflated with the theory of evolution), has been problematical for years. What is sought is a basic explanation for how fundamentally non-living matter could acquire the properties and attributes of life, including being able to reproduce. In principle this isn't that remarkable a stretch, since we already know there exist living entities at the "margins" - the viruses - which display no attributes of life until they become attached to a host. Once in a host, they can appropriate its cell machinery to churn out billions of copies of themselves. Evolution in such organisms is also no biggie. For example, consider a point mutation in a Type A flu virus.
Here, a minuscule substitution of nucleotide bases yields a virus imperceptibly different in genetic structure from a predecessor. This is a case of microevolution brought about by mutation. A new 'flu vaccine must be prepared to contain it. The most that flu vaccines achieve is keeping the selection or s-value fairly constant for a majority of influenza viruses, while not entirely eliminating the associated gene frequencies. Hence, yearly vaccines only attempt to reduce the most virulent strains, such as 'Type A flu', to the most minimal equilibrium frequency. Total elimination is impossible because there are always new gene mutations of the virus to assume the place of any strains that have been eliminated. At the same time, the ongoing enterprise of preparing new flu vaccines is an indirect acknowledgement of microevolution in the flu virus. Amazingly, there are many tens of thousands of uneducated people who actually don't believe such examples qualify as bona fide evolution! It's as if these forlorn people can't process that the success of natural selection is inextricably bound to the fitness (w) and the selective value, s, e.g. via: w = 1 - s.
Meanwhile, we know there are pleuro-pneumonia-like organisms, or PPLOs for short. The PPLO is about as close to the theoretical limit of how small an organism can be. Some figures clarify this. It has about 12 million atoms and a molecular weight of 2.88 million Daltons. Compared to an amoeba, it weighs about one billion times less.
Now, in a remarkable find published in The New Scientist (Vol. 209, No. 2794, p. 11), two investigators, Kunihiko Kaneko and Atsushi Kamimura, have made a breakthrough in devising a testable model that is able to replicate the Abiogenesis process. The two basically solved the problem of how a lipid-coated protocell can divide into two (displaying reproduction) when the genetic material replicates. Recall in an earlier blog where I showed the hypothetical protocell reactions wherein a self-sustaining coacervate droplet can use one or two basic reactions involving adenosine triphosphate (ATP) and adenosine diphosphate (ADP):
L*M + R + ADP + P -> R + L + M + ATP
ATP + X + Y + X*Y -> ADP + X*Y + X*Y + P
In the above, L*M is some large, indeterminate, energy-rich compound that could serve as 'food'. Whatever the specific form, it's conceived here to have two major parts capable of being broken apart to liberate energy. Compound R is perhaps a proteinoid or lipid-coated protocell, but in any case able to act on L*M to decompose it. The problem with this earlier hypothesis was that such lipid-coated protocells lack the machinery to allow for easy division. Kaneko and Kamimura solved this by taking their inspiration (for their model) from living things in which DNA and RNA code for proteins and the proteins catalyse replication of the genetic material. This goes back to biochemist Jacques Monod's concept that the organism is a self-constructing machine: its macroscopic structure is not imposed upon it by outside forces; instead, it shapes itself autonomously by dint of constructive internal (chemical) interactions. Thus, in the Kaneko-Kamimura model one has a self-perpetuating system in which a cluster of two types of molecules catalyse replication for one another while also demonstrating rudimentary cell division. In the Kaneko and Kamimura model, as with DNA, the genetic material replicates much more slowly than the other cluster molecules but also takes longer to degrade, so it enables lots of the other molecule to accumulate.
Following replication of the heredity carrier, the copies drift apart while the molecules between them break down, automatically creating two separate entities. This is an exciting breakthrough, but some further investigations are needed, specifically ways to circumvent the problem that (in real life) membrane lipids around an RNA molecule don't typically catalyse RNA replication. However, this isn't insurmountable, because all one need do (theoretically) is replace the lipids with hydrophobic peptides. We look forward to further work done by Kaneko and Kamimura, as well as others in the microbiology field, working at the forefront of Abiogenesis.
Saturday, May 28, 2011
In some religious blogs it's become fashionable to debate whether the Bible or a Church (as a generic "body of Christ") emerged first. On the side of the former are mainly fundamentalists who, while they may know how to cite chapter and verse, are oblivious to historical facts. They claim that while a formal Bible may not have existed ab initio, there were still coherent sayings of Yeshua that prefigured a later "correct, truthful" bible known as the King James. In fact, I will show this is all codswallop that uses specious arguments, including retroactive claims, to make a spurious case. In the meantime, those who argue that a single Church existed aren't aware that in fact dozens of differing Christian sects competed, with only one prevailing: an orthodox form pushed by Paul of Tarsus.
First, let's get to the claim of an accurate compilation of Yeshua's sayings into a claimed original "bible" text that would later evolve into the King James version. Of interest is the tradition designated as "Q" or "Quelle" (the German for "source"). Textual analysis recognizes Q as a collection of Yeshua's sayings which doesn't exist independently (e.g. as a specific text) but rather can be parsed from the separate gospels, such as Mark. Germane to this Q tradition is how - when one applies textual analysis to the books, gospels - one can unearth the process whereby the orthodox (Pauline) Church worked and reworked the sayings to fit them into one gospel milieu or another. One can also derive a plausible timeline: for example, The Gospel of Mark appears to have committed the sayings to paper about 40 years after the inspiring events, then Matthew and Luke composed their versions some 15-20 years after Mark. Finally, as noted in an earlier blog on this, John was actually an original GNOSTIC gospel that was reworked to conform to the Catholic Orthodoxy and added some 50-75 years after Matthew and Luke. (Again, if one knows Greek, one can easily spot the multiple edits in John that transmute its content from a Gnostic view to an orthodox Catholic one.)
Here is where contradictory arguments emerge (conflating Church and bona fide Bible existence), because some fundies have insisted that "early church councils" adopted rigorous "principles" to determine whether a given New Testament book was truly inspired by the Holy Spirit. These are generally listed as criteria to meet for explicit questions:
1) Was the author an apostle, or did he have a close connection with an apostle?
2) Was the book being accepted by the Body of Christ at large?
3) Did the book contain consistency of doctrine and orthodox teaching?
4) Did the book bear evidence of high moral and spiritual values that would reflect a work of the Holy Spirit?
This is a noble effort, but it actually blows up in the collective faces of the fundies (who later argue against an orthodox Catholic purist take) because at the end of the day they rely on the criteria of a religion-church that they really can't accept! This is a tricky point, so it needs to be explained in the historical context.
The fundamental problem is that whenever one refers explicitly to "early church councils" in antiquity, there can be one and only one meaning: the councils of the Catholic Church, since those were the sole ones then existing. Thus, fundies who inadvertently (or desperately?) invoke "principles" or codes for NT acceptance demanded by "early church councils" are in fact conferring benediction on the CATHOLIC, PAULINE ORTHODOXY. Thus, they are unwittingly validating the Catholic process for separating wheat from chaff in terms of which books, texts were acceptable and which weren't!
A more honest and logical approach would be to simply argue that no official "Church" or religion existed then that was bequeathed special status by Christ, and that the acceptance of this or that text was under highly unique guidelines independent of "early church councils". Those guidelines would then be provided. Ideally, these criteria will be truly independent from those ordained by the RC Church's councils, which the fundies reject as a "harlot of Babylon". In any case, to be faithful to history, the new principles would have to also be disclosed in the book the fundies most revere: the King James version. But is this even feasible?
For logical consistency, and to be coherent with any proposed (later) doctrine, the claim would only stand if the final revered product (the current KJV) had not been severely compromised or altered such that it lost content or context. This would require: a) no major re-translations or re-doings, b) no major omissions or deletions, and c) a consistency with what the earliest original (e.g. Greek) translations (say in the Greek Septuagint) allowed, with no major contradictions.
Now, let's examine each of these in turn. As for (a), we do know from the extant historical records that the KJV originated when James VI of Scotland (who came to be King James I of England in 1603) commissioned an enclave of experts at Hampton Court near London in 1604 to arrive at a compromise translation to try to bridge the gap between the Puritans and the Church of England. Thus, already we see that a compromise was injected into the mix, for the existing documents of the OT, NT. We also know the assigned objectives were: the translation of the Old Testament from Hebrew, and the New Testament from Greek, to be undertaken and respectively assembled by no fewer than 47 translators in 6 committees working in London, Oxford and Cambridge. The final results emerged seven years later, in 1611.
Back to the project: the translators were all instructed not to translate "church" as "congregation", and to preserve as much as possible the Bishop's Bible of 1568 (then the official English Bible). The translators were also granted wide latitude in how they specifically formed different translations of the text, in many cases being allowed to use the Geneva Bible and some other versions "when they agree better with the text" in Greek or Hebrew. This "mixing and matching" process is believed by many experts (e.g. Geza Vermes) to have been responsible for many of the more blatant contradictions that have emerged and which fundies are unable to explain away no matter how much they try.
For example, in answer to the question posed 'Are unsaved sinners eternally tormented?':
(a) YES (Isa 33:14; Mt 13:40-42, 25:41,46; Mk 9:43-48; Jude 6-7; Re 14:10-11)
(b) NO (Eze 18:4; Mt 7:13, 10:28; Lu 13:3,5; John 3:15-16; Ac 3:23; 1Co 15:18; 2Th 2:10; Heb 10:39; 2Pe 3:7,9)
This is a huge divide, and a serious blotch on the integrity of the KJV. Indeed, if such a fundamental question as "eternal torment" can't be properly addressed, how many other shibboleths will one find? Another example of the skewed process appears in the KJV rendering of Matt. 25:46:
"And these shall go away into everlasting punishment: but the righteous into life eternal"
Here, the Greek for "everlasting punishment" is "kolasin aionion." Kolasin is a noun in the accusative form, singular voice, feminine gender, and means "punishment, chastening, correction, to cut off as in pruning a tree to bear more fruit." Meanwhile, "aionion" is the adjective form of "aion," in the singular form, and means "pertaining to an eon or age, an indeterminate period of time." But it does not mean eternal! (Critical examination discloses the Bible speaks of five "aions", minimum, and perhaps many more. If there were "aions" in the past, it must logically mean that each one of them has ended!)
Thus, a 'pick as you choose' process for the creation of the KJV obviously allowed huge errors to creep in, and Matt. 25:46 is an enormous one, given it's the sole place that refers to "everlasting punishment". So if this translation is wrong because of a cavalier Greek translation (of kolasin aionion), then everything to do with it goes out the window. Thus, the KJV fails test (a) for logical consistency.
What about (b), i.e. no major omissions or deletions? Again, we know from historical records (kept by the Anglican Church) that what eventually became the "King James Bible" by 1626-30 was in fact NOT the original, but rather 75% to 90% adopted from William Tyndale's English New Testament, published in 1526. This version was actually published in defiance of then English law - so it is amazing so much of it was then incorporated into the original KJV! Tyndale's tack was to render Scripture in the common language of his time to make it accessible even to a humble plow boy. But this meant ignoring the originally published KJV and resorting to his own translations, basing his ms. on Hebrew and Greek texts. In so doing he'd defied an English law from 1401 that forbade the publication of any English book without Church of England permission. But Tyndale got the last laugh, because a year after he was strangled for "heresy" in the Netherlands, King Henry VIII granted a license to a complete "King James Bible" that was more than three-fourths Tyndale's translation from his English New Testament! Thus, the current incarnation of the KJV is not the original translation adopted by the commission of King James I. Thus, the KJV also fails criterion (b).
Now what about (c), a consistency with the earliest original (e.g. Greek) translations? I already showed this was broken as soon as Tyndale's version was 75% adopted and the correlated parts of the earlier (King James I) ordained sections removed. More to the point, I gave the specific example of how the earliest Greek text for the meaning of "kolasin aionion" (punishment for an age) was destroyed and altered to "everlasting punishment". Thus, the original bond was already destroyed - perhaps in the 'mix and match' translation process permitted by King James I in his commission of scholars!
Thus, the current KJV fails all three logical tests for authenticity, and hence can't possibly be the basis for any erstwhile "biblical church" or any founding document, period.

Now, what of the claim for an "early Church" itself? Is there such a thing? The answer is 'No!'. While it is true that Christ said "Thou art Peter and upon this rock I will build my Church", that can be interpreted in more than one way. All the evidence shows that the earliest conglomerate or "congregation" of followers wasn't a formal "church" by any standard, but a polyglot group with shared beliefs and a shared outlook. I would argue that this group never evolved to become a formal Church, and that the latter didn't appear until the Edict of Milan was signed in 313 A.D. under the Emperor Constantine Augustus.

The problem with the Edict of Milan is that the then Christians essentially made a pact with "the Devil", i.e. signed on to a deal with the Emperor that would allocate state religion status to the Christians (no more persecutions!), but at the cost of sharing that stage with the Emperor's own Sol Invictus (Sun worship) religion. Thus the choice of December 25 for the nativity, since at that time that date was nearest the Winter Solstice, or the 're-birth of the Sun' (when the Sun reaches its lowest declination and begins its apparent journey northward on the ecliptic, leading to longer days). Thus, the "church" codified in 313 A.D. was in fact an artifact of the original community called Christian, much as the current KJV is an artifact of the original book called the King James Version of the Bible.

Bible or church-based Christianity? As I said, a false dichotomy. The best plan for people, if committed to a spiritual existence and an authentic relationship to whatever that means, is to toss out both church and bible and live without the graven images of either.

Friday, May 27, 2011

As another $690 BILLION defense spending bill wends its way through Congress, you can lay 100,000-to-1 Vegas odds that it will get passed with few problems or rider amendments. There are too many key Senators, as well as Reps, who now depend for their political livelihood on military spending. Translated: they depend on ordinary American taxpayers to keep pushing military pork to their communities while thousands of other communities endure continued infrastructure decay. This is a damned disgrace, and what's more, in a parlous financial environment in which we'll soon need to raise the national debt ceiling, it is unconscionable!

We need to get our miserable asses out of Afghanistan, and we need to do it this summer, not by 2014! We don't have the freaking money - even borrowed from the Chinese - to continue with that bullshit.

Now, out of the mouths of 'babes' one finds similar sentiments expressed, as in a letter published in yesterday's Denver Post from high school student Abigail L. Cooke. Just when you thought 99% of young people only had their eyes and brains tethered to social media like Facebook, along comes a surprise, and it is a heartening one. Abigail wrote:

"Fifteen trillion dollars in debt and counting. Every year our government spends billions on defense, leaving insufficient funds for important things like education. This year, almost $30 million will be cut from my school district’s education budget. That means fewer teachers, and even less arts programs for students like myself. But why? Where will all of this money go?
Probably to help further fund our involvement in Afghanistan or Iraq — invasions costing more than $1 trillion when combined. Imagine if just one billion of those dollars were applied to our school districts here in Colorado. That would mean more after-school programs, more teachers, smaller classes, instruments, music and instructors for music programs that our nation’s students are desperate for. It’s time we start putting our money where we truly need it. It’s time for America to start raising scholars, not soldiers."

This is an excellent letter! It shows this teen's priorities are on much more solid ground than superficial personal concerns. Indeed, her vision and insight would put nearly all politicos to shame. The only small error she committed in her letter was underestimating the disgraceful costs of the occupations of Iraq and Afghanistan, which will come to more like $3.3 trillion when all is said and done, and all the returned vets' hospital treatments and therapies must be paid for by the taxpayers. But otherwise, she's nailed it. She's shown (and she knows) we have our priorities all screwed up in this country. Our whole domestic tapestry is unravelling as we continue to stubbornly involve ourselves in nations which have no more respect for us - irrespective of the gazillion bytes of PR churned out each day.

Even current Secretary of Defense Robert Gates has admitted defense spending cuts must be made, though he seems to take issue with President Obama's conservative $400 billion in proposed cuts over 12 years. In fact, this is ridiculous. The cuts ought to be more like $400 billion per YEAR! (Especially given that the DoD budget has effectively doubled since 2000.) Just pulling our asses out of both Iraq and Afghanistan and leaving the idiotic nation-building behind would more than achieve that in the next three years. Cut most of the dumb, money-squandering armaments (including 'cloaked' helicopters which, while cool, aren't essential unless you're always into violating other nations' sovereignty via pre-meditated raids, etc.) and you get even bigger savings.

I also totally disagree with Gates' assessment that "Americans would face tough choices" in a number of decisions, such as which weapons systems to eliminate, and the size of fighting units. Look, this is a no-brainer! We are already overstretched across the freaking globe in nearly 44 countries. Do we have to be cop of the world? I don't think so! Nor can we afford that role in our debt environment. As for fighting units, do we really need to continue to maintain bases in Japan, Germany and S. Korea? Last time I checked, all those nations had formidable forces that could more than take care of themselves.

Gates also whined in his last address: "A smaller military, no matter how superb, will be able to go fewer places and do fewer things."

SO WHAT? As I said, we don't need to be scattered in 44 nations across the globe! (See Chapter Five: The State of the American Empire: How the U.S. Shapes the World.) We need to be taking care of our own mammoth country and its PEOPLE, left ignored the past decade, and especially the crumbling infrastructure: roads, sewers, water mains, bridges... which is a much bigger threat to our security than some phantom bad guys someplace. We have to get it into our fat heads that we can't police and patrol the planet. This is foolishness.
Gates did also say that however vast the defense spending (the Pentagon approved the largest-ever defense budget in February), it "was not the major cause of the nation's fiscal problems". However, he also added in the same breath that it was "nearly impossible to get accurate answers" to where the money has been spent, and how much. Good god, man! If you can't get accurate answers, there's no telling how much they're pissing away! And let's not forget that we still haven't turned up the $1.1 TRILLION that the Pentagon "misplaced" around 1999-2000. This, of course, was well documented by former defense analyst Chuck Spinney in a memorable PBS interview with Bill Moyers in August, 2002. Spinney also pointed out that if money is given via legislation but never accounted for to the GAO, then the Pentagon itself becomes an unaccountable and unelected agent that undermines democracy.

Spinney is also known for a September 2000 Defense Weekly commentary, in which he called the move to increase the military budget from 2.9% to 4% of GDP "tantamount to a declaration of total war on Social Security and Medicare in the following decade." Well, he wasn't off on that one!

It is time our politicians and representatives got that into their heads, and began now with massive cuts to tame the country's over-inflated military empire. Let's not forget it wasn't so much 'barbarians at the gate' that brought Rome to ruin, but military overstretch, which all its taxes and property seizures could no longer pay for. They say those who forget the past are doomed to repeat it. Let's hope it's not too late to learn!

John F. Kennedy, had he lived, would likely have had ready a torrent of “I told you so’s”, in terms of the parlous and vexing state this nation finds itself in from too many entanglements of only marginal value to our actual national security. He’d also have expressed anger that for all the warnings he delivered about “enforcing peace with American weapons of war” (his famous Pax Americana speech at American University, Washington, in June 1963), nothing sank in, and President after President never heeded his advice, each preferring to remain hostage to the military-industrial complex.

Though JFK wasn’t so prescient in a specific context, his American University speech did generically prefigure the horrific consequences if the U.S. insisted on being the policeman of the world, enforcing American terms of peace (via the noxious document NSC-68) with American weapons of war. This speech probably set the foundation for Kennedy’s later plan (under National Security Action Memorandum 263) to pull out of Vietnam (after the 1964 elections, when political blowback would be minimal). He could likely see that if the U.S. remained in Vietnam, the perils of a much wider war - along with consolidation of the military-industrial-oil complex - would be unavoidable.

Alas, JFK was assassinated, and LBJ invoked NSAM-273 to repeal JFK's NSAM-263, and with the phony firing on the Maddox and Turner Joy by the N. Vietnamese, we were in for a penny, in for a pound: 58,000 killed, and $269 billion in costs. One would have thought we'd have learned from 'Nam, but the phony Iraq intervention showed we forgot it all.
Thursday, May 26, 2011

As the discussion continues unabated about whether Wednesday's New York special election result was an indictment of Paul Ryan's Nazified plan to deny healthcare to seniors (I refuse to dignify it by calling it a "Medicare plan", far less a plan to "save Medicare"), it is well to go back into recent history and what life was like before Medicare came onstream in 1966. To put it succinctly: life for the aged in this country was generally nasty, brutish and short - with few options or appeals to assistance if one became seriously ill.

Much of this is detailed in several chapters of the Oxford University Press monograph One Nation Uninsured, by Jill Quadagno, which gets to the bottom of why there is such massive political aversion to any kind of genuine health care coverage in this country which doesn't drag in the profit motive. Medicare is discussed because it putatively paved the way to at least get a 'leg in the door' on the all-but-closed-shop capitalist insurance front, stiffly guarded by the likes of the AMA.

By ca. 1960 there were some 19 million citizens over age 65, and some 185,000 physicians (p. 69). The options for medical care, however, were sparse, though socially insured medicine had been in the pipeline since the Truman administration. By 1960 the most serious countermeasure to it was the Kerr-Mills plan - which basically confined assistance only to the "aged poor". By 1963 only 28 states had adopted any of it, and barely 148,000 seniors were covered. Again, this out of a total population of 19 million seniors.

Was Kerr-Mills a terrific deal? Hell no! In many states that established a Kerr-Mills format there still weren't the funds to finance it. This meant the older person and his immediate family had to be put on the hook for the money allocated to any care. Thus 12 states had 'family responsibility' provisions (pp. 60-61) which "effectively imposed means tests on relatives of the aged, deterring many poor, elderly people from applying for support."

The author cites as an example the state of Pennsylvania (p. 61), wherein "the elderly had to provide detailed information on their children's finances to qualify for Kerr-Mills". Meanwhile, in New York many seniors actually withdrew their applications on learning their children would be involved and would have to cough up any extra money for care the state couldn't cover (ibid.). This also meant the adult children had to appear before state boards and answer direct questions concerning their assets and liabilities, how much mortgage they owed - if any - as well as bank account balances. Most seniors, naturally, weren't able to tolerate this level of humiliation and scrutiny of their offspring, not to mention being held liable. In other states under Kerr-Mills, the elder person had to sign away the deed to his home (if he owned one) to pay for all medical bills upon his or her decease.

THIS is what life was like for the elderly in the days before Medicare. What about seniors not among the "aged poor"? Their only option was a miserly private insurance policy with huge costs and meager benefits. Policies typically covered "only a portion of hospital costs and no medical care" (p. 61).
This left the elderly with 67-75% of all expenses to absorb, which puts such plans about on the same level as Paul Ryan's scheme (the GAO estimates seniors would have to pay at least 2/3 of all expenses), if the latter is ever introduced - say if the GOP gains all three branches of government next year, which would be a nightmare come true! As an example, "Continental Casualty and Mutual of Omaha provided only $10 a day for hospital charges and room and board, less than half the average cost". Of course NO prescription drugs were covered; all had to be paid for out of pocket. The average commercial plan - even given this miserliness - was quite expensive, ranging from $580 to $650 a year, when the median elder income was only $2,875. Thus, nearly 25% of income was gobbled up by first-tier medical costs, which could easily expand to 50-75% of income if a number of prescriptions were needed, plus medical treatments and hospitalizations.

Quadagno also notes (p. 62) that "insurers typically skimmed off the younger, healthier elderly", thereby forcing premiums ever higher. Thus "many elderly were priced out of the market entirely". In many cases the consequences were horrific, as illustrated by the case of an 80-year-old granny who allowed an umbilical hernia to go untreated, with the result that it ultimately protruded 18 inches out of her belly and eventually ruptured, and she bled to death on the kitchen floor while baking a cake (p. 63). Nor were such events exceptional.

Again, there is no reason this couldn't happen under Ryan's plan, and there'd be every incentive for insurance companies to skim the healthier (and younger) seniors as in pre-Medicare days, and zero incentive not to. After all, with no government mandate for providing care, why should the profit-oriented insurance companies put themselves on a downward treadmill, or "losing wicket" as we call it in Barbados? They wouldn't, if they had any grain of sense. Without a mandate or order from the government, you can also bet your sweet bippy they'd reject any elderly person with a pre-existing condition. This would be the proverbial no-brainer for them!

Thus, by the time JFK proposed a government health plan linked to Social Security, in 1960, America's seniors were more than ready. More than ready to stop being parasitized by commercial outfits, or humiliated by the likes of states under the odious Kerr-Mills plan. The main opponents were the AMA, which (p. 68) "ran newspaper ads and TV spots declaring Medicare was socialized medicine and a threat to freedom", and blowhards like Ronnie Reagan, who made idiotic recorded talks trying to scare people by asserting (ibid.): "One of the traditional methods of imposing statism on a people has been by way of medicine".

Fortunately, most seniors who'd actually experienced the dregs of capitalist medical bestiality didn't buy this hog swill. They organized under groups like the National Council of Senior Citizens and turned the tables by imposing relentless pressure on representatives (the most intransigent of whom were Southern Democrats, whom LBJ finally had to confront and read the 'riot act'). Eventually, the opposing voices were muted and Medicare was passed.

Let's hope enough older voters become aware of this history before they remotely allow any plan like Paul Ryan's to take them back to the conditions of 1960!
Before moving on to more First Law considerations, heat capacity, and specific heat capacity, we look at the solution of the problem at the end of the previous blog (Basic Physics, Part 12):

(a) The external work done is W = P(V2 - V1) = 1.01 x 10^5 Pa (0.375 - 0.250) m^3, so W = 1.26 x 10^4 J, or about 12,600 Joules.

(b) delta U = n Cv,m (delta T) = 10 (20.2 J/mol K)(T2 - T1), where n = 10. For (T2 - T1), we first note that T1 = 27 C = 273 + 27 = 300 K, and we then need to find the higher temperature T2. Since for an isobaric process V ~ T (P = const.), we have V2/V1 = T2/T1, or T2 = (V2/V1) T1 = (300 K) x (0.375 m^3)/(0.250 m^3) = 450 K. Then (T2 - T1) = (450 K - 300 K) = 150 K, so: delta U = 10 (20.2 J/mol K)(150 K) = 30,300 J.

(c) Heat applied = delta Q = n Cp,m (delta T) = 10 (28.5 J/mol K)(150 K), so delta Q = 42,750 J.

Let's now go back to reiterate and summarize aspects of the First Law of Thermodynamics, by first noting the types of processes one can obtain under which conditions, given delta U = Q - W (another way to express the law, with delta U as subject):

(i) Adiabatic process (Q = 0, so that delta U = -W)
(ii) Isobaric process (P = constant)
(iii) Isovolumetric process (V = constant)
(iv) Isothermal process (T = constant)

Other important aspects to note in applying the 1st law:

(a) The conservation of energy statement of the 1st law is independent of path, i.e. (Q - W) is completely determined by the initial and final states, not intermediary states. Example: say a gas goes from initial state S(i) with P(i), V(i) to final state S(f) with P(f), V(f); then one finds that (Q - W) is the same for all paths connecting S(i) to S(f).

(b) Q is positive (Q > 0) when heat enters the system, and

(c) W is positive (W > 0) when work is done BY the system, and vice versa.

Now, on to heat capacity! This is a generic as opposed to specific quantity, defined via the heat that must be transferred to produce a change in temperature: Q = C(T2 - T1). The specific heat capacity is c = C/m, where m is the mass. Then C = mc and Q = mc(T2 - T1). We also saw already: C' = C/n (= Cp,m or Cv,m), where C' is the molar heat capacity. So Q = nC'(T2 - T1).

The heat capacity has interesting applications apart from prosaic, terrestrial ones. For example, since space is a near-vacuum, m ~ 0 and hence C = mc ~ 0, so little or no thermal capacity exists. What this means is that energy from the Sun (via radiation) can be transferred through space without appreciably heating space. Space is 'cold' not because it absolutely 'lacks heat' but because its density (of particles, hence mass) is too low to have much quantity of heat, or 'thermal capacity'.

What about in the vicinity of Earth? Similar arguments apply. The higher one is above the Earth, the lower the thermal capacity of the medium - so the lower the amount of heat that can be retained, or measured. The lower in altitude one goes, the greater the number of air particles, and the greater the retention of heat - especially if water vapor is also included (since water has a large specific heat capacity). What happens is that the radiant energy (mainly from the infrared spectral region) transfers kinetic energy to the molecules of the atmosphere, thereby raising its internal energy (per molecule, U = 3kT/2). This internal energy, together with the thermal capacity of the air (C), is what enables us to feel warmth. Conversely, the relative absence of both at higher altitudes makes us feel colder.
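For readers who want to re-check the arithmetic in parts (a)-(c) above, or experiment with other volumes and mole numbers, here is a minimal Python sketch of the same isobaric calculation. The numbers are exactly those of the problem; nothing else is assumed.

```python
# Isobaric expansion of an ideal gas: a check of parts (a)-(c) above.
P = 1.01e5               # constant pressure, Pa
V1, V2 = 0.250, 0.375    # initial and final volumes, m^3
n = 10                   # moles of gas
Cv_m, Cp_m = 20.2, 28.5  # molar heat capacities, J/mol K
T1 = 273 + 27            # 27 C = 300 K

W = P * (V2 - V1)          # (a) external work done by the gas
T2 = T1 * (V2 / V1)        # V ~ T at constant P, so T2 = 450 K
dU = n * Cv_m * (T2 - T1)  # (b) increase in internal energy
dQ = n * Cp_m * (T2 - T1)  # (c) heat supplied

print(W, T2, dU, dQ)  # 12625.0 J, 450.0 K, 30300.0 J, 42750.0 J
# Q and (delta U + W) agree only to within the rounding of the tabulated
# heat capacities: Cp,m - Cv,m = 8.3 here, vs. R = P*dV/(n*dT) = 8.42.
```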
Specific heat capacity can be measured in simple lab experiments. The apparatus (shown in the original graphic) consists of an outer calorimeter jacket, an inner calorimeter cup, a thermometer inserted through the calorimeter cap, and a metal sample for which we seek the specific heat capacity - call it c(x).

The practical procedure is then straightforward. Assuming a mass of water (m_w), say 200 g, and a mass of calorimeter (m_cal) which must take the inner cup into account, we can find what we need using a basic heat conservation equation:

heat lost by hot substance = heat gained by cold substance + heat gained by calorimeter

Then let the unknown metal (mass m_x) be heated to 100 C, then deposited in the 200 g of water at temperature 20 C in the calorimeter cup (which must be weighed, of course). Let the calorimeter be of mass 0.1 kg and made of copper, for which we take the specific heat capacity c_Cu = 400 J/kg K. Then we have:

-m_x c(x)(T - T_x) = m_w c(w)(T - T_w) + m_cal c_Cu (T - T_w)

This assumes no net heat loss, and also that the initial temperatures of the calorimeter and the water are the same, e.g. T_w = 20 C. Obviously then, if the unknown metal x is heated to 100 C and dropped into the calorimeter, heat will be gained by both the water and the calorimeter, even as heat is lost from the specimen. Now, if c(w) is known to be 4,200 J/kg K, we ought to be able to work out the unknown specific heat capacity c(x) if, say, the mass m_x is known. Such calorimetric experiments are extremely important since they show several principles at once.

Example problem: For the experimental layout described, let the final temperature attained by the water + calorimeter be 25 C. Obtain the unknown specific heat capacity c(x) if m_x = 0.2 kg.

Then we have T = 25 C, T_w = 20 C and T_x = 100 C. We also have all other quantities, so we can obtain c(x). (Let us also bear in mind here that differences in Celsius degrees = differences in Kelvin degrees.) Then we may write:

c(x) = [m_w c(w)(T - T_w) + m_cal c_Cu (T - T_w)]/[m_x (T_x - T)]

Substituting the measured values of the data:

c(x) = [0.2 kg (4200 J/kg K)(5 K) + 0.1 kg (400 J/kg K)(5 K)]/[0.2 kg (75 K)]

c(x) = [4200 J + 200 J]/15 kg K = 4400 J/15 kg K ≈ 293 J/kg K

(most likely an alloy of copper and silver; for comparison, c(Ag) = 234 J/kg K). A short computational check of this example appears after the practice problems below.

(1) Let 5 million calories of solar energy be absorbed by 2 cubic meters of hydrogen gas 100 km above the Earth. If the particle density of the gas is 10,000 atoms per cubic meter, estimate the heat capacity of the gas volume. (Atomic mass of a hydrogen atom: 1 u ~ 1.6 x 10^-27 kg.)

(2) 10 lbs. of iron and 5 lbs. of aluminum - both at 200 F - are added to 10 lbs. of water at 40 F contained in a vessel whose thermal capacity is 0.5 Btu/deg F. Calculate the final temperature if c(Al) = 0.21 cal/g C and c(Fe) = 0.11 cal/g C. (Note: specific heat capacities have the same numerical value in calories per gram per degree Celsius as in Btu per pound per degree F.)

(3) A calorimeter and its contents have a total thermal (heat) capacity of C = 200 cal/deg C. A body of mass 210 g at temperature 80 C is placed in the calorimeter, resulting in a temperature increase from 10 C to 20 C. Compute the specific heat capacity of the body.
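As promised, here is a short Python check of the method-of-mixtures example above, using exactly the example's numbers (the variable names are just illustrative):

```python
# Method-of-mixtures check of the worked calorimetry example.
m_w, c_w = 0.2, 4200      # water: mass (kg), specific heat (J/kg K)
m_cal, c_cu = 0.1, 400    # copper calorimeter: mass (kg), specific heat
m_x = 0.2                 # unknown metal: mass (kg)
T_w, T_x, T = 20, 100, 25 # initial water/cal temp, metal temp, final temp (C)

# heat lost by metal = heat gained by water + heat gained by calorimeter
heat_gained = m_w * c_w * (T - T_w) + m_cal * c_cu * (T - T_w)
c_x = heat_gained / (m_x * (T_x - T))
print(round(c_x))  # ~293 J/kg K, as in the worked example
```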
Wednesday, May 25, 2011

Well, who'da thought? Democrat Kathy Hochul, running on a firm pro-Medicare stance, bested the (much better funded) Reep candidate Jane Corwin by 47% to 43% in New York's 26th congressional district special election.

The 'W' is being touted, as well it should be, as the first major successful test of the pro-Medicare stance vs. the horrific Medicare voucher plan espoused by Paul Ryan and his fellow numbskulls. What this win should also do is serve as a template by which to beat Repub heinie into oblivion next year, and perhaps take back the House and amass an even wider majority in the Senate. The point? The Reeps overplayed their hand, and now they need to be made to pay through their eyes, ears, nose and any other place for their hubris!

Of course, it wasn't ten minutes after the declared win before the inevitable whining began. Paul Ryan himself accused Democrats of distorting his proposal, averring: “If you can scare seniors into thinking their current benefits are affected, that’s going to have an effect. That’s exactly what happened here.”

But this is horse shit. The fact is that it's immaterial whether "current" benefits are affected or not. This was roundly exposed during the recent Easter recess, during which Ryan took his dog and pony show around Wisconsin trying to win support. It fell like a frickin' lead balloon! (Often amidst many catcalls, howls of derision, and screams of 'Liar!') The "plan" was roundly eviscerated in confab after confab held by Ryan in various venues. People in the assorted audiences, many of whom had actually run the numbers, asked Ryan pointedly how they were going to be able to afford to pay nearly two thirds of their own expenses out of pocket when many of their relatives were already struggling (in existing Medicare) to pay roughly 20%.

Ryan had no answers, other than to try the ploy of asserting that current seniors (55 and over) wouldn't be affected by the changes, only those not yet 55. But again, people were too smart for his palaver, and the attempt to split up elder interests didn't work. After some in the audience heckled him with cries of "Liar!" and "Bullshit!", others referenced how they didn't want their younger relatives (already having a hard time finding work for decent pay) to have this onerous monkey on their backs. From then on, you could see Ryan chastened and backing off.

The fact, still unable to be processed by the foolish, ideological Reeps, is that Ryan's Nazi-plan was dead in the water from the word 'Go!' - hell, even Newt Gingrich saw it! Anyone who's tried to obtain private health insurance when over the age of 60 knew the score. Private insurers simply weren't interested in insuring a group that was 5-6 times more likely to be ill or injured or need care than a 35-year-old. Well, they weren't interested unless you could cough up LOTS of moola: usually $850 or more a month, plus a high deductible of $5,000 a year and sometimes more. This is what one would get under the Ryan Medicare "plan", and that was assuming no pre-existing conditions - unheard of for most over-60-year-olds.

Thus, Kathy Hochul's constant refrain during her campaign - "Hey, they just hand you $8,000 to buy insurance, then send you on your way with a 'Good Luck!'" - is spot-on. THAT is exactly what would unfold in the Ryan scenario, and anyone who'd try to tell you differently is either a liar, an ignoramus or an idiot. Or maybe all three!

As Ms. Hochul noted, the advantage of standard government Medicare is that it mandates insurers MUST accept a person over 65 who is qualified (i.e. has worked at least 40 quarters, or is disabled), pre-existing conditions or not.
It also mandates certain price structures for operations and treatments, and keeps patient costs lower than would otherwise be the case. Even so, they're not insignificant. Starting from July, I will have to cough up around $3,300 a year, and that is assuming no major operations or interventions. So it's not a freebie.

Yes, as I've written until blue in the face, there are ways to ensure Medicare's long-term solvency, and they don't require draconian spending cuts - especially coupled with preserving atrocious tax cuts for the wealthiest, like Paul Ryan wants to do. They require only moderate changes, like enabling the government to bargain for the lowest prescription prices like the VA does, or, if that can't work, allowing the import of lower-cost Canadian drugs. And if PhRMA squeals like stuck pigs, tell them to fuck off. Also, one can eliminate the 'Medicare Advantage' plans, which spend $12 billion more a year on average than standard Medicare. Further, the FICA limits can be increased to at least $250 grand, along with no more Bush tax cuts - for middle OR wealthy classes. All these in tandem can resolve the insolvency problem, but they require Dems especially to make the honest determination and run with it, as opposed to falling into the Reeps' spending-cut trap. If they're dumb enough to do that, all bets are off!

Hochul's win is devastating to the Repubs, but only if the Demos use it and don't find a way (next year) to yet again seize defeat from the jaws of victory. That means they not only must run on her same justified-fear (of the Ryan plan) template, but ALSO have the heart and courage to define, articulate and embrace Medicare changes that don't depend on massive spending cuts. I already listed them above; the question is whether enough Dems will have the cojones to embrace them.

In a blog just over a year ago, I cited a Skeptic Magazine article (Vol. 15, No. 2, 2009) by James Allen Cheyne, who made reference to a compendium of research which has shown an inverse correlation between religious belief and intelligence as measured by IQ. Cheyne observed (ibid.):

"Correlations between measures of intelligence and reported religious belief are remarkably consistent. Approximately 90% of all the studies ever conducted have reported that... as intelligence (as measured by IQ) goes up, religious belief goes down."

At the time I noted it didn't appear so fantastic a claim, based on the statistics he cited, coupled with the realization that anyone of just moderate IQ (105-115) should be able to see that talking snakes (as in the "Garden of Eden"), guys living in whales' bellies, and a man who can walk on water are all preposterous. No genuinely intelligent person could buy into any of these any more than a smart kid would buy into Santa Claus.

In more depth, Cheyne made reference to a particular type of thought he called ACH thinking - abstract, categorical and hypothetical - which appeared to be mostly missing from believers' thought, and which figures prominently on many IQ tests (such as the Raven's and Wechsler Similarities tests). Such tests feature many questions which construct an abstract hypothetical from a particular category, then ask the person to predict the consequences, if any. For example, some ACH-type questions would be:

1) If Venus and Earth were to exchange orbits, what (if anything) would happen as a consequence to each planet to change it from its current conditions?

2) If a hollow equilateral pyramid were "opened" up and spread out in two dimensions, how would it appear?
3) We observe the red shift of galaxy clusters and interpret cosmic expansion. What would we conclude if all galaxy clusters showed a blue shift - but only up to 1 billion light years distant and no more?

4) If the gravity on Earth were suddenly decreased by half, theorize how this would affect energy costs in two named modes of transportation.

5) Imagine a sphere turned inside out. How would it look in 3 dimensions? In two?

None of the above are particularly "easy", but neither are they too difficult for a person aware of basic facts (e.g. that Venus is already closer to the Sun than Earth by about 1/3). They do require the ability to abstract from the conditions of the facts to the given hypothetical, to infer the new situation, and to assess it. This is the very ability that Cheyne shows is missing as one examines results for religious believers.

At the time of the blog, the question of the origin of this IQ deficit in believers was mostly unanswered, but now there may be an empirical basis. (Particularly as Cheyne's largest IQ deficits were observed statistically in Christian Fundamentalists.) In a new study, completed at Duke University Medical Center and funded by the National Institutes of Health and the Templeton Foundation, it was found that Protestants who did not have a "born again" experience had significantly more gray matter than either those who reported a life-changing religious experience or unaffiliated (but still religious) adults. The study was based on at least two MRI measurements of the hippocampus region of 268 adults between 1994 and 2005.

Those identified as Protestant who did not have a religious conversion or born-again experience - more common among their evangelical brethren - had a bigger hippocampus, as did atheists who had no religious orientation, period. Also interesting is that those who professed a Catholic affiliation also had smaller brains, based on hippocampus size. (A putative comparison of brain scans was shown in an accompanying graphic, though not exactly to scale: the atheist and mainline Protestant scans had to be adjusted by longitudinal factors of about 1:11 and 1:14, respectively, relative to the fundy scan.)

Biologically, we know the hippocampus is an area buried deep in the brain that helps regulate emotion and memory. Atrophy or shrinkage in this region of the brain has long been linked to mental health problems such as depression, dementia and Alzheimer's disease. Damage, which may well be incepted by stress (say the stress of belonging to a minority group, as hypothesized by the researchers), may be one reason for the relative brain size deficit. But I believe a much more likely one (which will have to be tested and confirmed in the future with more detailed scans, say using PET - positron emission tomography - and SPECT - single photon emission computed tomography - imagery) is that long-term disuse of the memory centers (based in the hippocampus) leads inevitably to long-term decline (the average age of participants in the study was 58). In other words, "use it or lose it". If the believer constantly disavows facts in critical thinking, and instead of marshalling those facts - say in original thought - has a tendency to rely on a single book or bible to "do his thinking for him", then his brain won't develop the flexibility or capacity of thought needed to adapt, and it will lose mass (cells) over time, i.e. shrink.
This was already theorized as long ago as 1991 by Robert Ornstein in his Evolution of Consciousness. The same can apply to Catholics, also found to have shrunken hippocampi, because they will reject their own critical thought and factual (memory) application in favor of what the Pope or Vatican says. They will also tend to uncritically accept "saints", miracles and other bilge and folderol as replacements for reality. In each of these instances there will also plausibly be recurring failures in taking specialized tests (or IQ tests) which contain a large number of abstract, categorical and hypothetical (e.g. ACH) questions.

Obviously, more research and supporting tests need to be conducted, but it seems likely that at least the initial findings comport well with James Allen Cheyne's findings of lower IQs for believers, especially fundies. This ought to tell these folks that there is something deleterious to the brain in holding fast to 2,000-year-old sayings (most butchered and bowdlerized) from sheep-herding, semi-literate, and scientifically pre-literate nomads!

As I perused the Milwaukee Journal Sentinel Online several days ago, one story caught my eye regarding 'Retirees Underestimate Health Costs'. The piece mentioned how too many grossly underestimate the out-of-pocket costs that will face them, even with Medicare. I posted a comment observing that this shows the Republicans are out of it with their Medicare repair plans, since the out-of-pocket costs will be MUCH higher (as with Ryan's plan). Also, the Journal Sentinel piece gave the lie to the widely circulated Repuke myth that Medicare is basically a "freebie". No, it certainly is not!

But on scanning many of the other comments I was astounded to behold one after the other bearing essentially the same refrain, which might be summarized as follows: Well, anyone coulda told the feds that none of these entitlements would be sustainable! People, seniors are just gonna have to SAVE more and work longer!

Oh yeah? Says WHO? Most of these folks, so righteous in their anti-entitlement mentality, have no clue at all, not one in a million, that an American jobless future is already upon us and will not be getting any easier anytime soon. So once more, it's time to pull back the heavy lids of delusion and open some brains to the stiff sunlight of reality! (Hoping that Jack Nicholson's famous refrain from the movie 'A Few Good Men' - "You can't handle the TRUTH!" - won't apply to any of my readers.)

At least two recent extensive articles, one in TIME and another in The Economist, shed much-needed light, along with a recently released working paper for the Council on Foreign Relations authored by Michael Spence and Sandile Hlatshwayo. The latter paper specifically warned that "growth and employment are set to diverge for decades in the U.S.".

What does this mean exactly? It means that for decades to come, and perhaps forever, economic growth as measured by GDP and employment will be decoupled. Whereas before - much before! - more workers meant more economic growth, that will no longer apply. Now, LESS workers - or should I say, LESS AMERICAN workers - will translate into higher GDP.

The jobless future is here, but actually it has been with us for some time! As far back as 1995, The Wall Street Journal noted the 'Million Missing Men' in an article by the same name, estimating that one million workers aged 55 and over were absent from the work force. They had evidently been downsized, then vanished.
However, not really! They simply maintained low profiles and, after searching for decently remunerative work, gave up and dropped out. Many lived off their wives' earnings, but many others lived off savings and investments of their own, or took odd jobs to just keep their heads above water as they reduced their consumption dramatically - and maybe lived with a relative or friend.

In its own article ('Decline of the Working Man', p. 75, April 30th), The Economist observes: "Of all the rich Group of Seven economies, America has the lowest share of 'prime age' males in work: just over 80% of those aged between 25 and 54 have a job, compared to 95% in 1995."

Not mentioned, but often noted in assorted AARP Bulletins, are the 45% of those over 55 who have no job. Not even part-time work. Even the 80% working figure given by The Economist is somewhat overblown, because from the latest BLS stats and census data barely half of those 80% have full-time jobs. The rest are underemployed in part-time jobs, often patching two or three pissant-pay jobs together to make ends meet. As authors William Wolman and Anne Colamosca (The Judas Economy: The Triumph of Capital and the Betrayal of Work) have noted, the effect of chronic underemployment, especially among those over 50, is just as pernicious as long-term unemployment. It means, for example, that a large swath of people will not be able to save enough to support any kind of retirement scenario and will depend almost exclusively on Social Security. It is these, of course, for whom Medicare will be most critical for survival, and for whom the injunction to "work more and save more" becomes a cruel joke at the behest of a clown or moron.

The Economist, as good a journal as it is, unfortunately gets the base causes for this situation totally wrong, which is mystifying. It insists, for example, that many Americans have "let their schooling slide", meaning that they often haven't revamped their technical skills or trained for new fields. But this is false. Many reports (including a special series in The Denver Post some four years ago) noted how people in Denver - let go after the tech bust - had retrained, only to find their new jobs sent out to India because of lower wages and few or no benefits. The series also supported Jeremy Rifkin's thesis (in The End of Work) that high-tech and white-collar redundancy would follow that for lower-skilled workers.

In the computer-tech domain this is exactly what has happened. While we do see the occasional piece bragging about Google hiring 12,000 new workers, say in California, the mainstream media leaves unreported how many hundreds of thousands of computer-tech jobs are dispatched to Bangalore or Delhi in a given year. The Post series noted that youngsters planning college aren't stupid either; having seen swaths of good computer jobs dispatched offshore (including perhaps their parents' jobs), they aren't that convinced a computer science degree will get them very far anymore. Nor are they willing to gamble (leaving university with debts in the tens of thousands of dollars) that they'll nail a Google job by beating 100,000-to-1 odds. Hence, more and more are turning to medical tech and health services. But even those jobs will be terminated or never emerge if the Republicans manage to overturn Obama's Affordable Care Act!
It is precisely because up to 35 million more patients will be added by 2014 that those medical care jobs have a potential to materialize - but not if the legislation is torpedoed by Repuke bean counters who are penny-wise and pound-foolish!

Meanwhile, the TIME piece (May 20, p. 36) notes two elements that appear to have been only superficially covered by The Economist, and which are playing new roles in engendering an American jobless future:

1) The re-definition of productivity: that is, productivity is now defined by "cutting jobs and finding ways of making the same products with fewer people". As pointed out by the author (F. Zakaria): "At many major companies profits have returned to 2007 levels but with many thousands fewer workers".

2) The force of globalization: making a single market for many goods and services which require neither American workers for production nor American consumers as buyers. This single market amounts to more than 400 million people having entered the global labor force, from China, India, South Africa, Indonesia and elsewhere - all now with money to spend, and all willing to work at one third or less the pay of an American, and for NO benefits!

Both of these are ominous forces, and any American who still insists on wearing rose-colored glasses has only himself to blame if blind-sided. It is clear that these coupled forces will continue with no imminent signs of abating, unless some hidden, unfactored counterforce causes one or both to halt. In the case of (2), the most likely source of a halt would be a massive energy crisis (maybe induced by Peak Oil arriving very quickly with its worst manifestations) or perhaps a pandemic like bird flu or another "1918 Spanish Flu" (that virus was recently re-engineered using frozen tissues extracted from Eskimos who died of it and were found encased in ice). But even here, the costs inflicted on the global markets would likely be every bit as parlous as those inflicted on Americans. The most probable result would be that everyone loses, including Americans.

The only remaining hope is to persuade American companies to begin to re-hire American workers for decent-paying jobs with benefits, not merely McJobs which (at an average remuneration of $18,000) simply won't allow people to save enough to avoid depending almost entirely on "entitlements". (Indeed, Walmart has consistently advised its workers who can't afford its health plans to try to sign on to Medicaid.) The question is how to entice them to create the jobs, and the only way I see is much higher corporate taxation (with zero loopholes) if they don't.

Of course, another option is to resurrect the 700,000 public service jobs cut by Reagan during the hysteria over "big bad government". We can recall that a goodly set of these were air traffic controller jobs, a loss we're still paying the price for. And while pundits laugh and make jokes about sleeping controllers, no one wants to go near the real reason: too few experienced controllers on the job (numbers necessitated by America's antiquated airline route system, as noted in an earlier blog). Then there is the massive infrastructure repair needed - an effort that could easily employ a skilled army of public workers for YEARS, building new bridges, water and sewer mains, and highways. But in the deficit-obsession era, no one wants to go anywhere near this either! So we just allow our infrastructure - the backbone of our nation - to degenerate into 3rd-world status.
Apart from these possible major influences or checks on the current 'jobless productivity' dynamic, most Americans face an extremely impoverished and bleak future. Which makes it even more critical that the remaining social support systems, including Social Security and Medicare, be ferociously protected against any further weakening - either by Nazified and brutish tea-bagging Repukes, or by pussified, wussified Demos afraid of their own shadows and desperately needing backbone transplants, as well as a pointed reminder of the PEOPLE their party used to represent before too much corporate cash was infused via political campaign contributions!

Having examined how heat and mechanics are related, and specifically how Newton’s laws give rise to the basic kinetic theory for ideal gases, we now enter the realm of thermodynamics proper. We begin by further inquiring into the link between kinetic theory and temperature, which we already saw:

PV = 2/3 N(½mv^2)

This can be compared to what is called the “empirical equation of state for an ideal gas”:

PV = NkT, where k is the Boltzmann constant.

Equating the two:

2/3 N(½mv^2) = NkT

and solving for the temperature T:

T = (2/3k)(½mv^2)

which discloses the direct link between temperature and the microscopic behavior of a gas. Thus, temperature T is indeed a direct measure of the average molecular kinetic energy of a gas. We can also write this as:

3kT/2 = (½mv^2)

Now, the total translational kinetic energy (for all N molecules of a gas) is just:

E = N(½mv^2) = N(3kT/2) = 3nRT/2

where we have replaced k using k = R/N_A, with R the molar gas constant and N_A the Avogadro number:

R = 8.3 J/mol K

Now, with these preliminaries out of the way, we can explore further the First Law of Thermodynamics (introduced in the two earlier blogs), now in terms of the experimental setup shown: an apparatus consisting of a gas confined by a movable piston, such that when heat (Q) is added using the Bunsen burner, external work W = P(V2 – V1) is done in expanding the contained gas and hence pushing the piston upward. From the First Law:

delta Q = delta U + delta W, or: delta Q = delta U + P(V2 – V1) = delta U + P(delta V)

Note again that delta U includes translational, rotational and vibrational kinetic molecular energies. The pressure itself is P = F(a)/A, where F(a) > mg and A is the area of the piston.

Now, consider the experiment in the context of keeping the pressure P constant; then we say the process is isobaric. We also allow that it is reversible - in other words, just as I can increase Q to do work to expand the gas, so also I can reduce Q (by lowering the heat of the burner) to let the gas contract. Given n moles of an ideal gas (taking n = const.), we can write:

delta Q = n Cp,m (delta T), where Cp,m = delta Q/[n(delta T)] is the molar heat capacity at constant pressure.

Then for an ideal gas, taking only V and T as changing:

PV = nRT -> P(delta V) = nR(delta T)

so delta Q = delta U + delta W becomes:

n Cp,m (delta T) = n Cv,m (delta T) + nR(delta T), where delta U = n Cv,m (delta T).

Canceling out the delta T's:

Cp,m = Cv,m + R, or: R = Cp,m - Cv,m

In other words, the molar gas constant R is the difference between the molar specific heat capacity at constant pressure and the molar specific heat capacity at constant volume.

Example: One mole of a gas has a volume of 0.0223 cubic meters at a pressure P = 1.01 x 10^5 N/m^2 at 0 degrees Celsius.
If the molar heat capacity at constant pressure is 28.5 J/mol K, find the molar heat capacity at constant volume, Cv,m.

We have PV = nRT, so R = PV/T = [(1.01 x 10^5 Pa)(0.0223 m^3)]/273 K. (Note that 0 C = 273 Kelvin (K), and for pressure, 1 Pa (Pascal) = 1 N/m^2.) Then R = 8.3 J/mol K, and:

Cv,m = Cp,m – R = (28.5 – 8.3) J/mol K = 20.2 J/mol K

Problem for ambitious readers: 20 g of hydrogen gas (so n = 20 g / 2 g/mol = 10 moles), initially at 27 C, is heated at a constant pressure of 101 kPa (kiloPascals), so that its volume increases from 0.250 m^3 to 0.375 m^3. Find:

a) the external work done in the expansion

b) the increase in the internal energy U

c) the quantity of heat supplied (Q) to achieve the expansion.
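A quick numerical check of the worked example above in Python (values straight from the example; note R comes out to 8.25, which rounds to the 8.3 used in the text):

```python
# Check of the molar heat capacity example above.
P = 1.01e5    # pressure, Pa
V = 0.0223    # volume of one mole at 0 C, m^3
T = 273       # temperature, K
Cp_m = 28.5   # molar heat capacity at constant pressure, J/mol K

R = P * V / T        # molar gas constant from the ideal gas law
Cv_m = Cp_m - R      # since Cp,m - Cv,m = R
print(R, Cv_m)       # ~8.25 J/mol K, ~20.2 J/mol K
```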
Jordan curve theorem

In topology, a Jordan curve is a non-self-intersecting continuous loop in the plane; another name for a Jordan curve is a simple closed curve. The Jordan curve theorem asserts that every Jordan curve divides the plane into an "interior" region bounded by the curve and an "exterior" region containing all of the nearby and far away exterior points, so that any continuous path connecting a point of one region to a point of the other intersects that loop somewhere. While the statement of this theorem seems to be intuitively obvious, it takes quite a bit of ingenuity to prove it by elementary means. More transparent proofs rely on the mathematical machinery of algebraic topology, and these lead to generalizations to higher-dimensional spaces.

The Jordan curve theorem is named after the mathematician Camille Jordan, who found its first proof. For decades, it was generally thought that this proof was flawed and that the first rigorous proof was carried out by Oswald Veblen. However, this notion has been challenged by Thomas C. Hales and others.

Definitions and the statement of the Jordan theorem

A Jordan curve or a simple closed curve in the plane R2 is the image C of an injective continuous map of a circle into the plane, φ: S1 → R2. A Jordan arc in the plane is the image of an injective continuous map of a closed interval into the plane. Alternatively, a Jordan curve is the image of a continuous map φ: [0,1] → R2 such that φ(0) = φ(1) and the restriction of φ to [0,1) is injective. The first two conditions say that C is a continuous loop, whereas the last condition stipulates that C has no self-intersection points.

Let C be a Jordan curve in the plane R2. Then its complement, R2 \ C, consists of exactly two connected components. One of these components is bounded (the interior) and the other is unbounded (the exterior), and the curve C is the boundary of each component. Furthermore, the complement of a Jordan arc in the plane is connected.

Proof and generalizations

Let X be a topological sphere in the (n+1)-dimensional Euclidean space Rn+1, i.e. the image of an injective continuous mapping of the n-sphere Sn into Rn+1. Then the complement Y of X in Rn+1 consists of exactly two connected components. One of these components is bounded (the interior) and the other is unbounded (the exterior). The set X is their common boundary. This is proved by induction in n using the Mayer–Vietoris sequence: the zeroth reduced homology of Y has rank 1, which means that Y has 2 connected components (which are, moreover, path connected), and with a bit of extra work one shows that their common boundary is X.

A further generalization was found by J. W. Alexander, who established the Alexander duality between the reduced homology of a compact subset X of Rn+1 and the reduced cohomology of its complement. If X is an n-dimensional compact connected submanifold of Rn+1 (or Sn+1) without boundary, its complement has 2 connected components.

There is a strengthening of the Jordan curve theorem, called the Jordan–Schönflies theorem, which states that the interior and the exterior planar regions determined by a Jordan curve in R2 are homeomorphic to the interior and exterior of the unit disk. In particular, for any point P in the interior region and a point A on the Jordan curve, there exists a Jordan arc connecting P with A and, with the exception of the endpoint A, completely lying in the interior region.
An alternative and equivalent formulation of the Jordan–Schönflies theorem asserts that any Jordan curve φ: S1 → R2, where S1 is viewed as the unit circle in the plane, can be extended to a homeomorphism ψ: R2 → R2 of the plane. Unlike Lebesgue's and Brouwer's generalization of the Jordan curve theorem, this statement becomes false in higher dimensions: while the exterior of the unit ball in R3 is simply connected, because it retracts onto the unit sphere, the Alexander horned sphere is a subset of R3 homeomorphic to a sphere, but so twisted in space that the unbounded component of its complement in R3 is not simply connected, and hence not homeomorphic to the exterior of the unit ball.

History and further proofs

The statement of the Jordan curve theorem may seem obvious at first, but it is a rather difficult theorem to prove. Bernard Bolzano was the first to formulate a precise conjecture, observing that it was not a self-evident statement, but that it required a proof. It is easy to establish this result for polygonal lines, but the problem came in generalizing it to all kinds of badly behaved curves, which include nowhere differentiable curves, such as the Koch snowflake and other fractal curves, or even a Jordan curve of positive area constructed by Osgood (1903).

The first proof of this theorem was given by Camille Jordan in his lectures on real analysis, and was published in his book Cours d'analyse de l'École Polytechnique. There is some controversy about whether Jordan's proof was complete: the majority of commenters on it have claimed that the first complete proof was given later by Oswald Veblen, who said the following about Jordan's proof:

"His proof, however, is unsatisfactory to many mathematicians. It assumes the theorem without proof in the important special case of a simple polygon, and of the argument from that point on, one must admit at least that all details are not given."

However, Thomas C. Hales wrote:

"Nearly every modern citation that I have found agrees that the first correct proof is due to Veblen... In view of the heavy criticism of Jordan’s proof, I was surprised when I sat down to read his proof to find nothing objectionable about it. Since then, I have contacted a number of the authors who have criticized Jordan, and in each case the author has admitted to having no direct knowledge of an error in Jordan’s proof."

Hales also pointed out that the special case of simple polygons is not only an easy exercise, but was not really used by Jordan anyway, and quoted Michael Reeken as saying:

"Jordan’s proof is essentially correct... Jordan’s proof does not present the details in a satisfactory way. But the idea is right, and with some polishing the proof would be impeccable."

Jordan's proof and another early proof by de la Vallée-Poussin were later critically analyzed and completed by Schoenflies (1924).

Due to the importance of the Jordan curve theorem in low-dimensional topology and complex analysis, it received much attention from prominent mathematicians of the first half of the 20th century. Various proofs of the theorem and its generalizations were constructed by J. W. Alexander, Louis Antoine, Bieberbach, Luitzen Brouwer, Denjoy, Hartogs, Kerékjártó, Alfred Pringsheim, and Schoenflies. Some new elementary proofs of the Jordan curve theorem, as well as simplifications of the earlier proofs, continue to be carried out:

- A proof using the Brouwer fixed point theorem by Maehara (1984).
- A proof using non-standard analysis by Narens (1971).
- A proof using constructive mathematics by Berg, Julian, Mines, and Richman (1975).
- A proof using non-planarity of the complete bipartite graph K3,3, given by Thomassen (1992).
- A simplification of the proof by Helge Tverberg.

The first formal proof of the Jordan curve theorem was created by Hales (2007a) in the HOL Light system, in January 2005, and contained about 60,000 lines. Another rigorous 6,500-line formal proof was produced in 2005 by an international team of mathematicians using the Mizar system. Both the Mizar and the HOL Light proofs rely on libraries of previously proved theorems, so these two sizes are not comparable. Nobuyuki Sakamoto and Keita Yokoyama (2007) showed that the Jordan curve theorem is equivalent in proof-theoretic strength to the weak König's lemma.

- Berg, Gordon O.; Julian, W.; Mines, R.; Richman, Fred (1975), "The constructive Jordan curve theorem", Rocky Mountain Journal of Mathematics 5 (2): 225–236, doi:10.1216/RMJ-1975-5-2-225, ISSN 0035-7596, MR 0410701
- Hales, Thomas C. (2007a), "The Jordan curve theorem, formally and informally", The American Mathematical Monthly 114 (10): 882–894, ISSN 0002-9890, MR 2363054
- Hales, Thomas (2007b), "Jordan's proof of the Jordan Curve theorem", Studies in Logic, Grammar and Rhetoric 10 (23)
- Jordan, Camille (1887), Cours d'analyse, pp. 587–594
- Maehara, Ryuji (1984), "The Jordan Curve Theorem Via the Brouwer Fixed Point Theorem", The American Mathematical Monthly (Mathematical Association of America) 91 (10): 641–643, doi:10.2307/2323369, ISSN 0002-9890, JSTOR 2323369, MR 0769530
- Narens, Louis (1971), "A nonstandard proof of the Jordan curve theorem", Pacific Journal of Mathematics 36: 219–229, ISSN 0030-8730, MR 0276940
- Osgood, William F. (1903), "A Jordan Curve of Positive Area", Transactions of the American Mathematical Society (Providence, R.I.: American Mathematical Society) 4 (1): 107–112, ISSN 0002-9947, JFM 34.0533.02, JSTOR 1986455
- Ross, Fiona; Ross, William T. (2011), "The Jordan curve theorem is non-trivial", Journal of Mathematics and the Arts (Taylor & Francis) 5 (4): 213–219, doi:10.1080/17513472.2011.634320. author's site
- Sakamoto, Nobuyuki; Yokoyama, Keita (2007), "The Jordan curve theorem and the Schönflies theorem in weak second-order arithmetic", Archive for Mathematical Logic 46 (5): 465–480, doi:10.1007/s00153-007-0050-6, ISSN 0933-5846, MR 2321588
- Thomassen, Carsten (1992), "The Jordan–Schönflies theorem and the classification of surfaces", American Mathematical Monthly 99 (2): 116–130, doi:10.2307/2324180, JSTOR 2324180
- Veblen, Oswald (1905), "Theory on Plane Curves in Non-Metrical Analysis Situs", Transactions of the American Mathematical Society (Providence, R.I.: American Mathematical Society) 6 (1): 83–98, ISSN 0002-9947, JSTOR 1986378
- M.I. Voitsekhovskii (2001), "Jordan theorem", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- The full 6,500 line formal proof of Jordan's curve theorem in Mizar.
- Collection of proofs of the Jordan curve theorem at Andrew Ranicki's homepage
- A simple proof of Jordan curve theorem (PDF) by David B. Gauld
- Application of the theorem in computer science
- Determining If A Point Lies On The Interior Of A Polygon by Paul Bourke
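The theorem is also what makes the standard computational point-in-polygon test (the subject of the last external link above) well defined: a ray cast from a query point crosses a simple closed polygon an odd number of times exactly when the point lies in the interior component, and the parity is unambiguous precisely because the curve separates the plane into exactly two components. A minimal, illustrative Python sketch of this even-odd rule (the function name and test polygon are hypothetical, not taken from any of the cited sources):

```python
def point_in_polygon(x, y, vertices):
    """Even-odd (ray casting) test: count crossings of a horizontal
    ray from (x, y) with the polygon's edges. An odd count means the
    point is in the interior component; an even count, the exterior."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Edge straddles the ray's height, and crossing is to the right?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))   # True  (interior)
print(point_in_polygon(5, 2, square))   # False (exterior)
```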
http://en.m.wikipedia.org/wiki/Jordan_curve_theorem
Electrical impedance or simply impedance is a measure of opposition to a sinusoidal electric current. The concept of electrical impedance generalizes Ohm's law to AC circuit analysis. Unlike electrical resistance, the impedance of an electric circuit can be a complex number. Oliver Heaviside coined the term impedance in July 1886.

AC steady state

In general, the solutions for the voltages and currents in a circuit containing resistors, capacitors, and inductors (in short, all linearly behaving components) are solutions to a linear ordinary differential equation. It can be shown that if the voltage and/or current sources in the circuit are sinusoidal and of constant frequency, the solutions tend to a form referred to as AC steady state. Thus, all of the voltages and currents in the circuit are sinusoidal and have constant peak amplitude, frequency, and phase.

Let v(t) be a sinusoidal function of time with constant peak amplitude Vp, constant frequency f, and constant phase φ:

v(t) = Vp cos(2πft + φ)

where v(t) is the voltage as a function of time and Vp is the maximum (peak) amplitude of the voltage.

Now, let the complex number V be given by:

V = Vp e^(jφ)

V is called the phasor representation of v(t). V is a constant complex number. For a circuit in AC steady state, all of the voltages and currents in the circuit have phasor representations as long as all the sources are of the same frequency. That is, each voltage and current can be represented as a constant complex number. For DC circuit analysis, each voltage and current is represented by a constant real number. Thus, it is reasonable to suppose that the rules developed for DC circuit analysis can be used for AC circuit analysis by using complex numbers instead of real numbers.

Definition of electrical impedance

The impedance of a circuit element is defined as the ratio of the phasor voltage across the element to the phasor current through the element:

Z = V / I

Although Z is the ratio of two phasors, Z is not itself a phasor. That is, Z is not associated with some sinusoidal function of time. For DC circuits, the resistance is defined by Ohm's law to be the ratio of the DC voltage across the resistor to the DC current through the resistor:

R = V / I

where the V and I above are DC (constant real) values. Just as Ohm's law is generalized to AC circuits through the use of phasors, other results from DC circuit analysis, such as voltage division, current division, Thevenin's theorem, and Norton's theorem, also generalize to AC circuits.

Impedance of different devices

For a resistor, we have the relation:

v(t) = i(t) R

That is, the ratio of the instantaneous voltage and current associated with a resistor is the value of the DC resistance, denoted by R. Since R is constant and real, it follows that if v(t) is sinusoidal, i(t) is also sinusoidal with the same frequency and phase. Thus, the impedance of a resistor is equal to R:

Z_resistor = R

For a capacitor, we have the relation i(t) = C dv(t)/dt. Now let

v(t) = Vp cos(2πft + φ)

It follows that

i(t) = -2πfC Vp sin(2πft + φ)

Using phasor notation and the result above, we can write our first equation as:

I = jωC V, where ω = 2πf

It follows that the impedance of a capacitor is

Z_capacitor = 1 / (jωC)

For the inductor, we have:

v(t) = L di(t)/dt

By the same reasoning used in the capacitor example above, it follows that the impedance of an inductor is:

Z_inductor = jωL

Reactance

See main article: Electrical reactance

The term reactance refers to the imaginary part of the impedance. Some examples: a resistor's impedance is R (its resistance) and its reactance is 0; a capacitor's impedance is j(-1/ωC) and its reactance is -1/ωC; an inductor's impedance is jωL and its reactance is ωL.
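Since phasors reduce AC analysis to complex arithmetic, the three impedance formulas above can be checked numerically with ordinary complex numbers. A minimal Python sketch with illustrative component values (the helper names are assumptions, not a standard API):

```python
import math

def z_resistor(R):
    return complex(R, 0)          # Z_R = R (purely real, no phase shift)

def z_capacitor(C, f):
    w = 2 * math.pi * f           # angular frequency ω = 2πf
    return 1 / (1j * w * C)       # Z_C = 1/(jωC), negative imaginary

def z_inductor(L, f):
    w = 2 * math.pi * f
    return 1j * w * L             # Z_L = jωL, positive imaginary

f = 60.0                          # Hz (illustrative)
print(z_resistor(100.0))          # (100+0j)
print(z_capacitor(10e-6, f))      # about -265.26j: capacitive reactance
print(z_inductor(0.1, f))         # about +37.70j: inductive reactance
```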
It is important to note that the impedance of a capacitor or an inductor is a function of the frequency f and is an imaginary quantity; however, it describes a real physical phenomenon: the shift in phase between the voltage and current phasors caused by the capacitor or inductor. It was shown earlier that the impedance of a resistor is constant and real; in other words, a resistor does not cause a phase shift between voltage and current as capacitors and inductors do.

When resistors, capacitors, and inductors are combined in an AC circuit, the impedances of the individual components can be combined in the same way that resistances are combined in a DC circuit. The resulting equivalent impedance is, in general, a complex quantity. That is, the equivalent impedance has a real part and an imaginary part:

Z = R + jX

R is termed the resistive part of the impedance, while X is termed the reactive part of the impedance. It is therefore common to refer to a capacitor or an inductor as a reactance or, equivalently, a reactive component (circuit element). Additionally, the impedance of a capacitor is negative imaginary, while the impedance of an inductor is positive imaginary. Thus, a capacitive reactance refers to a negative reactance, while an inductive reactance refers to a positive reactance.

A reactive component is distinguished by the fact that the sinusoidal voltage across the component is in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. That is, unlike a resistance, a reactance does not dissipate power.

It is instructive to determine the value of the capacitive reactance at the frequency extremes. As the frequency approaches zero, the capacitive reactance grows without bound, so that a capacitor approaches an open circuit for very low frequency sinusoidal sources. As the frequency increases, the capacitive reactance approaches zero, so that a capacitor approaches a short circuit for very high frequency sinusoidal sources. Conversely, the inductive reactance approaches zero as the frequency approaches zero, so that an inductor approaches a short circuit for very low frequency sinusoidal sources. As the frequency increases, the inductive reactance increases, so that an inductor approaches an open circuit for very high frequency sinusoidal sources.

Combining impedances

Combining impedances in series, parallel, or delta-wye configurations works the same as for resistors. The difference is that combining impedances involves the manipulation of complex numbers.

In series

Combining impedances in series is simple:

Z_eq = Z1 + Z2

In parallel

Combining impedances in parallel is more involved than combining simple properties like resistance or capacitance, due to a multiplication term. In rationalized form, the equivalent impedance is:

Z_eq = Z1 Z2 / (Z1 + Z2)

See also Series and parallel circuits. A numerical sketch of these rules is given below.
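As a continuation of the hedged sketch above, the series and parallel rules are one line each in complex arithmetic (component values are again illustrative):

```python
import math

def series(*zs):
    # Z_eq = Z1 + Z2 + ... for impedances in series
    return sum(zs)

def parallel(*zs):
    # 1/Z_eq = 1/Z1 + 1/Z2 + ... for impedances in parallel
    return 1 / sum(1 / z for z in zs)

zr = complex(100, 0)                      # 100-ohm resistor
zc = 1 / (1j * 2 * math.pi * 60 * 10e-6)  # 10 uF capacitor at 60 Hz
print(series(zr, zc))    # (100-265.258...j): resistive and reactive parts
print(parallel(zr, zc))  # equivalent impedance of the same pair in parallel
```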
Circuits with general sources

Impedance is defined as the ratio of two phasors, where a phasor is the complex peak amplitude of a sinusoidal function of time. For more general periodic sources and even non-periodic sources, the concept of impedance can still be used. It can be shown that virtually all periodic functions of time can be represented by a Fourier series. Thus, a general periodic voltage source can be thought of as a (possibly infinite) series combination of sinusoidal voltage sources. Likewise, a general periodic current source can be thought of as a (possibly infinite) parallel combination of sinusoidal current sources.

Using the technique of superposition, each source is activated one at a time, and an AC circuit solution is found using the impedances calculated for the frequency of that particular source. The final solutions for the voltages and currents in the circuit are computed as sums of the terms calculated for each individual source. However, it is important to note that the actual voltages and currents in the circuit do not have a phasor representation. Phasors can be added together only when each represents a time function of the same frequency. Thus, the phasor voltages and currents calculated for each particular source must be converted back to their time domain representation before the final summation takes place.

This method can be generalized to non-periodic sources, where the discrete sums are replaced by integrals. That is, a Fourier transform is used in place of the Fourier series.

Magnitude and phase of impedance

Complex numbers are commonly expressed in two distinct forms. The rectangular form is simply the sum of the real part and the product of j and the imaginary part:

Z = R + jX

The polar form of a complex number is the product of a real number, called the magnitude, and another complex number of unit size, called the phase:

Z = |Z| e^(jθ)

where the magnitude is given by

|Z| = √(R² + X²)

and the angle is given by

θ = arctan(X / R)

Equivalently, the magnitude is given by

|Z| = √(Z Z*)

where Z* denotes the complex conjugate of Z: Z* = R - jX. A numeric sketch of these conversions follows below.
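The rectangular and polar forms above correspond directly to Python's complex type and its cmath module; a brief illustrative sketch (the sample impedance value is an assumption):

```python
import cmath, math

z = complex(100, -265.26)       # e.g. a resistor in series with a capacitor
mag, phase = cmath.polar(z)     # |Z| = sqrt(R^2 + X^2), angle = atan2(X, R)
print(mag)                      # about 283.5 ohms
print(math.degrees(phase))      # about -69.3 degrees: a capacitive phase angle
print(abs(z))                   # same magnitude via the built-in abs()
print(cmath.rect(mag, phase))   # converts back to rectangular form
```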
Peak phasor versus rms phasor

A sinusoidal voltage or current has a peak amplitude value as well as an rms (root mean square) value. It can be shown that the rms value of a sinusoidal voltage or current is given by:

V_rms = Vp / √2

In many cases of AC analysis, the rms value of a sinusoid is more useful than the peak value. For example, to determine the amount of power dissipated by a resistor due to a sinusoidal current, the rms value of the current must be known. For this reason, phasor voltage and current sources are often specified as rms phasors. That is, the magnitude of the phasor is the rms value of the associated sinusoid rather than the peak amplitude. Generally, rms phasors are used in electrical power engineering, whereas peak phasors are often used in low-power circuit analysis. In any event, the impedance is clearly the same whether peak phasors or rms phasors are used, as the scaling factor cancels out when the ratio of the phasors is taken.

Matched impedances

When fitting components together to carry electromagnetic signals, it is important to match impedance, which can be achieved with various matching devices. Failing to do so is known as impedance mismatch and results in signal loss and reflections. The existence of reflections allows the use of a time-domain reflectometer to locate mismatches in a transmission system.

For example, a conventional radio frequency antenna for carrying broadcast television in North America was standardized to 300 ohms, using balanced, unshielded, flat wiring. However, cable television systems introduced the use of 75-ohm unbalanced, shielded, circular wiring, which could not be plugged into most TV sets of the era. To use the newer wiring on an older TV, small devices known as baluns were widely available. Today most TVs simply standardize on 75-ohm feeds instead.

Inverse quantities

The reciprocal of a non-reactive resistance is called conductance. Similarly, the reciprocal of an impedance is called admittance. The conductance is the real part of the admittance, and the imaginary part is called the susceptance. Conductance and susceptance are not the reciprocals of resistance and reactance in general, but only for impedances that are purely resistive or purely reactive.

Analogous impedances

Electromagnetic impedance

In problems of electromagnetic wave propagation in a homogeneous medium, the intrinsic impedance of the medium is defined as:

Z = √(μ / ε)

where μ is the permeability and ε is the permittivity of the medium.

Acoustic impedance

In complete analogy to the electrical impedance discussed here, one also defines acoustic impedance, a complex number which describes how a medium absorbs sound by relating the amplitude and phase of an applied sound pressure to the amplitude and phase of the resulting sound flux.

Data-transfer impedance

Another analogous coinage is the use of impedance by computer programmers to describe how easy or difficult it is to pass data and flow of control between parts of a system, commonly ones written in different languages. The common usage is to describe two programs or languages/environments as having a low or high impedance mismatch.

Application to physical devices

Note that the equations above only apply to idealized devices. Real resistors, capacitors, and inductors are more complex, and each may be modeled as a network of idealized resistors, capacitors, and inductors. Rated impedances of real devices are actually nominal impedances: they are only accurate for a narrow frequency range and are typically less accurate at higher frequencies. Even within its rated range, an inductor's resistance may be nonzero. Above the rated frequencies, resistors become inductive (power resistors more so), while capacitors and inductors may become more resistive. The relationship between frequency and impedance may not even be linear outside of the device's rated range.

See also

- Antenna tuner
- Characteristic impedance
- Balance return loss
- Balancing network
- Bridging loss
- Damping factor
- Forward echo
- Harmonic oscillator
- Impedance bridging
- Impedance matching
- Log-periodic antenna
- Physical constants
- Reflection coefficient
- Reflection loss, Reflection (electrical)
- Return loss
- Signal reflection
- Smith chart
- Standing wave
- Time-domain reflectometer
- Voltage standing wave ratio
- Wave impedance
- Electrical reactance
- Nominal impedance
- Mechanical impedance

This page uses Creative Commons Licensed content from Wikipedia (view authors).
http://engineering.wikia.com/wiki/Impedance
At least with triangles we are back to straight edges. This means that there is nothing tricky about the perimeter and area formulas. We will, however, need to spend some time after that to review angles, as well as the properties of some special triangles.

The perimeter of a triangle is the sum of the lengths of the sides:

Perimeter = a + b + c

Cool fact: Any side of a triangle is always shorter than the sum of the other two sides. This is called the triangle inequality theorem. Think about it for a second. Imagine you have a ten foot piece of wood and two four foot pieces of wood. Could you connect them at their ends to make a triangle? No, it just wouldn't work. The four foot sections wouldn't be long enough. And you could have known this before trying, because 10 > 4 + 4.

Triangle Inequality: For any triangle with sides a, b, c: a < b + c

The area of a triangle is one half the length of the base times the height:

Area = ½ Base × Height

Students often ask us where that formula comes from. Picture a rectangle drawn around the triangle, with height equal to the triangle's height h and width equal to its base. The area of that rectangle is Base × Height. The triangle occupies exactly one half of the space of that rectangle, and so the area of the triangle is half the area of the rectangle.

Okay, that about does it for perimeter and area. Let's move on to some more facts about triangles. The vertices of a triangle (the points where the sides meet) form angles. The sum of the measures of the angles within a triangle is 180 degrees.

Interesting fact: If side x of a triangle is opposite a larger angle than side y, then side x is longer than side y. Try drawing some triangles and check this out.

There are some special triangles to consider. If a triangle has two equal sides, it is called an isosceles triangle. The angles opposite the two equal sides are equal (this follows from the interesting fact mentioned above). If a triangle has three equal sides, it is an equilateral triangle, and all the angles measure 60 degrees.

The most famous family of triangles are the right triangles. Because of the right angle, it is easy to find the area of a right triangle. Just turn it so that the right angle forms the base and the height, and then apply the formula Area = ½ b × h. The Pythagorean theorem helps you solve for the third side of a right triangle when you know two of the other sides. The theorem says that

a² + b² = c²

The 3-4-5 Right Triangle

And within the family of right-angled triangles, there are some even more special triangles. One of the nicest is the 3-4-5 right triangle:

5² = 3² + 4²
25 = 9 + 16

There are also the multiples of (3, 4, 5). That is, right triangles can come in the proportions (6, 8, 10), (12, 16, 20), etc. Always study a question with a right angle carefully to see if it contains a 3-4-5 triangle, or a triangle derived from one (i.e., one whose dimensions are scaled).

There are two other special right triangles. One is the isosceles right triangle, where the other two angles both measure 45 degrees. Here both these angles and the corresponding opposite sides are equal. In this triangle, the length of the hypotenuse equals the length of either leg times the square root of 2. Finally there is the 30-60-90 triangle, where the sides can also be expressed in convenient proportions. Keep an eye out for these triangles appearing either on their own, or as parts of more complicated geometrical figures. These checks are easy to automate, as the sketch below shows.
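As a quick illustration of the triangle inequality and the Pythagorean theorem discussed above, here is a minimal Python sketch (function names are illustrative, not from the tutorial):

```python
import math

def is_valid_triangle(a, b, c):
    # Triangle inequality: every side must be shorter than the sum of the others
    return a < b + c and b < a + c and c < a + b

def is_right_triangle(a, b, c):
    # Pythagorean theorem: a^2 + b^2 = c^2, with c the longest side
    a, b, c = sorted((a, b, c))
    return math.isclose(a * a + b * b, c * c)

print(is_valid_triangle(10, 4, 4))   # False: 10 > 4 + 4
print(is_right_triangle(3, 4, 5))    # True: 25 = 9 + 16
print(is_right_triangle(6, 8, 10))   # True: a scaled 3-4-5 triangle
print(0.5 * 4 * 3)                   # area of the 3-4-5 right triangle: 6.0
```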
http://www.justcolleges.com/tests/math-triangles-tutorial-for-college-test-preparation.htm
Graph of the tangent (tan) function - Trigonometry

The tangent of an angle is plotted against that angle measure. To graph the tangent function, we mark the angle along the horizontal x-axis, and for each angle, we put the tangent of that angle on the vertical y-axis. The result, as seen above, is a rather jagged curve that goes to positive infinity in one direction and negative infinity in the other.

In the diagram above, drag the point A around in a circular path to vary the angle CAB. As you do so, the point on the graph moves to correspond with the angle and its tangent. (If you check the "progressive mode" box, the curve will be drawn as you move the point A instead of tracing the existing curve.)

The domain of the tangent function has holes in it

As you drag the point A around, notice that after a full rotation about B, the graph shape repeats. The shape of the tangent curve is the same for each full rotation of the angle, and so the function is called 'periodic'. The period of the function is 360° or 2π radians. You can rotate the point as many times as you like. This means you can find the tangent of any angle, no matter how large, with one exception. If you look at the graph above you see that tan 90° is undefined, because it requires dividing by zero. Therefore, angles like this are not in the domain of the tan function and produce an undefined result. Try tan 90° on your calculator and you will get an error, whereas say 89.99 will work. So the domain of the tan function is the set of all real numbers except 90°, -90°, 270°, -270°, etc. (or the equivalent in radians: ±π/2, ±3π/2, etc.).

The range of the tangent function

The range of a function is the set of result values it can produce. The tangent function has a range that goes from positive infinity to negative infinity. To see why this happens, click on 'reset' and then drag point A counterclockwise. As it approaches the 90° point with AB nearly vertical, you can see that BC is getting very small. Since the tangent of an angle is "Opposite over Adjacent" (TOA), dividing a number by a very small number produces a very large one. Eventually, the side BC approaches zero and the result approaches infinity. A similar thing happens in the second quadrant, except that BC is then negative, and so the function approaches negative infinity. Infinity is not a real number, and so tan 90° is actually undefined. As the angle gets close to 90°, however, the function will return some very large numbers. For example, tan(89.999°) is over 57,000.

The inverse tangent function

What if we were asked to find the inverse tangent of a number, let's say 4.0? In other words, we are looking for the angle whose tan is 4.0. If we look at the curve on the right we see four angles whose tangent is 4.0 (red dots). In fact, since the graph goes on forever in both directions, there are an infinite number of angles that have a tangent of a given value. So what does a calculator say? If you ask a calculator to give the arc tangent (tan⁻¹ or atan) of a number, it cannot return an infinitely long list of angles, so by convention, it finds just the first one. But remember there are many more.
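The behavior described above is easy to reproduce numerically; a short Python sketch (printed values are approximate):

```python
import math

# tan grows without bound as the angle approaches 90 degrees
for deg in (45, 89, 89.9, 89.999):
    print(deg, math.tan(math.radians(deg)))
# 45 -> 1.0, 89 -> ~57.3, 89.9 -> ~573, 89.999 -> ~57296

# A calculator's arc tangent returns only the first (principal) angle...
angle = math.degrees(math.atan(4.0))
print(angle)                                 # about 75.96 degrees
# ...but any angle differing by a multiple of 180 degrees works too:
print(math.tan(math.radians(angle + 180)))   # also 4.0, up to rounding
```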
http://www.mathopenref.com/triggraphtan.html
First Grade Standards

Chinese Language Standards Gr 1: Novice level content and functions, including greetings, naming common objects, numbers, colors, and activities. Students will be able to sing some grade-appropriate songs and participate in age-appropriate games, recognize some key Chinese characters in picture books, and engage in cultural tasks including paper cutting, storytelling and ribbon dancing.

English Language Arts Standards » Reading: Foundational Skills » Grade 1

Print Concepts
- RF.1.1. Demonstrate understanding of the organization and basic features of print.
- Recognize the distinguishing features of a sentence (e.g., first word, capitalization, ending punctuation).

Phonological Awareness
- RF.1.2. Demonstrate understanding of spoken words, syllables, and sounds (phonemes).
- Distinguish long from short vowel sounds in spoken single-syllable words.
- Orally produce single-syllable words by blending sounds (phonemes), including consonant blends.
- Isolate and pronounce initial, medial vowel, and final sounds (phonemes) in spoken single-syllable words.
- Segment spoken single-syllable words into their complete sequence of individual sounds (phonemes).

Phonics and Word Recognition
- RF.1.3. Know and apply grade-level phonics and word analysis skills in decoding words.
- Know the spelling-sound correspondences for common consonant digraphs (two letters that represent one sound).
- Decode regularly spelled one-syllable words.
- Know final -e and common vowel team conventions for representing long vowel sounds.
- Use knowledge that every syllable must have a vowel sound to determine the number of syllables in a printed word.
- Decode two-syllable words following basic patterns by breaking the words into syllables.
- Read words with inflectional endings.
- Recognize and read grade-appropriate irregularly spelled words.

Fluency
- RF.1.4. Read with sufficient accuracy and fluency to support comprehension.
- Read grade-level text with purpose and understanding.
- Read grade-level text orally with accuracy, appropriate rate, and expression.
- Use context to confirm or self-correct word recognition and understanding, rereading as necessary.

Grade 1 Math National Common Core Standards Overview

Operations and Algebraic Thinking
• Represent and solve problems involving addition and subtraction.
• Understand and apply properties of operations and the relationship between addition and subtraction.
• Add and subtract within 20.
• Work with addition and subtraction equations.

Number and Operations in Base Ten
• Extend the counting sequence.
• Understand place value.
• Use place value understanding and properties of operations to add and subtract.

Measurement and Data
• Measure lengths indirectly and by iterating length units.
• Tell and write time.
• Represent and interpret data.

Geometry
• Reason with shapes and their attributes.

Standards for Mathematical Practice
1. Make sense of problems and persevere in solving them.
2. Reason abstractly and quantitatively.
3. Construct viable arguments and critique the reasoning of others.
4. Model with mathematics.
5. Use appropriate tools strategically.
6. Attend to precision.
7. Look for and make use of structure.
8. Look for and express regularity in repeated reasoning.

Operations and Algebraic Thinking 1.OA

Represent and solve problems involving addition and subtraction.
1. Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
2. Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.

Understand and apply properties of operations and the relationship between addition and subtraction.
3. Apply properties of operations as strategies to add and subtract. Examples: If 8 + 3 = 11 is known, then 3 + 8 = 11 is also known. (Commutative property of addition.) To add 2 + 6 + 4, the second two numbers can be added to make a ten, so 2 + 6 + 4 = 2 + 10 = 12. (Associative property of addition.)
4. Understand subtraction as an unknown-addend problem. For example, subtract 10 – 8 by finding the number that makes 10 when added to 8.

Add and subtract within 20.
5. Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).
6. Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 – 4 = 13 – 3 – 1 = 10 – 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 – 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).

Work with addition and subtraction equations.
7. Understand the meaning of the equal sign, and determine if equations involving addition and subtraction are true or false. For example, which of the following equations are true and which are false? 6 = 6, 7 = 8 – 1, 5 + 2 = 2 + 5, 4 + 1 = 5 + 2.
8. Determine the unknown whole number in an addition or subtraction equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 + ? = 11, 5 = _ – 3, 6 + 6 = _.

Number and Operations in Base Ten 1.NBT

Extend the counting sequence.
1. Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral.

Understand place value.
2. Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases:
a. 10 can be thought of as a bundle of ten ones — called a "ten."
b. The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones.
c. The numbers 10, 20, 30, 40, 50, 60, 70, 80, 90 refer to one, two, three, four, five, six, seven, eight, or nine tens (and 0 ones).
3. Compare two two-digit numbers based on meanings of the tens and ones digits, recording the results of comparisons with the symbols >, =, and <.

Use place value understanding and properties of operations to add and subtract.
4. Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.
Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
5. Given a two-digit number, mentally find 10 more or 10 less than the number, without having to count; explain the reasoning used.
6. Subtract multiples of 10 in the range 10-90 from multiples of 10 in the range 10-90 (positive or zero differences), using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.

Measurement and Data

Measure lengths indirectly and by iterating length units.
1. Order three objects by length; compare the lengths of two objects indirectly by using a third object.
2. Express the length of an object as a whole number of length units, by laying multiple copies of a shorter object (the length unit) end to end; understand that the length measurement of an object is the number of same-size length units that span it with no gaps or overlaps. Limit to contexts where the object being measured is spanned by a whole number of length units with no gaps or overlaps.

Tell and write time.
3. Tell and write time in hours and half-hours using analog and digital clocks.

Represent and interpret data.
4. Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another.

Geometry

Reason with shapes and their attributes.
1. Distinguish between defining attributes (e.g., triangles are closed and three-sided) versus non-defining attributes (e.g., color, orientation, overall size); build and draw shapes to possess defining attributes.
2. Compose two-dimensional shapes (rectangles, squares, trapezoids, triangles, half-circles, and quarter-circles) or three-dimensional shapes (cubes, right rectangular prisms, right circular cones, and right circular cylinders) to create a composite shape, and compose new shapes from the composite shape.
3. Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares.
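As a small, purely illustrative sketch of the "making ten" strategy named in standard 1.OA.6 above (the function name and printed format are assumptions, not part of the standards):

```python
def add_by_making_ten(a, b):
    """Add two single-digit numbers the 'making ten' way:
    e.g. 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14."""
    to_ten = 10 - a          # how much of b is needed to complete a ten
    leftover = b - to_ten    # what remains of b after making the ten
    print(f"{a} + {b} = {a} + {to_ten} + {leftover} = 10 + {leftover} = {10 + leftover}")
    return 10 + leftover

add_by_making_ten(8, 6)   # prints: 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14
```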
http://www.hicshawaii.org/first.html
A Review of Basic Geometry - Lesson 1

Undefined: Points, Lines, and Planes

In Discrete Geometry, a point is a dot. Lines are composed of an infinite set of dots in a row. Dots may or may not have size and shape, depending on the version studied. Some common applications of discrete geometry include computer displays and printers. The computer screen I am working on at the moment has 80 columns and 25 rows of characters. Each character is composed of dots in an array about 12 wide and 30 high. In total, an array of 1024 by 768 individual pixels is utilized. When printed, a laser printer with 600 dots per inch is being used. At 6 lines per vertical inch and 10 characters per horizontal inch, each character is appropriately spaced in its own array of 60 by 100 pixels. Early laser printers were 300 dpi. This and lower resolution modes are usually available to reduce the volume of data needed for a full page of graphics. A recent development was Resolution Enhancement technology, which allowed laser printers to vary the dot size, thus smoothing the edges of curves. Dot matrix printers are similar, but have bigger dots and print them a few at a time. Also, our TI-84+ calculators utilize a screen of 95 by 63 pixels. These, in turn, are used for 16 characters wide by 8 characters high, so each character has an 8 by 8 grid, but space must be allowed between them. (The first number should be the horizontal quantity and the second number the vertical quantity, just like an (x, y) ordered pair.) Oblique lines will often look like steps due to this discrete nature. Other examples of discrete geometry include some paintings, signs made of individual light bulbs, and marching bands. Lines are either horizontal, vertical, or oblique. Discrete lines go on forever, so only a portion is ever displayed. Discrete lines cross each other either with or without a point in common.

The ancient Greeks idealized points as an exact location, having no size or shape. A line is then the set of points extending in both directions and containing the shortest path between any two points on it. The technical term for shortest path is geodesic. There is then exactly one line containing any two points. The number line is a common example, with each point given a coordinate. Such lines are said to be coordinatized. Number lines are dense like the rationals. This means that between any two points is another point. They are also continuous like the reals: there are no gaps. Once points are coordinatized, distances can be measured.

The Cartesian Coordinate System was invented by Pierre de Fermat and René Descartes about 1630. Cartesius was the name Descartes used for himself in his writings, which were in Latin. Each point in the plane is now a location in the Cartesian plane and is represented by an ordered pair. The first ordinate is usually termed x and the second ordinate y. The coordinate system has an origin where the x-axis and y-axis intersect. A line is now a set of ordered pairs such that Ax + By = C. This standard form, integer constant form, or Ax + By = C form complements the y = mx + b form you already should know. Converting between them should be routine. Related properties such as slope were already studied in numbers lesson 12.

A fourth description of point is of a node or vertex in a network. A line is now an arc connecting either two different nodes or one node to itself. This description is utilized in Graph Theory.
Historically, the field of topology, often called rubber-sheet geometry, was invented by Euler to solve the Königsberg Bridge Problem in 1736. Networks are either traversable or not, depending on the number of odd nodes. A node is odd if the number of arcs to it is odd; otherwise it is even. The network for the Königsberg Bridge Problem had four odd nodes. Since four is more than two, it is not traversable, since you must either start or finish at an odd node. Hence, the residents could not walk over all the bridges without retracing their steps. Networks commonly appear as telephone or other communication networks, power grids, or even highway systems. A favorite of mathematicians is the network of papers published by joint authors. At its center is Paul Erdös. This network has been well studied and has a known "diameter" of 23. Although previously thought to be single digit, when the paper based on my dissertation work is published, I should have a computable approximation for my Erdös number of about 6. Graph theory has other applications such as wire-wrapping old computer circuits, or laying out complex chip designs. The traversability test itself is simple enough to automate, as the sketch below shows.
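A minimal Python sketch of the odd-node test just described (the graph representation is an illustrative assumption):

```python
def is_traversable(edges):
    """A connected network can be traced without repeating an arc
    if and only if it has zero or two odd nodes (Euler, 1736)."""
    degree = {}
    for a, b in edges:                  # each arc counts at both endpoints
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Koenigsberg joining land masses A, B, C, D
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(is_traversable(bridges))  # False: all four nodes are odd
```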
The words points, lines, and planes are undefined, or rather defined by usage, in most geometries. We thus avoid circularity, where definitions circle back to one previously defined. This tradition was only started about 100 years ago by David Hilbert. However, we can form definitions using our undefined terms. A figure is a set of points. Space is the set of all points. Three or more points are collinear if and only if they are on the same line. Four or more points are coplanar if and only if they are in the same plane. When all points in space are collinear, the geometry is one-dimensional. When all points in space are coplanar, the geometry is two-dimensional (2D) or plane geometry. Common figures we will study, such as squares, circles, and triangles, are two-dimensional. Other figures, such as spheres, boxes, cones, and other tangible objects, do not lie in one plane and are three-dimensional or 3D. The study of these is called solid geometry.

What our undefined terms really mean depends on which set of axioms or postulates we choose. Historically, axioms were self-evident truths; hence the word postulate, meaning assumption, is now more commonly used. The postulates we will use correspond with Euclidean Geometry and fit both the synthetic and coordinate geometries introduced above, but not discrete geometry nor graph theory. Euclidean Geometry is so named because it was well established in the set of thirteen books called Elements, written by Euclid about 300 B.C. These books also dealt with other areas of mathematics. It is widely believed that Euclid summarized much of the known mathematics of his time. His geometry starts with five assumptions (requests), the fifth becoming very controversial by the early 1800's. Many editions of Elements exist, including a wonderful color edition from the early 1800's. Note that in Elements, point, line (segment), and straight line (line) are all defined terms. Several of Euclid's common notions are the same as the algebraic properties given in the numbers lessons. Specifically, the following properties are needed: transitive, addition (subtraction) property of equality, and the equality to inequality property.

Euclid's five requests are given below; compare the wording given here for Euclid's fifth postulate with the modern version that follows the list.
- To draw a straight line from any point to any point.
- To produce a finite straight line continuously in a straight line.
- To describe a circle with any center and distance.
- That all right angles are equal to one another.
- That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.

Euclid's Fifth Postulate (modern version): Through a point not on a line, one and only one line can be drawn parallel to that line.

Rejecting Euclid's Fifth Postulate leads one to Non-euclidean Geometries. A substantial portion of standard geometry can be developed without it and is termed Neutral Geometry. Adopting variations of Euclid's Fifth Postulate leads to several types of geometries involving positively or negatively curved surfaces. A plane or cylinder has zero curvature. A sphere has positive curvature. A saddle has negative curvature. On a sphere, no parallel line can be drawn through a point outside a line. On a saddle, more than one such parallel line can be drawn. The geometry of a saddle-shaped surface is known as hyperbolic geometry (from the Greek to exceed). The geometry of a sphere required additional changes to the usual axioms because betweenness is no longer meaningful and must be replaced with separation. This is known as Elliptic Geometry. The last geometry we will discuss is Riemannian Geometry. Its full development requires calculus, which is beyond the scope of these lessons. This geometry was popularized by Albert Einstein when he developed his theory of General Relativity with the notion that space is curved by the presence of mass.

Euclid is known as the father of geometry. When Ptolemy asked if there was an easier way to learn geometry, Euclid replied: "There is no royal road to Geometry." There are other geometries between incidence geometry and the coordinatized Euclidean version with the least upper bound property. Just ask the author sometime and he might show you the book he is typing.... All in all, it takes hundreds of pages to cover the ground covered by the Point-Line-Plane Postulate given below! One geometry not covered there (yet) is projective geometry, which has an important dualism between points and lines. Compare the following: 2 points determine a line; and 2 lines determine a point.

We will start with three assumptions known collectively as the Point-Line-Plane Postulate.
- Unique Line Assumption: Through any two points is exactly one line.
- Number Line Assumption: Every line is a set of points which can be put into a one-to-one correspondence with the real numbers. Any point can correspond with 0 (zero) and any other point can correspond with 1 (one).
- Dimension Assumption: Given a line in a plane, there exists at least one point in the plane that is not on the line. Given a plane in space, there exists at least one point in space that is not in the plane.

The first assumption is sometimes stated simply as: two points determine a line. It should be clear that the Unique Line Assumption does not apply to lines in discrete geometry (parts of different lines can be near each other) or graph theory (more than one arc can connect two nodes). The Number Line Assumption also does not apply to lines in graph theory, since it guarantees infinitely many points. These postulates herd us quickly down the road toward the development of Euclidean geometry. Many interesting geometries could be investigated if we started with much simpler postulates. The Number Line Assumption in particular immediately gives us measurement, distance, and betweenness. We can now prove our first theorem by using the Unique Line Assumption.
(See the book for details.)

Theorem: Two different lines intersect in at most one point.

Two coplanar lines m and n are parallel, written m || n, if and only if they have no points in common (or they are identical). Note: this definition of parallel is typical for our textbook but is often at odds with what students are taught in middle school. It is an example of an inclusive definition. Along a similar vein, a quadrilateral which happens to be square is still a rectangle. An equilateral triangle is still isosceles. A square is still a trapezoid. Teresa Heinz-Kerry is still a Heinz. Although standardized tests and contests tend to avoid these ambiguities, one must be on the lookout for such problems!

Warning: our geometry textbook only motivates betweenness using numbers on a number line. It neither defines it nor adopts an axiom to develop it. Below are four typical axioms of betweenness. A*B*C means point B is between point A and point C.
- If A*B*C, then A, B, and C are three distinct points all lying on the same line, and C*B*A.
- Given any two distinct points A and B, there exist points C, D, and E lying on line AB such that C*A*B, A*D*B, and A*B*E.
- If A, B, and C are three distinct points lying on the same line, then one and only one of the points is between the other two.
- If Q and R are on opposite sides of line L, then line QR intersects L.

The segment (or line segment) with endpoints A and B, denoted AB with an overbar, is the set consisting of the distinct points A and B and all points between A and B. The ray with endpoint A and containing a second point B consists of the points on segment AB and all points for which B is between each of them and A. Rays AB and AC are opposite rays if and only if A is between B and C. Please note that our segments and rays are closed: they include their endpoints. In other geometries, segments and rays could be open (or half open) and exclude their endpoint(s). Ultimately, another axiom, the least upper bound axiom, is needed to deal with this.

Three more important assumptions are known collectively as the Distance Postulate.
- Uniqueness Property: On a line, there is a unique distance between two points.
- Distance Formula: If two points on a line have coordinates x and y, the distance between them is |x - y|.
- Additive Property: If C is on segment AB, then AC + CB = AB.

Distances are positive values, and the symbol for absolute value (|-10| = 10) is utilized above. The distance between two points A and B is written AB. You cannot multiply points, so AB always represents the distance between them, and never their product. Because distance is always positive, AB = BA. The term directed distance is sometimes used to convey not only magnitude but also direction. It is thus a vector instead of a scalar quantity. Please note carefully the differences among the distance AB, the segment AB, and the line AB. These postulates can be checked directly on a coordinatized line, as the sketch below shows.
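A small illustrative Python sketch of the Distance Formula and the Additive Property on a coordinatized line (function names are assumptions):

```python
def distance(x, y):
    # Distance Formula: the distance between coordinates x and y is |x - y|
    return abs(x - y)

def is_between(a, c, b):
    # Additive Property: C is on segment AB exactly when AC + CB = AB
    return distance(a, c) + distance(c, b) == distance(a, b)

print(distance(-10, 3))      # 13
print(is_between(0, 2, 5))   # True: 2 + 3 = 5
print(is_between(0, 7, 5))   # False: 7 + 2 != 5
```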
Although mathematicians don't often draw in perspective, the concept and terminology are important. A perspective drawing gives a two-dimensional object a feeling of depth. Parallel lines now meet in the distance at a vanishing point. Often one thinks of the artist's or observer's eye as this vanishing point and sketches lines of sight to connect them. Objects can be drawn in one-, two-, or three-point perspective, depending on how many vanishing points are used. Parallel horizontal and vertical lines go to their own vanishing points, depending on their relationship to each other. Multiple vanishing points should line up on the vanishing line, which corresponds with the horizon line at the height of the observer's eye. Mathematicians typically draw non-perspective drawings, utilizing dashed or dotted hidden lines to indicate parts not normally seen. Compare the two pictures shown above left and below. Find the vanishing points for the
http://www.andrews.edu/~calkins/math/webtexts/geom01.htm
Timeline of Southern History

Spanish found St. Augustine, FL - first permanent white settlement in what is now the United States. (1565)
Jamestown Settlement (1607) - The first permanent settlement by the English in the new world, it was located in present-day Virginia.
The first Africans arrive in Virginia. They appear to have been indentured servants, but the institution of hereditary lifetime service for blacks develops over time. (1619)
The Maryland Toleration Act (1649) - This act formally allowed for any and all Christian faiths to be fully tolerated in Maryland.
The practice of slavery becomes a legally recognized institution in British America. Colonial assemblies begin to enact laws known as slave codes, which restrict the liberty of slaves and protect the institution of slavery. (1660's)
Charlestown, South Carolina founded. (1670)
First Spanish settlement in Texas. (1682)
La Salle claims the Mississippi and the land it drains for France. (1682)
Ft. St. Louis is established in Texas on Matagorda Bay by La Salle. (1685)
French establish settlement in Arkansas. (1686)
The first church in Texas, San Francisco de Tejas, is organized. (1690)
The fort (San Antonio de Bexar, aka the Alamo) and the settlement that will become San Antonio are established. (1718)
James Oglethorpe founds Savannah, Georgia. (1733)
First permanent settlement in Tennessee. (1769)
First permanent settlement in Kentucky. (1774)
Mecklenburg Declaration (1775)
Patriots in Charleston, South Carolina remove powder from the public magazines. (April 21, 1775)
Patriots in Savannah, Georgia remove powder from the royal magazines. (May 11, 1775)
Josiah Martin, Governor of North Carolina, boards the British sloop Cruzier. (July 18, 1775)
Patriots defeat a small Loyalist force at Reedy Creek, South Carolina. (November 22, 1775)
Virginia and North Carolina Patriots rout Loyalist troops and burn Norfolk. (December 11, 1775)
Col. Thomson with 1500 Rangers and militia capture a force of Loyalists in Great Canebreak, South Carolina. (December 22, 1775)
Sir James Wright, Royal Governor of Georgia, boards a British warship. (February 11, 1776)
Continental Congress establishes the Southern Department of the Continental Army, consisting of Virginia, North and South Carolina and Georgia. (February 27, 1776)
Sir James Wright, Royal Governor of Georgia, fails to recapture Savannah, Georgia. (March 7, 1776)
Battle of Fort Sullivan (June 28, 1776)
Declaration of Independence (1776)
Maj. General Moultrie defeats British detachment at Port Royal Island, South Carolina. (February 3, 1779)
Battle of Kettle Creek, Georgia. Andrew Pickens and Elijah Clarke defeat North Carolina Tories. (February 14, 1779)
Battle of Monck's Corner, South Carolina (April 14, 1780)
Patriot militia defeat Tories at Ramsour's Mill. (June 20, 1780)
Thomas Sumter defeats Tories at Williamson's Plantation. (July 12, 1780)
Georgia Patriots attack Loyalist camp and defeat them at Gowen's Old Fort, South Carolina. (July 13, 1780)
Thomas Sumter leads unsuccessful attack at Rocky Mount, South Carolina. (August 1, 1780)
Battle of Hanging Rock (August 6, 1780)
Thomas Sumter captures the Wateree Ferry. (August 15, 1780)
Francis Marion rescues Patriot prisoners at Nelson's Ferry, South Carolina. (August 20, 1780)
Battle of King's Mountain (October 7, 1780)
Continental cavalry defeats Tories at Hammond's Store, South Carolina. (December 28, 1780)
Battle of Cowpens (January 17, 1781)
Lt. Col. Henry Lee and Francis Marion raid Georgetown, South Carolina. (January 24, 1781)
Pyle's Hacking Match; Haw River, North Carolina. Continental Lt. Colonel Henry Lee surprises and massacres Tory militia. (February 25, 1781)
Articles of Confederation (1781)
Battle of Guilford Courthouse (March 15, 1781)
Lt. Colonel Francis Lord Rawdon abandons Camden, South Carolina. (May 10, 1781)
British outpost at Orangeburg, South Carolina surrenders to Thomas Sumter. (May 11, 1781)
Lt. Colonel Henry Lee and Francis Marion capture Fort Motte, South Carolina. (May 12, 1781)
Lt. Colonel Henry Lee captures Fort Granby, South Carolina. (May 15, 1781)
Maj. General Nathanael Greene lays siege to Ninety-Six, South Carolina. (May 22-June 19, 1781)
Patriots capture British garrison at Monck's Corner, South Carolina. (July 17, 1781)
The French maintained control of Chesapeake Bay at the Second Battle of the Capes. (September 5, 1781)
Battle of Yorktown (October 19, 1781)
Maj. General Nathanael Greene captures garrison at Dorchester, South Carolina. (December 1, 1781)
Formal peace negotiations begin in Paris, France. (September 27, 1782)
Treaty of Paris (1783)
The first American golf course was built in Charleston, South Carolina and the South Carolina Golf Club was formed. (1786)
Delaware admitted to the American union. (1787)
Georgia admitted to the American union. (1788)
Maryland admitted to the American union. (1788)
South Carolina admitted to the American union. (1788)
Virginia admitted to the American union. (1788)
North Carolina admitted to the American union. (1789)
The Bank of the United States created, enacting second element of Hamilton's financial plan. Launches constitutional debate between Jefferson and Hamilton. (1791)
Bill of Rights ratified by the member States of the union. (1791)
Kentucky admitted to the American union. (1792)
Eli Whitney invents the cotton gin. (1793)
Tennessee admitted to the American union. (1796)
Mint Julep invented in Virginia. (1797)
Kentucky - Virginia Resolutions (1798 & 1799)
Louisiana Purchase (1803)
Louisiana admitted to the American union. (1812)
Second Bank of the United States established. (1816)
Mississippi admitted to the American union. (1817)
Spain cedes remainder of Florida to United States. (1819)
Alabama admitted to the American union. (1819)
Missouri Compromise (1820)
Missouri admitted to the American union. (1821)
Texas becomes part of new nation of Mexico; Stephen F. Austin founds Anglo-American colony in Texas. (1821)
Monroe Doctrine (1823)
"The Tariff of Abominations," raising the protective Tariff of 1824, passes through Congress and is signed by President Adams.
"South Carolina Exposition and Protest" issued by S.C. state legislature - written anonymously by John C. Calhoun, the essay declares the Tariff of 1828 unconstitutional, and advocates state sovereignty and the doctrine of nullification. (1828)
Mexico passes anti-colonization law to prevent Americans from further colonizing Texas. (1830)
Texas declares independence from Mexico; Battle of the Alamo. (1836)
Arkansas admitted to the American union. (1836)
Trail of Tears forces 13,000 Cherokee west of the Mississippi. (1838-39)
Nashville Convention (1850)
Dred Scott Decision (1857)
The Crittenden Compromise (December 1860)
Freedman's Bureau established. In addition to providing medical, education and relocation services, the Bureau begins the redistribution of small plots of land to blacks. (1865)
President Johnson implements his own Reconstruction Plan. (Summer 1865)
Congress refuses to recognize the state governments reconstructed under Johnson's plan. (December 1865)
The 13th Amendment, which abolishes slavery, becomes law. (December 1865)
Congress passes the Southern Homestead Act, opening public lands in Alabama, Mississippi, Louisiana, Arkansas and Florida to all settlers regardless of race. (1866)
Tennessee is re-admitted to the American union. (1866)
The Ku Klux Klan is formed. (1866)
The National Farmers' Alliance is formed. The farmers' plight has taken on catastrophic proportions in the face of high tariffs, flood and drought, unfair railroad rates and high interest on loans and mortgages. (1880)
Tuskegee Normal and Industrial Institute is founded by Booker T. Washington. At Tuskegee, Booker T. Washington advocates an education limited to vocational skills, and from this base, Washington rises to national prominence. (1881)
U.S. Supreme Court overturns the Civil Rights Act of 1875. (1883)
Responding to public pressure, land in Oklahoma formally ceded to the Indians is opened to white settlers by government decree. (1889)
The Southern Farmers Alliance, the Farmers' Mutual Benefit Association, and the Colored Farmers' Alliance meet in Ocala, Florida, to see if there is some way to take joint action on their respective grievances. Nothing comes of the meeting. (1890)
Supreme Court rules in Plessy v. Ferguson that 'separate but equal' facilities are constitutional. (1896)
Orville and Wilbur Wright make the first four successful flights of a heavier-than-air machine in Kitty Hawk, North Carolina. (1903)
Oklahoma is admitted to the American union. (1907)
Woodrow Wilson is elected president. (1912)
Precipitated in part by farm labor wages falling to $.75 a day and the boll weevil devastation of cotton crops, Southern blacks begin migrating to northern and western cities when war industries seek their services. By 1930, almost one million blacks leave the South in what becomes known as the Great Migration. (1914)
D.W. Griffith films Birth of a Nation. (1915)
Scopes Monkey Trial takes place in Dayton, Tennessee. (1925)
Nashville country radio program becomes "The Grand Ole Opry." (1926)
Second major migration of blacks from the South seeking opportunities in northern cities during war years. (1940 - 1945)
Southern Democrats break the New Deal coalition, bolting the Democratic Party and forming the State's Rights Democratic Party (the 'Dixiecrats'). (1948)
Civil Rights Era (1954 - 1972)
The U.S. Supreme Court rules on the landmark case Brown v. Board of Education of Topeka, Kansas, unanimously agreeing that segregation in public schools is unconstitutional. While the case is heralded as a victory for minorities, it is condemned as an outright attack by the federal government on the sovereignty of the states. (May 1954)
Rosa Parks refuses to give up her seat on a city bus in Montgomery, Alabama. This starts a boycott of the city buses that lasts for a year until the buses are desegregated. (December 1955)
Federal troops are sent to desegregate Arkansas schools. (1957)
Four black students from North Carolina Agricultural and Technical College begin a sit-in at a segregated Woolworth's lunch counter. Although they are refused service, they are allowed to stay at the counter. The event triggers many similar nonviolent protests throughout the South. (February 1960)
About 250,000 people join the March on Washington. Congregating at the Lincoln Memorial, participants listen as Reverend King delivers his famous "I Have a Dream" speech. (August 1963)
Four young girls attending Sunday school are killed when a bomb explodes at the Sixteenth Street Baptist Church, a popular location for civil rights meetings. Riots erupt in Birmingham, leading to the deaths of two more black youths. (September 1963)
President Johnson signs the Civil Rights Act of 1964, making segregation in public facilities and discrimination in employment illegal. (July 1964)
Malcolm X, black nationalist and founder of the Organization of Afro-American Unity, is shot to death in Harlem. It is believed the assailants are members of the Black Muslim faith, which Malcolm had recently abandoned. (February 1965)
Reverend King, at age 39, is shot as he stands on the balcony outside his hotel room in Memphis, TN. Although escaped convict James Earl Ray later pleads guilty to the crime, questions about the actual circumstances of King's assassination remain to this day. (April 1968)
The Supreme Court, in Swann v. Charlotte-Mecklenburg Board of Education, upholds busing as a legitimate means for achieving integration of public schools. Seen as another undermining of state's rights, busing is carried out under court orders and continues until the late 1990s. (April 1971)

Jamestown Settlement

The first permanent English settlement in North America, with all its tragedies and disasters, was established in 1607 in Jamestown, Virginia. Roughly 400 years ago, on December 20, 1606, three merchant ships loaded with passengers and cargo embarked from England on a voyage that would later set the course of American history. The Susan Constant, Godspeed and Discovery reached Virginia in the spring of 1607, and on May 14, their 104 passengers, all men and boys, began building on the banks of the James River what was to be America's first permanent English colony, predating Plymouth in Massachusetts by 13 years. The ambitions of these pioneers and the hardships they faced are vividly depicted at Jamestown Settlement, a museum operated by the Commonwealth of Virginia, through living history, a film and gallery exhibits. Jamestown Settlement is located about a mile from the original site and 10 minutes from the Historic Area of Colonial Williamsburg, Jamestown's successor as capital of the Virginia colony.

The Maryland Toleration Act

This was an act concerning religion.
The first two paragraphs read as follows: "Forasmuch as in a well governed and Christian Common Wealth matters concerning Religion and the honor of God ought in the first place to be taken, into serious consideration and endeavored to be settled, Be it therefore ordered and enacted by the Right Honorable Cecilius Lord Baron of Baltemore absolute Lord and Proprietary of this Province with the advise and consent of this General Assembly: That whatsoever person or persons within this Province and the Islands thereunto belonging shall from henceforth blaspheme God, that is Curse him, or deny our Savior Jesus Christ to be the son of God, or shall deny the holy Trinity the father son and holy Ghost, or the Godhead of any of the said Three persons of the Trinity or the Unity of the Godhead, or shall use or utter any reproachful Speeches, words or language concerning the said Holy Trinity, or any of the said three persons thereof, shall be punished with death and confiscation or forfeiture of all his or her lands and goods to the Lord Proprietary and his heirs."

The act goes on to declare in an effort to "better preserve mutual love and amity amongst the inhabitants" that "no person whatsoever within this province…professing to believe in Jesus Christ, shall from henceforth be any ways troubled, molested or discountenanced for or in respect of his or her religion nor in the free exercise thereof within this province…nor any way compelled to the beliefs or exercise of any other religion against his or her consent."

The Mason-Dixon Line, the boundary between Pennsylvania and Maryland (running between lat. 39°43′26.3″N and lat. 39°43′17.6″N), was surveyed by the English astronomers Charles Mason and Jeremiah Dixon between 1763 and 1767. The ambiguous description of the boundaries in the Maryland and Pennsylvania charters led to a protracted disagreement between the proprietors of the two colonies; the dispute was submitted to the English court of chancery in 1735. A compromise between the Penn and Calvert families in 1760 resulted in the appointment of Mason and Dixon. By 1767 the surveyors had run their line 244 mi (393 km) W from the Delaware border, every fifth milestone bearing the Penn and Calvert arms. The survey was completed to the western limit of Maryland in 1773; in 1779 the line was extended to mark the southern boundary of Pennsylvania with Virginia (the present-day West Virginia). Before the Civil War the term "Mason-Dixon Line" popularly designated the boundary dividing the slave states from the free states, and it is still used to distinguish the South from the North.

There are those who say Thomas Polk, then commander of the Mecklenburg County, North Carolina militia, called a meeting in 1775 at the courthouse he had built. In the middle of that meeting, a courier rode into town and announced shocking news: British troops had fired on Americans at Lexington, Mass. The American Revolution had begun. Mecklenburgers were furious. All that day and into the next, they drew up the Mecklenburg Declaration of Independence, declaring their freedom from Britain. "We the Citizens of Mecklenburg County do hereby desolve the political bands which have connected us to the Mother Country & hereby absolve ourselves from all allegiance to the British crown," they wrote. The handwritten original of the Mecklenburg Declaration is said to have burned in a fire at the home of John McKnitt Alexander, secretary to the drafting committee.
The document was reconstructed from Alexander's notes, but was not published until decades later. For that reason, some historians question its authenticity. So did Thomas Jefferson, whose national Declaration of Independence was adopted more than a year after the Mecklenburg Declaration.

| Battle of Moore's Creek Bridge (1776) |
In early 1776, Maj. General William Howe ordered Maj. General Henry Clinton to sail south as part of a campaign to capture the port city of Charleston and gather the support of Southern Tories. As part of the plan, Tories were to join General Clinton at Cape Fear, North Carolina. On February 20, 1776, 1,600 Scottish Highlanders set out for Cape Fear. On February 26, they learned that 1,000 Rebels were waiting with two cannon at Moore's Creek Bridge.

| Battle of Fort Sullivan (1776) |
Following their defeat at Bunker Hill, the British now knew they had a real fight on their hands. They looked to a quick campaign in the Southern colonies where they expected resistance to be weakest and support to be strongest. They believed it would be a simple matter to capture the Southern port cities of Savannah, Georgia and Charleston, South Carolina. This would eliminate the Rebels there, swell the army's ranks with Tory volunteers and leave only Virginia and New England to be subjugated.

| Battle of King's Mountain (1780) |
Following the defeats of Maj. General Benjamin Lincoln at Charleston in May and then Maj. General Horatio Gates at Camden, British Lt. General Charles Cornwallis appeared to now have a clear path all the way to Virginia. In September, General Cornwallis invaded North Carolina and ordered Major Patrick Ferguson to guard his left flank. Ferguson provoked the Mountain Men living in the area by sending out a threat.

| Battle of Cowpens (1781) |
New Continental Southern Commander Maj. General Nathanael Greene determined that he needed time to rehabilitate his army. He decided to split his force and assigned command of the more mobile force to Brig. General Daniel Morgan. British Lt. General Charles Cornwallis recognized the strategy and sent his own mobile force under Lt. Colonel Banastre Tarleton after Morgan.

Before the Constitution, there were the Articles of Confederation -- in effect, the first constitution of the United States. Drafted in 1777 by the same Continental Congress that passed the Declaration of Independence, the articles established a "firm league of friendship" between and among the 13 states. Created during the throes of the Revolutionary War, the Articles reflect the states' wariness of a strong central government. Afraid that their individual needs would be ignored by a national government with too much power, and wary of the abuses that often result from such power, the states purposely established a "constitution" that vested the largest share of power in the individual states. Under the Articles each of the states retained its "sovereignty, freedom and independence." Instead of setting up executive and judicial branches of government, there was a committee of delegates composed of representatives from each state. These individuals comprised the Congress, a national legislature called for by the Articles. The Congress was responsible for conducting foreign affairs, declaring war or peace, maintaining an army and navy and a variety of other lesser functions. But the Articles denied Congress the power to collect taxes, regulate interstate commerce and enforce laws. Eventually, these shortcomings would lead to the adoption of the U.S.
Constitution. But during those years in which the 13 states were struggling to achieve their independent status, the Articles of Confederation stood them in good stead. Adopted by Congress on November 15, 1777, the Articles became operative on March 1, 1781, when the last of the 13 states signed on to the document.

| Battle of Guilford Courthouse (1781) |
Following Brig. General Daniel Morgan's victory over Lt. Colonel Banastre Tarleton at the Battle of Cowpens on January 17, 1781, both Morgan and Maj. General Nathanael Greene retreated to Virginia, while Lt. General Charles Cornwallis vainly chased them. In March, Greene returned to North Carolina and began maneuvering against Cornwallis. He finally chose to stand at Guilford Courthouse.

| Battle of Yorktown (1781) |
In May 1781, French Admiral de Barras arrived in Rhode Island to take command of the blockaded fleet there and brought word that Admiral de Grasse would be bringing the long-awaited French fleet later in the year. General George Washington met with French Lt. General Rochambeau to plan operations up to and after Admiral de Grasse arrived. They decided to operate around New York City where Lt. General Henry Clinton was located, although Washington feared that Maj. General Nathanael Greene could not keep Lt. General Charles Cornwallis occupied in the Carolinas and that Cornwallis would soon move into Virginia in an effort to link up with Clinton.

The Treaty of Paris of 1783 formally ended the American Revolution. Great Britain acknowledged the independence of the American colonies, recognizing them as 13 independent and sovereign states.

The U.S. Constitution is the document that embodies the fundamental principles upon which the American republic is conducted. Drawn up at the Constitutional Convention in Philadelphia in 1787, the Constitution was signed on Sept. 17, 1787, and ratified by the required number of states (nine) by June 21, 1788. It established the system of federal government that began to function in 1789. There are seven articles and a Preamble; 27 amendments have been adopted. From its very beginnings, the Constitution has been subject to stormy controversies, not only in interpretation of some of its phrases, but also between the "loose constructionists" and the "strict constructionists". The middle of the 19th century saw a tremendous struggle concerning the nature of the Union and the extent of states' rights.

The Kentucky and Virginia Resolutions were resolutions passed in 1798 and 1799 by the Kentucky and Virginia legislatures in opposition to the Alien and Sedition Acts. The Kentucky Resolutions, written by Thomas Jefferson, from Virginia, stated that the federal government had no right to exercise powers not delegated to it by the Constitution. A further resolution declared that the states could nullify objectionable federal laws (this was known as Nullification). The Virginia Resolution, written by James Madison, from Virginia, was milder. Both were later considered the first notable statements of the States' Rights doctrine.

In 1803, President Thomas Jefferson of Virginia, on behalf of the United States, signed a treaty with France to acquire the Louisiana territory. This land purchase effectively doubled the size of the United States! This was a precursor to the great westward migration that would come half a century later during the period described as America's Manifest Destiny.

The Missouri Compromise was a plan agreed upon by the United States Congress in 1820 to settle the debate over slavery in the Louisiana Purchase area.
The plan temporarily maintained the balance between free and slave states. In 1818, the Territory of Missouri, which was part of the Louisiana Purchase, applied for admission to the Union. Slavery was legal in the Territory of Missouri, and about 10,000 slaves lived there. Most people expected Missouri to become a slave state. When the bill to admit Missouri to the Union was introduced, there were an equal number of free and slave states. Six of the original 13 states and five new states permitted slavery, while seven of the original states and four new states did not. This meant that the free states and the slave states each had 22 senators in the United States Senate. The admission of Missouri threatened to destroy this balance.

This balance had been temporarily upset a number of times, but it had always been easy to decide whether states east of the Mississippi River should be slave or free. Mason and Dixon's Line and the Ohio River formed a natural and well-understood boundary between the two sections. No such line had been drawn west of the Mississippi River. In addition, some parts of Missouri Territory lay to the north of the mouth of the Ohio River, while other parts of it lay to the south.

A heated debate broke out in Congress when Representative James Tallmadge of New York introduced an amendment to the bill enabling Missouri to become a state. Tallmadge proposed to prohibit the bringing of any more slaves into Missouri, and to grant freedom to the children of slaves born within the state after its admission. This proposal disturbed Southerners, who found cotton growing by means of slave labor increasingly profitable, and feared national legislation against slavery. Because the free states dominated the House of Representatives, the slave states felt they must keep the even balance in the Senate. The Tallmadge Amendment passed the House, but the Senate defeated it.

During the next session of Congress, Maine applied for admission to the Union. Missouri and Maine could then be accepted without upsetting the Senate's balance between free and slave states, and the Missouri Compromise became possible. The compromise admitted Maine as a free state and authorized Missouri to form a state constitution. A territory had to have an established constitution before it could become a state. The compromise also banned slavery from the Louisiana Purchase north of the southern boundary of Missouri, the line of 36 degrees 30 minutes north latitude, except in the state of Missouri.

The people of Missouri believed they had the right to decide about slavery in their state. They wrote a constitution that allowed slavery and that restricted free blacks from entering the state. Before Congress would admit Missouri, a second Missouri Compromise was necessary. Henry Clay, the Speaker of the House, helped work out this agreement. It required the Missouri legislature not to deny black citizens their constitutional rights. With this understanding, Missouri was admitted to the Union in 1821.

The Monroe Doctrine was a principle of American foreign policy enunciated in President James Monroe's (from Virginia) message to Congress, Dec. 2, 1823. It initially called for an end to European intervention in the Americas, but it was later extended to justify U.S. imperialism in the Western Hemisphere.

The tariff bill of 1832 disappointed the pro-tariff Henry Clay, but it also disappointed the anti-tariff Nullifiers.
They had hoped that, with their proclamation of the principle of Nullification, the Vice President being the author of the principle, and Jackson's partial tendencies towards states' rights, Jackson and the Congress would go a long way in their direction. On October 22, 1832, the South Carolina legislature called a convention for November 19 to decide whether the state would, according to Calhoun's formula, nullify the new tariff. The convention did declare the law null in South Carolina, by a vote of 136 to 26.

On December 10, 1832, Jackson published a proclamation ending in a strong plea and threat, which was mostly pure Jackson: "Those who told you that you might peaceably prevent [the execution of the laws] deceived you; they could not have been deceived themselves... Their object is disunion. But be not deceived by names. Are you really ready to incur its guilt? If you are, on the heads of the instigators of the act be the dreadful consequences; on their heads be the dishonor, but on yours may fall the punishment."

Most of the nation responded to this with wild enthusiasm. Jackson claimed he could have 100,000 men on the side of the Union in a matter of weeks. Still, the South Carolina legislature authorized its Governor to call a draft, and appropriated $200,000 for arms. Jackson's actual military moves were on a fairly large scale, but careful, and calculated to avoid confrontation while negotiations went on.

Meanwhile a battle went on in Congress. Jackson was skillfully wielding threats and promises. On January 8, the administration submitted a bill, known as the Verplanck bill after one of Van Buren's allies, which cut the tariff in half over two years. The Verplanck Bill was rejected by Nullifiers and Clay's pro-tariff men. Then came a move to save Calhoun's face and take credit away from Jackson. Clay stood up to propose a "Compromise bill", and was seconded by Calhoun. The bill was, in fact, much less of a tariff reduction (at least until nearly 10 years out) than the administration bill. Clay got a friend in the house to deftly swap his bill for the Verplanck bill and it was quickly passed, taking the administration by surprise. The Senate then passed this bill with the nullifiers perversely lending their support. In South Carolina, with such face saving as the revised tariff gave them, the legislature rescinded the nullification proclamation against the tariff.

The Nashville Convention was a two-session meeting of proslavery Southerners in the United States. John C. Calhoun initiated the drive for a meeting when he urged Mississippi to call for a convention. The resulting Mississippi Convention on Oct. 1, 1849, issued a call to all slave-holding states to send delegates to Nashville, Tennessee, in order to form a united front against what was viewed as Northern aggression. Delegates from nine Southern states met in Nashville on June 3, 1850. Robert Barnwell Rhett, a leader of the extremists, sought support for secession, but moderates from both the Whig and the Democratic parties were in control. The convention ultimately (June 10) adopted 28 resolutions defending slavery and the right of all Americans to migrate to the Western territories. The delegates were ready to settle the question of slavery in the territories, however, by extending the Missouri Compromise line west to the Pacific. In September the U.S. Congress enacted the Compromise of 1850, and six weeks later (November 11-18) the Nashville Convention reconvened for a second session.
This time, however, there were far fewer delegates, and the extremists were in control. Although they rejected the Compromise of 1850 and called upon the South to secede, most Southerners were relieved to have the sectional strife seemingly resolved, and the second session of the Nashville Convention had little impact.

Dred Scott was the name of a black man who was a slave. He was taken by his master, an officer in the U.S. Army, from the slave state of Missouri to the free state of Illinois and then to the free territory of Wisconsin. When the Army ordered his master to go back to Missouri, he took Scott with him back to that slave state, where his master died. In 1846, Scott was helped by Abolitionist (anti-slavery) lawyers to sue for his freedom in court, claiming he should be free since he had lived on free soil for a long time. The case went all the way to the United States Supreme Court. In March of 1857, Scott lost the decision as seven out of nine Justices on the Supreme Court declared no slave or descendant of a slave could be a U.S. citizen, or ever had been a U.S. citizen. As a non-citizen, the court stated, Scott had no rights and could not sue in a Federal Court and must remain a slave. At that time there were nearly 4 million slaves in America. The court's ruling affected the status of every enslaved and free black person in the United States. The ruling served to turn back the clock concerning the rights of blacks, ignoring the fact that black men in five of the original States had been full voting citizens dating back to the Declaration of Independence in 1776. The Supreme Court also ruled that Congress could not stop slavery in the newly emerging territories and declared the Missouri Compromise of 1820 to be unconstitutional. The Missouri Compromise prohibited slavery north of the parallel 36°30′ in the Louisiana Purchase. The Court declared it violated the Fifth Amendment of the Constitution, which prohibits Congress from depriving persons of their property without due process of law.

| The Crittenden Compromise (1860) |
The Crittenden Compromise, which Abraham Lincoln opposed, was perhaps the last-ditch effort to resolve the secession crisis of 1860-61 by political negotiation. Authored by Kentucky Senator John Crittenden (whose two sons would become generals on opposite sides of the War for Southern Independence), the Compromise only addressed the issue of slavery, ignoring the greater concerns of the Southern States. The Compromise proposed to extend the right to hold slaves across the American continent south of latitude 36 degrees 30 minutes. In addition, the Compromise proposed a constitutional amendment that would enshrine slavery in the law and bar Congress from ever abolishing it. The Compromise further declared that the fugitive slave laws were to be strictly enforced, and any state laws conflicting with these laws were to be declared null and void.

| The South Secedes (1861) |
Abraham Lincoln was a known advocate for Henry Clay's "American System." This "system" advocated the supremacy of the federal government over the states, in direct contradiction of the founders' expressed intentions in their writings. Clay's "system" also included advocating protective tariffs, and Lincoln strongly supported high tariffs. As the South incurred 80% of the cost of tariffs, and the Northern states reaped all the benefits, Southerners saw no benefit in voting for Lincoln.
Therefore, no Southern state provided any electoral votes for Lincoln in the presidential election of 1860. When Lincoln was elected president, the South Carolina legislature perceived a threat. Calling a state convention, the delegates voted to remove the state of South Carolina from the union known as the United States of America. The secession of South Carolina was followed by the secession of all the gulf states -- Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas.

| The South Creates a Government (February 1861) |
At a convention in Montgomery, Alabama, the seven seceding states created the Confederate Constitution, a document similar to the United States Constitution, but with greater clarification of the autonomy of each state. Jefferson Davis was named provisional president of the Confederacy until elections could be held.

| Confederate Troops Occupy Federal Forts (1861) |
As President Buchanan -- Lincoln's predecessor -- believed member States of the American union had a right to secede, he allowed southern state troops to seize federal forts in Confederate territory. At Fort Sumter, South Carolina troops denied access to a supply ship trying to reach federal forces based in the fort. The ship was forced to return to New York, its supplies undelivered.

| Lincoln's Inauguration (1861) |
At Lincoln's inauguration on March 4, the new president said he had no plans to end slavery in those states where it already existed, but he also said he would not accept secession.

| Attack on Fort Sumter (1861) |
While President Lincoln stalled a delegation from the Confederacy, who wished to talk about settling any part of the federal government's debt the South might owe, as well as reaching a peaceful settlement to the South's separation from the rest of the union, Lincoln was quietly sending supplies to Fort Sumter, in violation of a promise he made to the South that he would not attempt to do so (soldiers at the fort were allowed to freely enter the city of Charleston and buy what they needed). South Carolina, seeing that they had been tricked, fired on the fort to prevent the supply ship from accessing the fort. Fort Sumter eventually was surrendered to South Carolina, and the union troops in the fort were allowed to return to the United States.

| Four More States Join the Confederacy (1861) |
Lincoln sends out orders to the remaining states in the union to provide troops to invade the Confederacy. This is considered by Virginia, Tennessee, Arkansas and North Carolina to be a violation of the Constitution. Realizing that Lincoln cares less about the Constitution and the law, and more about forcing all the member states of the union to remain in the union, even if against their will, these states vote to leave the union and join the Confederacy. With Virginia's secession, Richmond was named the Confederate capital.

| Four Southern States make a Decision About Staying in the Union (1861) |
Delaware - Political pressures ensured that this Southern state chose to remain within the union.
Kentucky - Initially this state voted to stay neutral in the war, and thereby remain in the union. Later on, a questionable state convention was held that voted for secession. The Confederacy recognized the convention's desire to join the CSA, but the USA did not.
Maryland - When this state prepared to vote on secession, Lincoln sent in troops to arrest many of the state legislators (especially those who were known to be sympathetic with the South), deny citizens the right to vote, and replace the arrested legislators with union officers. In this way, Lincoln ensured the vote for secession went down to defeat. Maryland grudgingly stayed in the union, under the watchful eye of Lincoln's troops.
Missouri - In the beginning, the state's legitimately elected legislature voted to secede, and the South accepted the state into their new Confederation. Union troops invaded the state, and the legitimate state government went into exile. The north set up a "rump government" which immediately took another vote on secession and voted to stay in the union. The USA recognized this rump government's vote, not the vote of the legitimate government.

* As the Confederacy recognized Kentucky and Missouri as being part of the Confederacy, the total number of states in the Confederacy (according to the CSA) totaled thirteen (not the eleven usually mentioned). These thirteen states are recognized on the Confederate Battle Flag with thirteen stars.

| Battle of 1st Manassas (1861) |
Union General-in-Chief Winfield Scott advanced on the South amid high public hopes that the war would be short. Scott ordered General Irvin McDowell to advance on Confederate troops stationed at Manassas Junction, Virginia. McDowell attacked on July 21, and was initially successful, but the introduction of Confederate reinforcements resulted in a Southern victory and a chaotic retreat toward Washington by federal troops.

| Blockading the South (1861) |
Lincoln realized that if the Confederacy remained in existence, the lower tariffs the South established would mean large financial losses to the north. To prevent commerce, as well as the import of food, arms and munitions for the Southern war effort, Lincoln ordered a blockade of all Southern ports. In response to the blockade by the heavier military vessels of the north, the South built small, fast ships that could outmaneuver Union vessels.

| Battle of the CSS Virginia and USS Monitor (1862) |
In an attempt to reduce the North's great naval advantage, Confederate engineers converted a scuttled Union frigate, the U.S.S. Merrimac, into an iron-sided vessel rechristened the C.S.S. Virginia. On March 9, in the first naval engagement between ironclad ships, the Monitor fought the Virginia to a draw, but not before the Virginia had sunk two wooden Union warships off Norfolk, Virginia.

| Battle of Shiloh (1862) |
On April 6, Confederate forces attacked Union forces under General Ulysses S. Grant at Shiloh, Tennessee. By the end of the day, the federal troops were almost defeated. Yet, during the night, reinforcements arrived, and by the next morning the Union commanded the field. When Confederate forces retreated, the exhausted federal forces did not follow. Casualties were heavy -- 13,000 of 63,000 Union soldiers and 11,000 of 40,000 Confederate troops were killed, wounded, or captured.

| Battle of Fair Oaks (1862) |
On May 31, the Confederate army attacked federal forces at Seven Pines, almost defeating them; last-minute reinforcements saved the Union from a serious defeat. Confederate commander Joseph E. Johnston was severely wounded, and command of the Army of Northern Virginia fell to Robert E. Lee.
| Seven Days Battles (1862) |
Between June 26 and July 2, Union and Confederate forces fought a series of battles: Mechanicsville (June 26-27), Gaines's Mill (June 27), Savage's Station (June 29), Frayser's Farm (June 30), and Malvern Hill (July 1). On July 2, the Confederates withdrew to Richmond, ending the Peninsular Campaign.

| Harper's Ferry (1862) |
Union General McClellan defeated Confederate General Lee at South Mountain and Crampton's Gap in September, but did not move quickly enough to save Harper's Ferry, which fell to Confederate General Jackson on September 15, along with a great number of men and a large body of supplies.

| Sharpsburg (1862) |
On September 17, Confederate forces under General Lee were caught by General McClellan near Sharpsburg, Maryland. This battle proved to be the bloodiest day of the war; 2,108 Union soldiers were killed and 9,549 wounded -- 2,700 Confederates were killed and 9,029 wounded. The battle had no clear winner, but because General Lee withdrew to Virginia, McClellan was considered the victor. The battle convinced the British and French -- who were contemplating official recognition of the Confederacy -- to reserve action.

| Battle of Fredericksburg (1862) |
General McClellan's slow movements, combined with General Lee's escape and continued raiding by Confederate cavalry, dismayed many in the North. On November 7, Lincoln replaced McClellan with Major-General Ambrose E. Burnside. Burnside's forces were defeated in a series of attacks against entrenched Confederate forces at Fredericksburg, Virginia, and Burnside was replaced with General Joseph Hooker.

| Emancipation Proclamation (1863) |
In an effort to discourage Britain and France from officially recognizing the Confederacy, Lincoln released the Emancipation Proclamation. Lincoln privately acknowledged that this proclamation did nothing to actually free blacks from slavery, and was nothing more than a political tool to keep the European powers from entering the war on the side of the Confederacy. On this view, the proclamation also violated the Fifth Amendment to the Constitution. In fact, the proclamation only purported to "free" slaves in parts of the Confederacy not under control of union forces, while slaves in union-controlled territories of the CSA, and in slave states that had remained in the union, remained in slavery. Some Union generals, such as General B. F. Butler, declared slaves escaping to their lines "contraband of war," not to be returned to their masters. In response to the proclamation, there was an uproar in the north. Many union troops deserted, declaring they had been fighting to maintain the union, not to free slaves, and a riot broke out in New York City in which rioters lynched blacks on the streets.

| Battle of Chancellorsville (1863) |
On April 27, Union General Hooker crossed the Rappahannock River to attack General Lee's forces. Lee split his army, attacking a surprised Union army in three places and almost completely defeating them. Hooker withdrew across the Rappahannock River, giving the South a victory, but it was the Confederates' most costly victory in terms of casualties.

| Vicksburg Campaign (1863) |
Union General Grant won several victories around Vicksburg, Mississippi, the fortified city considered essential to the Union's plans to regain control of the Mississippi River. On May 22, Grant began a siege of the city. After six weeks, Confederate General John Pemberton surrendered, giving up the city and 30,000 men.
The capture of Port Hudson, Louisiana, shortly thereafter placed the entire Mississippi River in Union hands. The Confederacy was split in two.

| Gettysburg Campaign (1863) |
Confederate General Lee decided to take the war to the enemy. On June 13, he defeated Union forces at Winchester, Virginia, and continued north to Pennsylvania. General Hooker, who had been planning to attack Richmond, was instead forced to follow Lee. Hooker, never comfortable with his superiors, resigned on June 28, and General George Meade replaced him as commander of the Army of the Potomac. On July 1, a chance encounter between Union and Confederate forces began the Battle of Gettysburg. In the fighting that followed, Meade had greater numbers and better defensive positions. He won the battle, but failed to follow Lee as he retreated back to Virginia. Militarily, the Battle of Gettysburg was the high-water mark of the Confederacy; it is also significant because it ended Confederate hopes of formal recognition by foreign governments.

| West Virginia is Born (1863) |
Some residents of the western counties of Virginia did not wish to secede from the USA. In a move similar to the convention of Kentucky that voted to secede, a small contingent of Virginians in the western part of the state met and declared themselves the "legitimate" state government. They voted to remove their counties from control of Confederate Virginia and become their own state. With the help of union troops, and in clear violation of Article IV, Section 3 of the U.S. Constitution, West Virginia was admitted to the union June 20, 1863. West Virginia was admitted as a slave state to the union, offering more proof that the War for Southern Independence was not about slavery.

| Battle of Chickamauga (1863) |
On September 19, Union and Confederate forces met on the Tennessee-Georgia border, near Chickamauga Creek. After the battle, Union forces retreated to Chattanooga, and the Confederacy maintained control of the battlefield.

| Lincoln's Reconstruction Plan (1863) |
Lincoln announces his reconstruction plan, offering general amnesty to all white Southerners who take an oath of future loyalty and accept wartime measures abolishing slavery. Whenever 10% of the number of 1860 voters take the oath in any state, those 'loyal' citizens can then establish a state government. In early 1864 the governments of Louisiana, Arkansas, and Tennessee are reconstructed under Lincoln's 'Ten Percent Plan.' Radical Republicans are furious at the policy's leniency, so Congress refuses to recognize the governments or seat their elected federal representatives.

| Wilderness Campaign (1864) |
General Grant, promoted to commander of the Union armies, planned to engage Lee's forces in Virginia until they were destroyed. North and South met and fought in an inconclusive three-day battle in the Wilderness. Lee inflicted more casualties on the Union forces than his own army incurred, but unlike Grant, he had no replacements.

| Battle of Spotsylvania (1864) |
General Grant continued to attack Lee. At Spotsylvania Court House, he fought for five days, vowing to fight all summer if necessary.

| Battle of Cold Harbor (1864) |
Grant again attacked Confederate forces at Cold Harbor, losing over 7,000 men in twenty minutes. Although Lee suffered fewer casualties, his army never recovered from Grant's continual attacks. This was Lee's last clear victory of the war.
| Confederate Troops Approach Washington D.C. (1864) |
Confederate General Jubal Early led his forces into Maryland to relieve the pressure on Lee's army. Early got within five miles of Washington, D.C., but on July 13, he was driven back to Virginia.

| Radical Republicans pass their own Reconstruction Plan (1864) |
Disapproving of Lincoln's plans, the Republican-controlled U.S. Congress passes the Wade-Davis bill. It requires a majority of 1860 voters to take a loyalty oath, but only those who swear an 'ironclad' oath of never having fought against the Union can participate in reconstructing their state's government. Congress requires the state constitutions to include bans on slavery, disfranchisement of Confederate political and military leaders, and repudiation of Confederate state debts. Lincoln refuses to sign the bill, pocket-vetoing it.

| Sherman's Atlanta Campaign (1864) |
Union General Sherman departed Chattanooga, and was soon met by Confederate General Joseph Johnston. Skillful strategy enabled Johnston to hold off Sherman's force -- almost twice the size of Johnston's. However, Johnston's tactics caused his superiors to replace him with General John Bell Hood, who was soon defeated. Hood surrendered Atlanta, Georgia, on September 1; Sherman occupied the city the next day. The fall of Atlanta greatly boosted Northern morale.

| Sherman's March to the Sea (1864) |
General Sherman continued his march through Georgia to the sea. He declared that he planned to "make Georgia howl." In the course of the march, Sherman and his troops took intentional actions to brutalize the civilian population and enact a scorched earth policy. His men cut a path 300 miles in length and 60 miles wide as they passed through Georgia, destroying factories, bridges, railroads, and public buildings, as well as civilians' homes, livestock and crops. Union troops carried out a barbaric assault on the Southern people, raping, pillaging and plundering across Georgia.

| Fall of the Confederacy (1865) |
Transportation problems and successful blockades caused severe shortages of food and supplies in the South. Starving soldiers became less effective in battle, and the number of soldiers continued to be depleted, while the north continued to increase its number of troops by enlisting mercenaries and all foreigners who came to the U.S. Although President Jefferson Davis approved the arming of slaves as a means of augmenting the shrinking army, the measure was never put into effect.

| Chance for Reconciliation (1865) |
In the last of several secret conferences between north and south in an attempt to settle the dispute, Confederate President Jefferson Davis agreed to send delegates to a peace conference with President Lincoln and Secretary of State William Seward. However, Davis insisted on Lincoln's recognition of the South's independence as a prerequisite, and Lincoln insisted on the South agreeing to rejoin the union before any talks could take place. The conference came to nothing.

| Surrender at Appomattox Courthouse (1865) |
General Lee's troops were soon surrounded and, as Lee determined that he could not prevail, he saw no need for more bloodshed; on April 7 Grant called upon Lee to surrender. On April 9, the two commanders met at Appomattox Courthouse and agreed on the terms of surrender. Lee's men were sent home on parole -- soldiers with their horses, and officers with their side arms. All other equipment was surrendered.

| Johnson's Reconstruction Plan (1865) |
President Johnson implements his own reconstruction plan.
It offers general amnesty to those taking an oath of future loyalty, although high-ranking Confederate officials and wealthy Confederates have to petition the president for individual pardons. The plan also requires states to ratify the 13th Amendment, which prohibits slavery, and to repudiate Confederate debts. This last part violated the U.S. Constitution, in that it set requirements for statehood in the Union not enumerated in the Constitution. However, all the proposed reconstruction plans, in effect, acknowledged what Lincoln had always denied: the Southern states had successfully seceded, and were not a part of the Union. This was a back-handed recognition of the legitimacy of the Confederate States.

| Ku Klux Klan is formed (1866) |
A group of former Tennessee Confederate army officers, who were all fraternity men, in 1866 formed a convivial society to which they gave the name of Kuklos, the Greek word for circle. For alliterative purposes, the word Klan was added, and Kuklos became Kuklux or Ku Klux. The organization shortly began to emphasize "patriotism" and a "fraternity" among their fellow Southerners. It originated in the desire to keep alive the horseplay, hazing, and camaraderie of the truncated college days of the members; but these impulses morphed under Reconstruction, and the organization began to carry out acts of vigilantism and use scare tactics in an attempt to maintain a semblance of order (as they interpreted things) in their communities, as the occupying Union forces did little to protect civilians. The organization spread, or was imitated, across the South.

Rejecting the lenient reconstruction measures initiated by Presidents Abraham Lincoln and Andrew Johnson, the U.S. Congress, under the control of the Radical Republicans, passed the punitive Reconstruction Act of 1867 on March 2, over Johnson's veto. This act sought to rebuild the governments of the Southern states in the Northern mold and ensure the civil rights of the freed blacks. The members of the existing state governments in the South, made up of the leaders of the Confederacy, were removed, and the states were placed under the military rule of the U.S. Army. No one who had supported the Confederate government was allowed to vote or hold political office. As a result, the state governments were controlled by scalawags and carpetbaggers and the military rulers of the Radical Republican Congress.

The South was divided into five military districts, with a U.S. Army general in charge of each. Virginia, the first district, was commanded by Gen. John Schofield. The second district brought North and South Carolina under the command of Gen. Daniel E. Sickles, and Gen. John Pope oversaw the reconstruction of Georgia, Alabama, and Florida in the third district. The fourth district, comprising Mississippi and Arkansas, was commanded by Gen. Edward Ord, and in the fifth, Texas and Louisiana came under the control of Gen. Philip H. Sheridan. Some 200,000 U.S. soldiers were stationed throughout the South to preserve order and carry out the dictates of Congress. These first military commanders had virtually unlimited power. They removed thousands of civil officials from their jobs and actively cultivated the registration of black voters, thereby placing former slaves in position to dominate their former masters and to wring from the South what little was left after four years of devastating war. Military rule in the South lasted for 10 years, until 1877, when Rutherford B.
Hayes agreed to return the states to home rule in exchange for Southern support in his bid for the presidency. Fascinating Fact: Because of its large Unionist population and its submission to congressional demands, the State of Tennessee was the only Southern state to escape harsh reconstruction measures.

| First and Second Reconstruction Acts (1867) |
The Confederacy is divided into five military districts under the direction of military officers, supported by federal troops. Military courts can be used to try cases involving civil and property rights violations, as well as criminal trials. Southern states are forced to enact new constitutions, with the content dictated by the north. Confederate officials are barred from political participation. States must ratify the 14th Amendment in order to be re-admitted to the Union (another violation of the U.S. Constitution, as only member states can vote on amendments; non-member states cannot be counted toward the three-fourths needed for ratification of amendments to the Constitution). The Southern states initially resist, but the Second Reconstruction Act gives military district commanders the 'right' to hold state constitutional conventions. This allows the Radical Republicans and the military to violate the rights of Southern citizens, and make the South virtual slaves of the Union. President Johnson attempts to veto these actions, but Congress overrides the vetoes, leaving Johnson almost completely impotent in office.

| Scopes Monkey Trial (1925) |
The Scopes Monkey Trial has been portrayed as a battle between teaching in public schools the theory of evolution and teaching creationism. Contrary to the distortions promoted in the play 'Inherit the Wind,' this famous trial was about freedom of speech, due process of law, and then about the idea of religion in the public school system.

The Civil Rights Era was a time in American history when American blacks started to realize that conformity to traditional social mores was not in their personal long-term interest. Blacks started to demand the rights provided to them under the U.S. Constitution. The swift changes in society concerning opportunities and restrictions on blacks in business, society and public places caused an upheaval that resulted in many riots, marches, court cases and deaths. Much of the study of the turmoil has been focused on the South, even though cities such as Los Angeles, Detroit, Chicago and New York faced their own riots and racial problems. Resistance to change is a typical human trait. Resistance by the white majority to federal intrusion in what was considered by many a state domain brought turmoil to many communities. Through the struggle of those seeking to fulfill the promise of the American ideal, today anyone, regardless of ethnic, racial or religious background, has the same opportunity to succeed in the American dream.
http://www.knowsouthernhistory.net/History/
Note: A briefer, edited version of this article appeared in D.M. Wegner & J. Pennebaker (Eds.), Handbook of Mental Control. Englewood Cliffs, N.J.: Prentice-Hall, 1993.

THE FIVE DISTINCTIONS AND SEVEN PRINCIPLES OF MEMORY

By way of background, we summarize here some general principles that seem to govern the operation of the memory system, as abstracted from the research literature. Space does not permit complete documentation of each of the assertions that follow. For a thorough treatment of the cognitive psychology of memory, see the texts by Anderson (1990), Baddeley (1976, 1990), Crowder (1976), Ellis and Hunt (1989), Klatzky (1980), and Loftus and Loftus (1976).

Forms of Memory and the Classification of Knowledge

Memory is the repository of knowledge stored in the mind, but not all knowledge is alike. One important distinction is between declarative and procedural knowledge (Anderson, 1976; Winograd, 1975). Declarative knowledge is knowledge of facts, knowledge that has truth value. Procedural knowledge is knowledge of the skills, rules, and strategies that are used to manipulate and transform declarative knowledge in the course of perceiving, remembering, and thinking.

Within the domain of declarative knowledge, a further distinction can be drawn between episodic and semantic knowledge (Tulving, 1972, 1983). Episodic knowledge is autobiographical memory: such memories record a raw description of an event, but they also contain information about the spatiotemporal context in which the event took place, and the self as the agent or experiencer of the event. Semantic knowledge is generic and categorical: it is stored in a format that is independent of episodic context and self-reference. In forming episodic memories, the cognitive system draws on pre-existing world-knowledge stored in semantic memory; similarly, the accumulation of similar episodic memories may lead to the development of a context-free representation of what these events had in common.

Declarative knowledge, whether episodic or semantic in nature, can be represented in propositional format: that is, as assertions about subjects, verbs, and objects; these abstract propositions, in turn, are connected in larger networks where nodes stand for concepts (or for propositions about concepts), and links stand for the relations between concepts. By contrast, procedural knowledge can be represented in productions: that is, statements having an if-then, goal-condition-action format. Individual productions are then linked into whole production systems that accomplish some mental or behavioral task. Of course, the goals and conditions in a production system are also nodes in declarative memory. When these nodes are activated by acts of perception, memory, and thought, the corresponding productions are executed (as in the ACT* theory of Anderson, 1983a). A concrete sketch of these two representational formats is given below.

It should be noted that our focus on meaning-based propositional representations is for convenience only. A number of theorists, particularly Paivio (1971, 1986), have argued that knowledge is also represented in concrete, analog formats that preserve the perceptual structure of objects and events. Anderson (1983a) has argued for at least two forms of perception-based representations: spatial images, which preserve the spatial configurations of objects and their components (e.g., up/down, left/right, front/back); and linear strings, which preserve the temporal relations among events (e.g., first/last, before/after/in between).
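To make the propositional and production formats concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the sample propositions, and the matching rule are all invented for this example, and it is not an implementation of ACT* or any other published architecture.

```python
# Toy contrast between declarative and procedural knowledge.
# Declarative knowledge: propositions (subject-verb-object assertions)
# stored in a small network of concept nodes.
# Procedural knowledge: a production with an if (condition) -> then (action)
# format that manipulates the declarative store.
# All names and examples are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Proposition:
    subject: str
    verb: str
    obj: str

# A miniature declarative network; each proposition links concept nodes.
declarative_memory = {
    Proposition("dog", "isa", "animal"),
    Proposition("dog", "chases", "cat"),
}

def knows(subject: str, verb: str, obj: str) -> bool:
    return Proposition(subject, verb, obj) in declarative_memory

@dataclass
class Production:
    """A goal-condition-action rule that operates on declarative knowledge."""
    condition: Callable[[], bool]
    action: Callable[[], None]

# If we know that a dog is an animal, then assert that a dog is a living thing.
categorize = Production(
    condition=lambda: knows("dog", "isa", "animal"),
    action=lambda: declarative_memory.add(Proposition("dog", "isa", "living thing")),
)

# When the nodes in the condition are active, the production executes.
if categorize.condition():
    categorize.action()

for p in sorted(declarative_memory, key=lambda p: (p.subject, p.verb, p.obj)):
    print(p.subject, p.verb, p.obj)
```

Note how, as in the text, the production's condition tests the contents of declarative memory, and its action writes new declarative knowledge back into the store.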
The use of spatial image representations is illustrated by classic research on mental rotation (e.g., Shepard & Cooper, 1982) and image scanning (Kosslyn, 1980). The use of linear string representations is illustrated by work on scripts in social judgment (Schank & Abelson, 1977; Wyer, Shoben, Fuhrman, & Bodenhausen, 1985) and memory for public events (Huttenlocher, Hedges, & Prohaska, 1988). For a critique of dual-code theories of memory, see Anderson (1978) and Pylyshyn (1981).

Expressions of Memory

Memories can be expressed in a variety of ways. In free recall, the person is simply asked to remember one or more events that occurred at a particular place and time; the term "free" indicates that there are no constraints on the manner in which these items are recalled. In serial recall, the person must recall the items in the order in which they occurred. In cued recall, the person is given specific prompts or hints concerning the item(s) to be recalled -- the first letter or first syllable, a category label, or a semantic associate. In recognition, the person is asked to examine a list of items, and to distinguish between those that occurred at a specified place and time (targets, or "old" items) and those that did not (lures, distractors, or "new" items). Particularly in the case of recognition, the subjects' responses may be accompanied by ratings of their confidence that they are correct.

There are many variants and combinations of recall and recognition, but all such tests have one thing in common: they require the person to bring a memory into phenomenal awareness -- to become conscious of a past event, so that it can be described to someone else. However, there are other expressions of memory that do not require conscious recollection. Consider, for example, savings in relearning, in which the subjects show facilitation in relearning a list that had been studied sometime in the past. Significant savings are obtained regardless of whether the subject recalls or recognizes the list items. The same is true for positive and negative transfer effects. For example, if subjects study a list of words such as APPEAL, MINERAL, ELASTIC, BOULDER, and FOREST, and then are asked to complete three-letter stems with the first word that comes to mind, they are much more likely to complete the stem ELA___ with ELASTIC than with ELATED -- what is known as a priming effect. However, significant priming is obtained even in subjects who are densely amnesic for the wordlist itself. Thus, there are some expressions of memory that do not seem to require conscious recollection of a past event.

On the basis of results such as these, Schacter (1987) has drawn a distinction between two forms of memory, explicit and implicit (for similar distinctions see Jacoby and Dallas, 1981; Eich, 1984; Johnson & Hasher, 1987; Richardson-Klavehn & Bjork, 1988). Explicit memory involves the conscious recollection of some previous episode. Explicit memory tasks make clear reference to some event in the past, and ask the subject to deliberately remember some aspect of the incident. By contrast, implicit memory is demonstrated by any change in experience, thought, or action that is attributable to some past event. Implicit memory tasks do not necessarily refer to prior episodes in the subject's life, and do not require him or her to remember any experiences, qua experiences, at all. A large body of research indicates that explicit and implicit memory are dissociable in at least two senses.
First, studies of a variety of amnesic states associated with brain damage, electroconvulsive shock, general anesthesia, and hypnosis (see below) reveal that explicit memory can be impaired while implicit memory is spared. Second, elaborative processing at the time of encoding affects explicit but not implicit memory, while a change in modality of presentation at the time of test affects implicit but not explicit memory. There is some controversy about whether explicit and implicit memory reflect the operations of two independent memory systems in the brain (Roediger, 1990; Roediger, Weldon, & Challis, 1989; Schacter, 1987, 1990; Tulving & Schacter, 1990). In any case, the phenomena of implicit memory, reflecting the influence of past episodes on the performance of procedural and semantic memory tasks, as well as other forms of perceptual and language processing, comprise a clear case of the unconscious influence of a past event on current functioning (Kihlstrom, 1987, 1990; Kihlstrom, Barnhardt, & Tataryn, 1991).

Although explicit memory is epitomized by recall and recognition, and implicit memory by priming effects, the distinction should not be drawn too sharply. Every explicit memory test has its implicit memory counterpart. This relationship is clearest in the case of cued recall. A subject who has studied a list including the word ELASTIC may be cued with the stem ELA- and asked to complete it either with a word from the study list (an explicit memory test) or with the first word that comes to mind (an implicit memory test). For recognition, the subject may be presented with ELASTIC and asked either whether it was on the list (explicit memory), or whether it was presented prior to a masking stimulus (implicit memory). Even for free recall, subjects may be asked either to remember the items of the list (explicit memory) or to report whatever words come into their minds (implicit memory). More to the point, perhaps, ostensibly explicit memory tasks have an implicit memory component, and vice-versa. For example, Mandler (1980) has argued that successful recognition of an item may reflect a feeling of familiarity mediated by priming effects (and thus close to implicit memory) in the absence of actual retrieval of the episode in which the item was presented (essentially explicit memory). And subjects may strategically use their conscious recollections of list items to generate a mental set, facilitating performance on perceptual identification, stem completion, and other ostensibly implicit memory tests.

Stages of Processing

In analyzing the success or failure of any attempt at remembering (or, for that matter, at forgetting), it is convenient to divide memory processing into three stages (Crowder, 1976). Encoding has to do with the acquisition of knowledge -- in the general case, the creation of a memory trace representing some experience. Storage has to do with the retention of trace information over a period of time. Retrieval has to do with the utilization of stored information in the course of experience, thought, and action. In principle, any instance of remembering or forgetting can be attributed to processes occurring at any of these stages, alone or in combination. Thus, an event can be forgotten because it has not been encoded; because it was lost from storage during the retention interval; or because an available memory was not retrieved. A schematic illustration of this stage analysis appears below.
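As a schematic illustration only -- the probabilities are invented, and the three stages are surely not independent in reality -- the stage analysis can be caricatured as the joint success of encoding, storage, and retrieval:

```python
# Toy illustration of the three-stage analysis of remembering.
# The probabilities below are invented for the example; they are not
# estimates from any experiment.
p_encode = 0.8     # the event gets encoded as a trace
p_store = 0.7      # the trace survives the retention interval
p_retrieve = 0.6   # the surviving trace is retrieved when queried

# Remembering requires success at all three stages (assuming independence).
p_remember = p_encode * p_store * p_retrieve
print(f"P(remember) = {p_remember:.3f}")    # 0.336

# A forgotten event may have failed at any stage, alone or in combination.
print(f"P(forget) = {1 - p_remember:.3f}")  # 0.664
```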
The Encoding Stage.

Traditional theories of memory, as represented by the work of Ebbinghaus (1885), Thorndike's (1913) Law of Practice, and indeed the entire passive-association tradition of S-R learning theory, emphasize the role of repetition and rehearsal in memory encoding. However, classic studies by Craik and his colleagues (e.g., Craik & Lockhart, 1972; Craik & Tulving, 1975; Craik & Watkins, 1973) support a distinction between maintenance rehearsal and elaborative rehearsal. Maintenance rehearsal, or rote repetition, maintains items in an active state; elaborative rehearsal links new items to pre-existing knowledge. These experiments, and many others, illustrate the elaboration principle (Anderson & Reder, 1979): The probability of remembering an event is a function of the degree to which that event is related to pre-existing knowledge during processing.

The elaboration principle applies to the processing of individual events; but memory is also improved if we connect individual events to each other. This effect is illustrated by other classic studies on the role of associative clustering, category clustering, or subjective organization (Bower, 1970b; Mandler, 1967, 1979). That is, list items tend to be reorganized in memory, so that items which are associatively or conceptually related tend to be recalled together regardless of their order of presentation. Subjective organization is a similar phenomenon, except that the order of recall tends to be determined by an image or narrative that is idiosyncratic to the subject, rather than by widely shared semantic relationships. All three phenomena illustrate the organization principle: The probability of remembering an event is a function of the degree to which that event was related to other events during processing. The difference between elaborative and organizational processing corresponds to the distinction between item-specific information, which increases the distinctiveness of each item, and relational information, which highlights the similarities between items (Hunt & Einstein, 1981).

The Storage Stage.

Assuming that a memory trace has been adequately encoded, it is now available for use. So long as attention is devoted to the item, it remains in a high state of readiness, and is extremely likely to be retrieved; when the trace is no longer an object of attention, the probability of successful retrieval progressively diminishes. This empirical fact, known since Ebbinghaus, may be summarized as the time-dependency principle: The probability of remembering an event is a negative function of the length of time between encoding and retrieval. Of course, there are instances in which knowledge is preserved at remarkably high levels over extremely long periods of time, raising the question of a "permastore" (Bahrick, 1984).

In general, there are two accounts of what happens over the retention interval. One view, which may be attributed to Ebbinghaus (1885) and Thorndike (1913), emphasizes the passive decay of unrehearsed memories, just as footprints are washed away by wind and tide. Another view, which forms the basis for the interference tradition in memory, asserts that other items, especially those newly encoded during the retention interval, weaken the target memory traces, or otherwise compete with their retrieval. Interference is dramatically illustrated in the fan effect, in which increases in the number of facts associated with a concept increase the time required to retrieve any one of these facts.
Although there is some evidence for trace decay, and even for the actual destruction of memory traces once they have been encoded (Loftus & Loftus, 1980), the chief cause of forgetting appears to be some sort of proactive or retroactive interference. The implication of interference theory is that, once a trace has been consolidated in memory, its storage is essentially permanent.

The Retrieval Stage. Assuming that a memory trace has been adequately encoded, and has been preserved over the retention interval, it must be retrieved in order to answer a query or to be used in other information-processing functions. However, memory fluctuates from trial to trial. For example, Tulving (1964) presented subjects with a list of words, followed by a series of memory tests, with no further opportunity to study. The number of items recalled remained essentially constant from trial to trial (about 50% of the original list). However, the exact items recalled varied: an item remembered on one trial might be forgotten on the next, and vice-versa. This finding illustrates the distinction between availability and accessibility (Tulving & Pearlstone, 1966): items that are available in memory may not be accessible on any particular attempt at retrieval. To some extent, accessibility is affected by encoding and storage factors: elaborate, organized memories are more reliably accessible than those that are not; and recent memories are more reliably accessible than remote ones. However, accessibility is also determined by factors present at the time of retrieval.

One important determinant of accessibility is the amount of cue information supplied with the query. Consider a comparison of three measures of episodic memory: as a general rule, free recall tests produce less memory than cued recall tests (Tulving & Pearlstone, 1966), while recognition tests produce the most. This is to be expected on the basis of the amount of information supplied with the retrieval cue. In free recall, the cue ("What were the words on the list you learned?") is very impoverished: at best, it specifies only the spatiotemporal context of the to-be-remembered event; in cued recall, additional information is supplied about the nature of the target event ("What were the animal names on the list?"); in recognition, the cue is a copy of the event itself ("Was one of the words LION or BEAR?"). Such comparisons yield the cue-dependency principle (Tulving, 1974): The probability of remembering an event increases with the amount of information supplied by the retrieval cue. But effective retrieval cues must contain the right kind of information, as well as the right amount. Thus, the word AMBER, studied in a list of words including ORANGE and RED, may be retrieved when cued by the category COLOR, but not when cued by the category FIRST NAME OF A GIRL OR WOMAN. This finding illustrates the encoding specificity principle (Tulving & Thomson, 1973): The probability of remembering an event is a function of the extent to which cues processed at the time of encoding are also processed at the time of retrieval. Encoding specificity appears to underlie the phenomenon of state-dependent retention, in which psychoactive drugs such as alcohol or barbiturates are administered during encoding or retrieval: in these cases, memory is best when there is a match between the state in which the material was studied and the state in which memory is tested (for a review, see Eich, 1980, 1989).
Similar effects have been found for environmental setting (e.g., Smith, 1988) and emotional state (e.g., Eich & Metcalfe, 1989). Such "context-dependent memory" effects are themselves cue-dependent: they are typically found with free-recall tests, and only rarely on tests of cued recall and recognition (Eich, 1980). This suggests that contextual information is relatively weak, and can be swamped by other cues (Eich, 1980; Kihlstrom, 1989; Kihlstrom, Brenneman, Pistole, & Shor, 1985). Students taking multiple-choice exams are not aided by being seated in the same room in which they heard the lecture (and, in any event, much of the test concerns textbook material, which presumably was encoded in the library or dormitory). Nevertheless, such context effects do illustrate the importance of congruence between encoding and retrieval conditions, which is what the encoding specificity principle is all about.

Memory for particular events is importantly determined by our expectations and beliefs, represented as generic knowledge structures known as schemata. The first to appreciate this point was Bartlett (1932), in his attack on the associationistic tradition represented by Ebbinghaus and Thorndike. The important role played by organized pre-existing knowledge illustrates the schematic processing principle (Hastie, 1980): The probability of remembering an event is a function of the degree to which that event is congruent with pre-existing expectations and beliefs. It turns out, however, that the precise relationship between event and schema is important. Some events are schema-congruent, meaning that they would be expected by the schema in place; others are schema-incongruent, or counterexpectational; still others are schema-irrelevant, meaning that they do not bear on the schema one way or the other. Although considerable research converges on the conclusion that schema-congruent events are remembered better than schema-irrelevant ones, Hastie and his colleagues (e.g., Hastie & Kumar, 1979; Hastie, 1980, 1981) have pointed out that schema-incongruent items are remembered best of all. The U-shaped function relating schema-congruence and memorability appears to find its explanation in two different processes. Schema-incongruent events, because of their surprise value, receive extra processing at the time of encoding, as the perceiver tries to take account of them. And at the time of retrieval, the subject can draw on the schema itself to generate cues that will help gain access to schema-congruent events. Schema-irrelevant events enjoy neither of these advantages, and thus are poorly remembered.

Bartlett's view of memory as schema-driven lies at the foundation of his view of remembering as reconstructive rather than reproductive. Just as perceiving an object is sometimes more like painting a picture than inspecting a photograph (Neisser, 1967, 1976), so remembering an event is more like writing a book than retrieving one from the shelf. Some evidence for the reconstructive nature of remembering is provided by Bransford and Franks' (1971) studies of sentence memory, in which subjects falsely recognize sentences whose meanings are consistent with those that they actually studied; and by studies by Loftus (1978) and others on the effects of leading questions and other misinformation on eyewitness testimony.
Although Loftus' notion that post-event misinformation overwrites, and replaces, event information in memory has been strongly challenged (e.g., McCloskey & Zaragoza, 1985), nothing contradicts the notion that memory can be misled, confused, and biased by changes in perspective and other events occurring after the fact. These errors, confusions, and biases illustrate the reconstruction principle: A memory of an event reflects a blend of information retrieved from a specific trace of that event with knowledge, expectations, and beliefs derived from other sources.

These five distinctions -- between declarative and procedural knowledge, episodic and semantic memory, explicit and implicit expressions, the stages of processing, and availability and accessibility -- and seven principles -- elaboration, organization, time-dependency, cue-dependency, schematic processing, encoding specificity, and reconstruction -- provide a sort of user's manual for the human memory system. We will have many occasions to observe their operation as we examine the prospects for the self-regulation of memory.

MNEMONICS AND MNEMOTECHNICS

Given these principles, it would seem that the prospects for the self-control of memory functioning would be relatively good, at least on the positive end. There are things we can do to promote remembering and prevent forgetting: for example, elaborating and organizing the material at the time of encoding, or supplying sufficient and appropriate cues at the time of retrieval. Since the time of the ancient Greeks, these sorts of strategies have been codified in the form of a set of mnemonic devices, or techniques to aid memory.

The history of mnemonic devices, from ancient times through the Renaissance, has been documented authoritatively by Yates (1966). In ancient Greece and Rome, when parchment was expensive and printing unknown, some system of memorizing was required by poets and orators, who had to deliver long addresses with a high degree of accuracy. Along with invention, disposition, elocution, and pronunciation, memory was one of the five aspects of rhetoric defined by Cicero in his De oratore of 55 B.C. The classical system of memory aids is commonly attributed to the poet Simonides of Ceos, who dramatically demonstrated their use by identifying the victims of a disaster through his knowledge of where they had been sitting at a banquet table. Simonides relied on the mnemonic of places and images, by which familiar places were selected as storage spaces for the images representing the items we wish to remember. The techniques of artificial memory were referred to in Aristotle's treatise De memoria et reminiscentia (4th century B.C.), and codified in the anonymous Ad Herennium ("To Herennius") of 82 B.C. The author of Ad Herennium, commonly (but wrongly, says Yates, 1966) thought to be Cicero himself, presents a detailed set of rules for the selection of places and images for memorizing. Ad Herennium formed the basis of all subsequent treatments of the ars memorativa, including Cicero's De oratore and Quintilian's Institutio oratoria (1st century A.D.). The mnemotechnics of Ad Herennium were revived in medieval Europe by Albertus Magnus (in De bono) and by Thomas Aquinas (in Summa Theologiae) -- both seeing artificial memory as an aspect of the virtue of prudence. In 1596, the Jesuit missionary Matteo Ricci brought the system of places and images to China, as an example of the powers to be acquired with conversion to Christianity (Spence, 1984).
The method of places and images also forms the basis of the most popular mnemonics of the modern era: the method of loci, the pegword technique, the link method, bizarre imagery, and the keyword system (Bellezza, 1981; for popular treatments, see Cermak, 1976; Herrmann, 1988; Higbee, 1977; Lorayne & Lucas, 1974). The method of loci is the "places and images" technique in pure form: subjects mentally associate each item to be remembered with a familiar spatial location on a mental map. In the pegword system, subjects first memorize a simple rhyming scheme -- ONE-BUN, TWO-SHOE, THREE-TREE, etc. -- and then associate an image of each to-be-remembered item with the concrete objects referred to. In the link method, the pegwords are dropped, and subjects are instructed to associate adjacent items together in an interactive image -- the more unusual, even bizarre, the better. Finally, in the keyword mnemonic for learning foreign-language vocabulary, a foreign word is represented by a substitute word in the native language, and the two words are associated by means of visual imagery.

There are also verbal mnemonic systems, such as the use of the acronym ROY G BIV (in the United States) or the sentence RICHARD OF YORK GAINS BATTLES IN VAIN (in England) to remember the colors of the visible spectrum in order of wavelength; EVERY GOOD BOY DOES FINE for the lines of the treble clef in music, and GOOD BOYS DO FINE ALWAYS (or GOOD BOYS DESERVE FUDGE ALWAYS, depending on your childhood piano teacher) for the corresponding lines in the bass clef; the spaces are F-A-C-E for the treble clef and ALL COWS EAT GRASS for the bass. Such systems are familiar in the training of health-care providers, who must often memorize ordered lists of things like bones and muscles. SOME CRIMINALS HAVE UNDERESTIMATED ROYAL CANADIAN MOUNTED POLICE gives the bones of the upper limbs (scapula, clavicle, humerus, ulna, radius, carpals, metacarpals, and phalanges); LAZY ZULU PURSUING DARK DAMOSELS gives the stages of cell division (leptotene, zygotene, pachytene, diplotene, and diakinesis); LAZY FRENCH TART LYING NAKED IN ANTICIPATION gives the order of cranial nerves in the superior orbital fissure of the skull (lacrimal, frontal, trochlear, lateral, nasociliary, internal, and abducens). By the way, the racism and sexism of these mnemonics are in the originals, and it is probably not an accident that these sentences were originally designed to serve the professional advancement of white men. There is probably an entire sociological thesis to be written on the role of racial, ethnic, gender, and sexual categories in mnemonic devices. Finally, the following rhyme offered by Baddeley (1976) gives pi to 20 decimal places, each digit given by the number of letters in the word (a short decoding sketch appears below):

Pie.
I wish I could determine pi,
Eureka cried the great inventor,
Christmas pudding, Christmas pie,
Is the problem's very center.

One important feature of the modern approach, however, is that it puts folk wisdom and rhetorical claims to empirical test (Bower, 1970a; Higbee, 1978, 1988; McDaniel & Pressley, 1987; Morris, 1978; Pressley & Mullally, 1984; Pressley & McDaniel, 1988; Roediger, 1980; Wood, 1967). So, for example, Bugelski, Kidd, and Segmen (1968) showed that the pegword system actually works, except under those conditions where subjects are not given enough time to form appropriate images.
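The letter-count rule behind such rhymes is mechanical enough to check with a few lines of code. A minimal sketch in Python (assuming the common convention that a 10-letter word encodes the digit 0):

```python
# Decode a letter-count mnemonic: each word contributes one digit, equal
# to the number of letters in the word (punctuation ignored); by the
# usual convention, a 10-letter word would encode the digit 0.

def decode(rhyme: str) -> str:
    digits = []
    for word in rhyme.split():
        n = sum(ch.isalpha() for ch in word)  # count letters only
        if n:                                  # skip stray punctuation
            digits.append(str(n % 10))
    return "".join(digits)

rhyme = """Pie. I wish I could determine pi,
Eureka cried the great inventor,
Christmas pudding, Christmas pie,
Is the problem's very center."""

print(decode(rhyme))  # 314159265358979323846 -- pi to 20 decimal places
```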
Bower and Reitman (1972) showed that the same pegs could be used to memorize several different lists of words, so long as each new image was included in a compound of previously formed images. Under these conditions of progressive elaboration, the pegword system and the method of loci were equally effective. Roediger (1980) confirmed this finding, and showed further that the loci, pegword, and link methods were superior to rote rehearsal or to the formation of mental images of the objects represented by individual words. Wollen, Weber, and Lowry (1972) showed that while mnemonically effective images interact, they need not be bizarre; in fact, bizarre images may even interfere with memory (Collyer, Jonides, & Bevan, 1972).

Regardless of these qualifications, ample testimony to the effectiveness of mnemonic techniques is provided by single-case studies of amateur and professional mnemonists (Brown & Deffenbacher, 1975; Gordon, Valentine, & Wilding, 1984; Wilding & Valentine, 1985). Shereshevskii (S.), the subject of the classic study by Luria (1968), possessed a remarkable talent for synesthesia, which apparently allowed him to form extremely rich, distinctive images of to-be-remembered material. He also made extensive use of the method of loci, drawing on his detailed knowledge of Moscow, and employed a variety of verbal and semantic strategies, including the grouping of nonsense syllables to produce pseudo-Russian letter strings. Other mnemonists who have been studied, including W.J. Bottell, known in England as "Datas" (1904) and the model for "Mr. Memory" in Alfred Hitchcock's The 39 Steps, relied heavily on visual imagery. By contrast, A.C. Aitken, a mathematician at the University of Edinburgh with a reputation as a lightning calculator, employed verbal recoding as well as rhythm (Hunter, 1962, 1977): he was able to recall the value of pi to 1,000 decimal places! Subject SF, an athlete studied by Chase and Ericsson (1982), learned to memorize strings of up to 81 digits, after only one presentation trial, by converting chunks of digits into running times that were meaningful to him. On the other hand, Subject VP, a store clerk studied by Hunt and Love (1972), generally neglected the classic mnemonic techniques, relying instead on verbal recoding strategies and rote memorization.

Interest in mnemonic devices continues, especially among those concerned with the treatment of individuals with learning and memory disorders: mentally retarded and learning disabled children and adults, the aged, and the brain-damaged; mnemonics are also popular with teachers of foreign-language vocabulary. An extremely interesting aspect of research on mnemonics concerns cross-cultural differences in memory, particularly comparisons between literate and preliterate cultures (Cole & Gay, 1972; Cole, Gay, Glick, & Sharp, 1971; Wagner, 1978a, 1978b).

Clearly, the effectiveness of these mnemonic devices illustrates the principles of encoding and retrieval discussed earlier. The method of loci and the pegword system connect list items to things that are already known -- in our terms, they promote elaboration. Similarly, the images involved in the pegword and link systems provide for elaborate, rich, and distinctive encodings of single items. The organizational principle is illustrated by the pegword system, in which the sound of the number's name serves as a cue for the pegword; the pegword then provides a contextual cue for retrieval of the list item itself -- as does the familiar place in the method of loci.
In the link method, successive items are grouped together in images -- another instance of organizational processing. Perhaps the relative inefficacy of bizarre images reflects the difficulty of retrieving unusual or unfamiliar interitem links. Finally, the success of interactive images in bringing list items to mind illustrates the principles of cue-dependency and encoding specificity in retrieval.

At the same time, the utility of mnemonic devices would seem to be limited. Perhaps reflecting their origins in the needs of poets and orators with limited access to paper and printing, they are best at preserving the order in which items were presented. When order need not be preserved, their advantage over other encoding strategies is reduced (Roediger, 1980). More important, some mnemonic devices require enormous expenditures of cognitive effort on the part of subjects, both in memorizing the mnemonic and in memorizing the material that the mnemonic is supposed to help reproduce. Consider the remark of one of Matteo Ricci's pupils, as reported by Spence (1984): "It takes a lot of memory to remember these things". Is the "Zulu" sentence for bones or nerves? And what was that "pi" jingle (and who needs to know that value to more than four digits, anyway?)

HYPERMNESIA

Even without deliberate mnemonics, memory can improve with continued attempts at retrieval. When subjects are tested repeatedly on the same material, with no further opportunity for study, net recall sometimes increases from trial to trial -- a phenomenon known as hypermnesia (Erdelyi & Becker, 1974). The effect is enhanced by extending the time allowed for recall (Roediger & Thorpe, 1978) and increasing practice with retrieval (Roediger & Payne, 1982). Similarly, early findings from Erdelyi's laboratory yielded hypermnesia when pictures, but not words, served as the to-be-remembered items (Erdelyi & Becker, 1974). This situation led Erdelyi (1982, 1984, 1988) to suggest that pictorial materials were privileged with respect to hypermnesia, and to speculate that imaginal processing is an important mediator of the effect. However, some experiments (e.g., Belmore, 1981; Erdelyi, Buschke, & Finkelstein, 1977; Roediger & Thorpe, 1978) have obtained hypermnesia for verbal materials, so the difference between verbal and nonverbal representations, or verbal vs. nonverbal processing, cannot be critical.

A series of experiments by Mross, Klein, and Kihlstrom have shed more light on the conditions under which hypermnesia for words, and perhaps hypermnesia in general, occurs (Klein, Loftus, Kihlstrom, & Aseron, 1989; Mross, Klein, Loftus, & Kihlstrom, 1991). Mross et al. (1991), replicating the procedures of Erdelyi and Becker (1974), found significant hypermnesia for both pictures and words, although the magnitude of the effect was greater in the former case. In a second study, the stimulus materials shifted from words and pictures representing concrete objects to trait adjectives representing highly abstract personality descriptors. Following the "levels of processing" paradigm of Craik and Lockhart (1972), independent groups of subjects studied the items under one of four conditions: orthographic, phonemic, semantic, and self-referent; they then completed a series of two or three recall trials without any further study of the list. Significant hypermnesia was observed only in the self-referent condition. A third study replicated this finding, substituting an imagery task for the phonemic condition of Experiment 2. A fourth experiment compared just the phonemic and self-referent conditions, and found evidence of hypermnesia only in the latter. A final experiment by Klein et al.
(1989) showed that pleasantness ratings (an elaborative task involving the processing of single items) increase the intertrial recovery component of hypermnesia, while category sorting (an organizational task involving the processing of interitem associations) decreases the intertrial forgetting component. Hypermnesia results from a net advantage of intertrial recovery over intertrial forgetting. Thus, both elaborative and organizational processing promote hypermnesia, though the end is accomplished by different means in the two cases.

The findings of these experiments speak to a number of theoretical controversies concerning the nature of the hypermnesia effect. For example, Erdelyi (1982, 1984, 1988) has suggested that imaginal (nonverbal) processing is critical for the occurrence of hypermnesia. Mross et al. (1991) obtained hypermnesia for words in four separate experiments, and Klein et al. (1989) added a fifth, even though no imagery instructions were given to the subjects. Of course, it might be the case that the subjects spontaneously engaged in such a recoding process. However, the use of highly abstract personality trait terms as stimuli in the work of Mross et al. (1991), and the failure of explicit imagery instructions to produce hypermnesia, diminish this possibility to a considerable extent. The effects of imaginal processing may be mediated by a more general effect of elaborative processing at the time of encoding. Imaginal processing may be a highly effective way to produce elaborate encodings, but other processing tasks could be equally or more effective in this regard (Belmore, 1981; Klein et al., 1989). In the final analysis, Mross' and Klein's experiments indicate that the amount of hypermnesia observed with words, at least, is a function of the manner in which they are processed: self-referent processing yielded hypermnesia, while orthographic, phonemic, and semantic processing did not. Elaborative processing promotes intertrial recovery, while organizational processing prevents intertrial forgetting. These results join those of others who have found effects of encoding variables on hypermnesia within an intentional-learning paradigm -- although they differ in that significant hypermnesia was not obtained in the semantic condition of Experiment 2 (Belmore, 1981; Roediger, Payne, Gillespie, & Lean, 1982).

On the other hand, Roediger and his colleagues have argued that retrieval factors are critical in producing hypermnesia. Roediger (1982; Roediger et al., 1982; see also Roediger & Thorpe, 1978) noted that cumulative recall functions, of which hypermnesia could be considered a special case (in which intertrial recovery exceeds intertrial forgetting), have the property that the higher the asymptote of recall, the more slowly that asymptote is approached. Thus, according to their argument, hypermnesia is more likely to be shown in cases where initial levels of recall are high. Pictures generally show higher initial recall than words; and words subjected to imaginal or elaborative encoding show higher initial recall than those that are not. However, they argue that hypermnesia is not due to encoding conditions per se; rather, any condition resulting in high initial levels of recall would have the same effect. Thus, they showed that high levels of cumulative recall -- their characterization of hypermnesia -- are more likely to be obtained on a semantic memory task involving the generation of instances from large rather than small categories (Roediger et al., 1982).
On the other hand, Mross et al. (1991, Experiment 4) arranged their stimulus materials in such a way as to reverse the normal relation between level of processing and level of recall. Paralleling the set-size manipulation of Roediger et al. (1982), four times as many items were presented for a phonemic judgment as for a semantic one. More phonemic than semantic items were recalled on the initial trial, and overall, indicating that asymptotic levels of recall were higher in the phonemic condition. Nevertheless, no hypermnesia was observed in the phonemic condition. These results are not consistent with the hypothesis that level of recall determines the extent of hypermnesia; but they are consistent with the hypothesis that encoding factors play an important role.

Of course, as Erdelyi (1982) argued and Roediger and Challis (1988) now agree, cumulative recall is not the same as hypermnesia (see also Payne, 1986, 1987). Almost any set of conditions will show an incremental recall function, reflected in the appearance of new items over trials, but not all conditions yield hypermnesia, reflected in a net increase in recall from trial to trial. In terms of the usual repeated-testing procedure, cumulative recall is sensitive only to intertrial recovery. The problem is that intertrial recovery is necessary, but not sufficient, for hypermnesia to occur. What is needed additionally is for intertrial recovery to exceed intertrial forgetting. Intertrial forgetting is the key to hypermnesia, and cumulative recall functions ignore this factor altogether.

In any event, the findings of Roediger et al.'s set-size experiment may be amenable to an alternative explanation in terms of encoding rather than retrieval processes. In the specific categories employed by Roediger et al., category size seems to have been confounded with the extent of interitem associations. The greatest cumulative recall was observed for sports and the least for U.S. presidents, with birds falling somewhere in between. Accessing one item from the sports category, then, would be very likely to lead to the retrieval of another item from that category, and so on. In addition, more items in the sports category could serve as subject-generated retrieval cues, suggesting sports that have not yet been retrieved. Thus, the important factor is not the number of items in the set, but rather the richness of the associative network linking the items to each other. Consistent with this point, research by Klein et al. (1989) indicates that tasks promoting well-organized and richly elaborated encodings are powerful determinants of hypermnesia for verbal material. In their study, reliable hypermnesia for word lists was found with tasks encouraging either elaborative or organizational processing at encoding; and when encoding conditions encouraged both elaborative and organizational processing, more hypermnesia was found than for either type of processing alone. It should be recalled that the recovery of previously unrecalled items is ubiquitous in multitrial experiments; thus, cumulative recall always increases across trials, while hypermnesia occurs only in those instances where intertrial recovery exceeds intertrial forgetting. In the final analysis, Klein et al. found that both elaborative and organizational activity contributed to hypermnesia: elaborative activity promotes intertrial recovery, while organization prevents intertrial forgetting.
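In the repeated-testing procedure, these components reduce to simple set operations over successive recall protocols. A minimal sketch with made-up recall data (the items and counts are hypothetical, chosen so that cumulative recall rises on every trial while net recall does not):

```python
# Intertrial recovery, intertrial forgetting, net change, and cumulative
# recall in a repeated-testing (hypermnesia) procedure. Each trial's
# protocol is the set of items recalled; the data here are made up.

trials = [
    {"lion", "bear", "wolf", "crow"},           # trial 1
    {"lion", "bear", "crow", "hawk", "deer"},   # trial 2
    {"lion", "crow", "hawk", "deer", "mole"},   # trial 3
]

cumulative = set(trials[0])
for t, (prev, curr) in enumerate(zip(trials, trials[1:]), start=2):
    recovered = curr - prev       # items gained since the last trial
    forgotten = prev - curr       # items lost since the last trial
    cumulative |= curr            # cumulative recall counts gains only
    net = len(recovered) - len(forgotten)  # hypermnesia iff net > 0
    print(f"trial {t}: +{len(recovered)} recovered, "
          f"-{len(forgotten)} forgotten, net {net:+d}, "
          f"cumulative {len(cumulative)}")
```

Run on these data, trial 2 shows hypermnesia (net +1) while trial 3 shows none (net 0), yet cumulative recall rises on both trials -- the dissociation described in the text.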
In summary, studies of hypermnesia offer a new perspective on the enhancement of memory, by showing that items, once lost, are not necessarily gone forever. Continued efforts at retrieval will almost always yield previously forgotten material, even in the absence of changes in the cue information provided to the subject (such as are accomplished by shifts from free recall to cued recall or recognition). However, there are clear limits on the magnitude of this effect. Under ordinary circumstances, the number of initially forgotten items that are subsequently recovered is equalled, or even surpassed, by the number of initially remembered items that are subsequently forgotten. Thus, in many cases, net recall remains constant at best; more likely it decreases, producing the familiar phenomenon of time-dependent forgetting. But just as there are strategies that can be employed to promote good initial recall, there are strategies that enhance intertrial recovery and diminish intertrial forgetting. Item gains are enhanced by elaborative activity, which produces a rich, distinctive memory trace that is more likely to be contacted by search and retrieval processes. Similarly, item losses are reduced by organizational activity, which focuses on the similarities among items, and thus enhances the likelihood that recollection of one item will serve as a cue for the retrieval of another.

The popular reputation of hypnosis as a means of transcending one's normal voluntary capacity -- as reflected in the "generation of hypers" noted by Marcuse (1959) -- coupled with the fact that hypnotic suggestions can produce profound alterations in cognitive functioning, has led some investigators to suggest that it can be employed to enhance memory, over and above whatever effects can be achieved by the use of mnemonic devices and other strategies available to nonhypnotized subjects. The hypnotic recovery of memories was employed by Breuer and Freud (1893-1895) in their Studies on Hysteria, and was revived in World War I and again in World War II as an adjunct to brief hypnotherapy for war neurosis (Grinker & Spiegel, 1945; Watkins, 1949; for a particularly vivid portrayal of this technique, see John Huston's 1944-1945 propaganda film, Let There Be Light). More recently, hypnotic techniques have been employed in "past lives therapy", an occult practice in which patients search for the source of their present troubles in the sins and misfortunes of their previous existences; and in forensic situations, where witnesses and victims, and even suspects and defendants, may be hypnotized in the process of gathering evidence in civil and criminal cases.

Hypnosis and Learning

Although most attention in this area focuses on the effects of hypnosis on the retrieval of memories initially encoded in the normal waking state, a number of studies have examined the question of whether learning itself can be enhanced through hypnosis. Certainly this line of research received some impetus from the reports of many 19th-century authorities that mesmerized or hypnotized subjects gave evidence of the transcendence of normal voluntary capacity: changes ranging from increases in verbal fluency and physical strength to clairvoyance. Nevertheless, an early study by Gray (1934) answered the question only weakly in the affirmative: a small group of poor spellers improved their spelling ability somewhat when the learning occurred in hypnosis.
Similarly, Sears (1955) reported that subjects who learned Morse Code in hypnosis made fewer errors than those whose learning took place under nonhypnotic conditions. More dramatic results were reported in a series of studies by Cooper and his associates, employing hypnotic time distortion and hallucinated practice. Briefly, subjects were asked to hallucinate engaging in some activity, and at its conclusion were given suggestions that a long interval had passed (e.g., 30 minutes) when the actual elapsed time had been considerably shorter (e.g., 10 seconds). The idea is that this expansion of subjective time effectively increases the amount of study, or practice, that could be performed per unit of objective time. Cooper and Erickson (1950, 1954) reported, for example, that hallucinated practice led to marked improvement in a subject's ability to play the violin. A more systematic study by Cooper and Rodgin (1954), concerned with the learning of nonsense syllables, also gave positive results. Unfortunately, there were no statistical tests of the differences between treatment conditions; even so, the effects of hypnotic time distortion and hallucinated practice were seen only on the immediate test; the superiority of hypnosis virtually disappeared at retest, 24 hours later. Another study, by Cooper and Tuthill (1954), found no objective improvements in handwriting with hallucinated practice in time distortion, even though the subjects generally perceived themselves as having improved. A more recent experiment also yielded negative results (Barber & Calverley, 1964). On the other hand, Krauss, Katzell, and Krauss (1974) reported positive findings in a study of verbal learning: hypnotized subjects were allotted three minutes to study the list, but were told they had studied it for 10 minutes. Unfortunately, Johnson (1976) and Wagstaff and Ovenden (1979) failed to replicate these results: in fact, their subjects did worse under time distortion than in control conditions. In the most comprehensive study to date, St. Jean (1980) repeated the essential features of the Krauss et al. design, paying careful attention to details of subject selection and the wording of the suggestion. Although the highly hypnotizable subjects reported that they experienced distortions of the passage of time, as suggested, there were no effects on learning. The combination of time distortion and hallucinated practice is ingenious, but of course it makes some assumptions that are not necessarily valid. First, can mental practice substitute for actual physical practice? In fact, there is considerable evidence for this proposition (Feltz & Landers, 1983). Because hypnotic hallucinations are closely related to mental images, there is no reason to think that hallucinated practice might not be effective as well. But time distortion is another matter: the assumption is that the hallucination of something is the same as the thing itself, and there is no reason to think that this is the case. In fact, such an assumption flies in the face of a wealth of literature on hypnotic hallucinations, which shows that they are inadequate substitutes for the actual stimulus state of affairs (Sutcliffe, 1960, 1961; Kihlstrom & Hoyt, 1988). Thus, while hypnosis, and hypnotic suggestion, can produce distortions in time perception, just as they can produce other distortions in subjective experience, these distortions do not necessarily have consequences for learning and memory (St. Jean, 1989). 
A rather different approach to this problem has been taken by investigators who have offered subjects direct suggestions for improved learning, without reference to time distortion or hallucinated practice (e.g., Fowler, 1961; Parker & Barber, 1964). Unfortunately, interpretation of such studies is made difficult by a number of methodological considerations (for a review of methodological problems in hypnosis research, see Sheehan & Perry, 1977). For example, the induction of hypnosis might merely increase the motivation of subjects to engage in the experimental task (Barber, 1969; London & Fuhrer, 1961), independent of any effects of hypnosis per se. Moreover, subjects may respond to the demand characteristics of such an experiment by holding back on their performance during baseline tests and other nonhypnotic conditions, thus manifesting an illusory improvement under hypnosis (Scharf & Zamansky, 1963; Zamansky, Scharf, & Brightbill, 1964). Some of these problems have been addressed by a special paradigm introduced by London and Fuhrer (1961), in which hypnotizable subjects are compared to objectively insusceptible subjects who have been persuaded that they are responsive to hypnosis. To this may be added procedures adopted by Zamansky and Scharf (Scharf & Zamansky, 1963; Zamansky, Scharf, & Brightbill, 1964) to evaluate order effects driven by expectancies generated by the comparison of hypnotic and nonhypnotic conditions. Studies of muscular performance using the unadorned London-Fuhrer design have generally found that when subjects are given hypnotic exhortations for enhancement, equivalent performance levels are shown by hypnotizable subjects and by insusceptible subjects who believe that they are hypnotizable (e.g., Evans & Orne, 1965; London & Fuhrer, 1961). Similar results have been obtained for measures of rote learning (London, Conant, & Davison, 1965; Rosenhan & London, 1963; Schulman & London, 1963). Thus, the available evidence suggests that hypnotic suggestions do not enhance the learning process. However, it should be noted that most of these studies have used an hypnotic induction based on suggestions for relaxation and sleep, which might interfere with both motor performance and learning. Relaxation is not necessary for hypnosis, however (Banyai & Hilgard, 1976), and it remains possible that different results would be obtained if suggestions for an active, alert form of hypnosis were given instead. Moreover, suggestions that capitalize on the hypnotized subject's capacity for imaginative involvement may prove to be better than mere exhortations (Slotnick, Liebert, & Hilgard, 1965). Thus, the issue of the hypnotic enhancement of learning and performance should not be considered closed.

Hypnosis and Remembering

Laboratory studies of hypnotic hypermnesia have a history extending back to the beginnings of the modern period of hypnosis research (for other reviews, see Erdelyi, 1988; Smith, 1983). For example, Young (1925, 1926) taught his subjects lists of nonsense syllables in the normal waking state, and then subsequently tested recall in and out of hypnosis, each time motivating subjects for maximal recall. There was no advantage of hypnosis over the waking test. Later experiments employing nonsense syllables also failed to find any effect of hypnosis (Baker, Haynes, & Patrick, 1983; Barber & Calverley, 1966; Huse, 1930; Mitchell, 1932). By contrast, studies employing meaningful linguistic or pictorial material have sometimes shown hypermnesia effects.
Stalnaker and Riddle (1932) tested college students on their recollections of prose passages and verse that had been committed to memory at least one year previously. Testing in hypnosis, with suggestions for hypermnesia, resulted in a significant enhancement over waking recall. These findings have been confirmed by other investigators who tested memory for prose, poetry, filmed material, and real-world memories (DePiano & Salzberg, 1982; Hofling, Heyl, & Wright, 1971; Young, 1926). In the first direct comparison of nonsense with meaningful material, White, Fox, and Harris (1940) found that hypermnesia suggestions resulted in a striking improvement in memory for poetry and travelogue material, but had no effect on memory for nonsense syllables. Similar results were obtained by Rosenthal (1944) and by Dhanens and Lundy (1975), who compared nonsense syllables with poetry and with prose, respectively.

On the basis of this kind of evidence, it might be concluded that laboratory studies tend to support the conclusions from uncontrolled case studies. However, it should be noted that the effects achieved in the laboratory, while sometimes statistically significant, are rarely dramatic. Moreover, it is fairly clear that any gains obtained during hypnosis are not attributable to hypnosis per se, but rather to normal hypermnesia effects of the sort described earlier. Thus, at least four investigations (Nogrady, McConkey, & Perry, 1985; Register & Kihlstrom, 1987, 1988; Whitehouse, Dinges, Orne, & Orne, 1991), adapting the hypermnesia paradigm introduced by Erdelyi and Becker (1974), found significant increments in memory for pictures or words in trials conducted during hypnosis; but these increments were matched, if not exceeded, by gains made by control subjects tested without hypnosis. Two studies have observed small gains in memory attributable to hypnosis (Shields & Knox, 1986; Stager & Lundy, 1985), but neither finding has been replicated (Lytle & Lundy, 1988). Moreover, Register and Kihlstrom (1987, 1988) found that levels of hypermnesia were no higher in hypnotizable subjects than in those who were insusceptible to hypnosis -- thus strengthening the inference that whatever improvements occurred were the result of nonhypnotic processes.

Most important, it seems clear that the increase in valid memory may be accompanied by an equivalent or greater increment in confabulations and false recollections. In the experiment by Stalnaker and Riddle (1932), for example, hypnosis produced a substantial increase in confabulation over the normal waking state, so that overall memory accuracy was very poor. Apparently the hypnotized subjects were more willing to attempt recall, and to accept their productions -- however erroneous they proved to be -- as reasonable facsimiles of the originals. These conclusions are supported by more recent experiments by Dywan (1988; Dywan & Bowers, 1983) and Nogrady et al. (1985), who found that hypnotic suggestions for hypermnesia produced more false recollections in hypnotizable than in insusceptible subjects. Whitehouse et al. (1991) found that hypnosis increased the confidence associated with memory reports that had been characterized as mere guesses on a prehypnotic test. Dywan and Bowers (1983) have suggested that hypnosis impairs the process of reality monitoring, so that hypnotized subjects are more likely to confuse imagination with perception (Johnson & Raye, 1981).
Proponents of forensic hypnosis often discount these sorts of findings on the ground that they are obtained in sterile laboratory investigations that bear little resemblance to the real-world circumstances in which hypnosis is actually used -- an argument that closely resembles that made by some researchers allied with the "ecological memory" movement (for critiques, see Banaji & Crowder, 1989, 1991; for more positive views, see Aanstoos, 1991; Bahrick, 1991; Conway, 1991; Ceci & Bronfenbrenner, 1991; Gruneberg, Morris, & Sykes, 1991; Loftus, 1991; Morton, 1991; Neisser, 1978, 1991; for attempts at reconciliation, see Bruce, 1991; Klatzky, 1991; Roediger, 1991; Tulving, 1991). However, the evidence supporting this assertion is rather weak. Reiser (1976), a police department psychologist who has trained many investigators in hypnosis, has claimed that the vast majority of investigators who tried hypnosis found it to be helpful; but such testimonials cannot substitute for actual evidence. In fact, a remarkable doctoral dissertation by Sloane (1981), conducted under Reiser's supervision, randomly assigned witnesses and victims in actual cases being investigated by the Los Angeles Police Department to hypnotic and nonhypnotic conditions, and found no advantage for hypnosis. A study by Timm (1981), in which police officers themselves were witnesses to a mock crime (after having been relieved of their firearms through a ruse!), gave similar results. A later study by Geiselman, Fisher, MacKinnon, and Holland (1985), employing very lifelike police training films as stimuli and actual police officers as investigators, did show some advantage for hypnosis over an untreated control condition; however, the benefits of hypnosis were matched by unhypnotized subjects led through a "cognitive interview" capitalizing on various cognitive strategies (unfortunately, there was no comparison condition in which the cognitive interview was administered during hypnosis). Thus, the available evidence does not indicate that hypnosis has any privileged status as a technique for enhancing memory. To paraphrase Nogrady et al. (1985), trying hypnosis seems to be no better than merely trying again.

In fact, trying hypnosis may make things worse, because hypnosis -- almost by definition -- entails enhanced responsiveness to suggestion. Therefore, if memory can be tainted by leading questions and other suggestive influences, as Loftus' work suggests, these elements may be even more likely to be incorporated into memories that have been refreshed by hypnosis. Putnam (1979) was the first to demonstrate this effect. He exposed his subjects to a variant of Loftus' (1975) paradigm, in which subjects viewed a videotape of a traffic accident followed by an interrogation that included leading questions. Those subjects who were interviewed while they were hypnotized were more likely to incorporate the misleading postevent information into their memory reports. Similar results were obtained by Zelig and Beidelman (1981) and by Sanders and Simmons (1983). Register and Kihlstrom (1987), employing a variant of Loftus' procedure introduced by Gudjonsson (1984), failed to find that hypnosis increased interrogative suggestibility; but errors introduced during the hypnotic test did carry over to subsequent nonhypnotic tests.
An extensive and complex series of studies by Sheehan and his colleagues (e.g., Sheehan, 1987, 1988a, 1988b; Sheehan & Grigg, 1985; Sheehan, Grigg, & McCann, 1984; Sheehan & Tilden, 1983, 1984, 1986) found that subjects tested during hypnosis were more confident in their memory reports than were those tested in the normal waking state -- regardless of the accuracy of those reports. The situation is even worse, apparently, when the suggestions are more explicit, as in the case of hypnotically suggested paramnesias (Kihlstrom & Hoyt, 1990; Levitt & Chapman, 1979; Reyher, 1967). Laurence and Perry (1983) suggested (falsely, of course) to a group of hypnotized subjects that on a particular night they had awakened to a noise. After hypnosis was terminated, all the subjects remembered the suggested event as if it had actually occurred; almost half of the subjects maintained this belief even when told that the event had been suggested to them by the hypnotist. Similar results were obtained by a number of investigators (Labelle, Laurence, Nadon, & Perry, 1990; Lynn, Milano, & Weekes, 1991; McCann & Sheehan, 1988; McConkey & Kinoshita, 1985-1986; McConkey, Labelle, Bibb, & Bryant, 1990; Sheehan, Statham, & Jamieson, 1991; Spanos, Gwynn, Comer, Baltruweit, & deGroh, 1989; Spanos & McLean, 1985-1986). Unfortunately, the precise conditions under which the pseudomemory effect can be obtained remain obscure. Equally important, it remains unclear whether the pseudomemories reflect actual changes in stored memory traces or biases in memory reporting -- an issue that has also been raised in connection with the postevent misinformation effect observed outside hypnosis (e.g., McCloskey & Zaragoza, 1985; Loftus, Schooler, & Wagenaar, 1985; Metcalfe, 1990; Tversky & Tuchin, 1989).

Direct suggestions for hypermnesia are often accompanied by suggestions for age-regression: that the subject is reverting to an earlier period in his or her own life, reliving an event, and acting in a manner characteristic of that age (for reviews, see Nash, 1987; Perry, Laurence, D'Eon, & Tallant, 1988; Reiff & Scheerer, 1959; O'Connell, Shor, & Orne, 1970; Yates, 1961). Most research on this phenomenon has addressed the question of whether the age-regressed adult reverts to modes of psychological functioning that are characteristic of the target age, typically in childhood. Upon closer examination, however, the naive concept of hypnotic age regression proves to be a complex blend of three elements: ablation, the functional loss of the person's knowledge, abilities, and memories acquired after the suggested age; reinstatement, a return to archaic, or at least chronologically earlier, modes of cognitive and emotional functioning (i.e., procedural and semantic knowledge); and revivification, improved access to memories (i.e., episodic knowledge) from the suggested age (and before).

There is no evidence that the subject age-regressed to childhood loses access to his or her adult knowledge and abilities (O'Connell et al., 1970; Orne, 1951; Perry & Walsh, 1978). Thus, adults regressed to childhood, and asked to take dictation from the hypnotist, may write, in a childlike hand but without spelling errors, the sentence "I am conducting an experiment which will assess my psychological capacities" -- a behavior that is clearly beyond the capacity of most children; alternatively, an adult who arrived in America as a monolingual child may reply in his native tongue to questions posed to him in English (Orne, 1951).
Such conduct is one of the classic examples of what Orne (1959) called trance logic -- the hypnotized subject's tendency to freely mix illusion and reality while responding to hypnotic suggestions. Although the interpretation of trance logic is controversial (e.g., Spanos, 1986; McConkey, Bryant, Bibb, & Kihlstrom, 1991), contradictions between childlike and adult behavior have been observed too often to sustain the notion that age-regression involves the forgetting of adult procedural and declarative knowledge. It is possible, as Spanos (1986) has suggested, that trance logic reflects incomplete responding on the part of hypnotized subjects. On the other hand, it is also possible that the contradictions observed in age regression reflect the impact of adult knowledge that is denied to conscious awareness, but nevertheless continues to influence the behavior and experience of the age-regressed subject -- much in the manner of an implicit memory (Schacter, 1987).

In principle, however, the prospects for reinstatement are more promising: the hallucinated environment created by age regression may provide a context that facilitates the retrieval of procedural knowledge characteristic of childhood. Nevertheless, the evidence for reinstatement is ambiguous. As (1962) found a college student who had spoken a Finnish-Swedish dialect until age eight, but who no longer remembered the language; his knowledge of the language improved somewhat under hypnotic age-regression. More dramatic findings were obtained by Fromm (1970) with a Nisei student who denied any knowledge of Japanese; when age-regressed, she broke into fluent if childish Japanese. In contrast, Kihlstrom (1978a) reported an unsuccessful attempt to revive Mandarin in a college undergraduate who had not spoken the language since kindergarten in Taiwan. What accounts for these different outcomes is not clear: Fromm's subject was highly hypnotizable, and had been imprisoned in an American concentration camp during World War II (suggesting that her knowledge of Japanese had been covered by repression); Kihlstrom's subject was completely refractory to hypnosis.

In terms of experimental studies, Nash (1987) has found no convincing evidence favoring the reinstatement of childlike modes of mental functioning, whether these are defined in terms of physiological responses (e.g., the Babinski reflex, in which the toes fan upward in response to plantar stimulation), loss of mental age on IQ tests (e.g., the Stanford-Binet), reversion to preconceptual (Werner) or preformal (Piaget) modes of thought (e.g., failing to predict the order in which three spheres will emerge from a hollow tube after it has been rotated through half or whole turns; defining right or wrong in terms of what is rewarded or punished), or perceptual processes (e.g., changes in the magnitude of the Ponzo and Poggendorf illusions; the return of eidetic imagery ostensibly prominent in children). Perhaps the most compelling evidence for reinstatement is provided by studies by Nash and his colleagues (Nash, Johnson, & Tipton, 1979; Nash, Lynn, Stanley, Frauman, & Rhue, 1985), in which subjects regressed to age 3, and imagining a frightening situation, behaved in an age-appropriate manner: searching for teddy bears and other "transitional objects". Interestingly, insusceptible subjects simulating hypnosis do not behave in this manner.
However, these results are vitiated to some extent by interviews of the subjects' mothers, which revealed that the transitional objects chosen by the age-regressed subjects were not typically those actually possessed by those subjects as children (Nash, Drake, Wiley, Khalsa, & Lynn, 1986). Thus, as Nash (1987) noted, age-regression may reinstate childlike modes of emotional functioning, but it does not necessarily revive specific childhood memories.

The revivification component of age regression is conceptually similar to the recovery of memory in hypermnesia; and, as with reinstatement, it is possible, at least in principle, that the hallucination of an age-appropriate environment might facilitate the retrieval of childhood memories. Everyone who has administered the Stanford Hypnotic Susceptibility Scale, Form C, which includes a suggestion for age regression, has observed subjects who appear to relive episodes from childhood that have been forgotten, or not remembered for a long time. Supporting these observations, Young (1926) was able to elicit a substantial number of early recollections, whose accuracy was independently verified, in two hypnotizable subjects. And more recently, Hofling, Heyl, and Wright (1971) compared subjects' recall of personal experiences to actual diary entries made at the time, and found superior memory during hypnosis compared to a nonhypnotic session. Unfortunately, neither of these experiments examined false recollections that may have been produced by the subjects; and the obvious difficulty of obtaining independent verification effectively prevents the many more studies of this sort that would be needed to understand better the conditions under which these improvements in memory might be obtained. In the absence of independent confirmation, it should be understood that the apparent enhancement of memory occurring as a result of hypnosis may be illusory.

But even independent confirmation does not guarantee that hypnosis itself is responsible for the appearance of revivification: the enhancement of memory may come from general world-knowledge or cues provided by the experimenter, rather than improved access to trace information. The salient cautionary tale is provided by True (1949), who reported that age-regressed subjects were able to identify at better than chance levels the day of the week on which their birthdays, and Christmas, fell in their fourth, seventh, and tenth years. Yates (1961) and Barber (1969) noted that the correct day can be calculated by the use of a fairly simple algorithm. However, it remains to be seen whether most, or even many, subjects know the formula in question; moreover, the procedure requires that subjects know the day of the week on which these holidays fall in the current year -- information that is probably not known by most subjects. More to the point, it is now known that the experimenter in question knew the answers to the questions as they were asked; when the experimenter is kept blind to the correct answer, response levels fall to chance (O'Connell et al., 1970).

Notes on Forensic Hypnosis

Despite the poverty of evidence supporting the idea that memory can be enhanced by hypnotic suggestions, hypnosis has come to be used by police officers, attorneys, and even judges in an effort to refresh or bolster the memories of witnesses, victims, and suspects in criminal investigations.
Their belief in the utility of forensic hypnosis is bolstered by occasional cases in which the use of hypnosis was associated with the recovery of useful clues (e.g., Dorcus, 1960; Raginsky, 1969). One such case was the kidnapping, in Chowchilla, California, of a schoolbus full of children: when hypnotized, the driver recalled a portion of a license plate that was eventually traced to a vehicle used by the perpetrators (Kroger & Douce, 1979). Such successes, when combined with reports of the hypnotic recovery of traumatic memories during psychotherapy (e.g., Breuer & Freud, 1893-1895), have led to the development of a virtual industry of forensic hypnosis. Of course, Freud later concluded that the reports of his patients were fantasies, not veridical memories. And although the Chowchilla kidnapping is often counted as a success, it is often forgotten that the driver also recalled a license tag that had no connection to the crime; it was other evidence that led to the successful solution of the case. Then, too, Dorcus (1960) had reported as many failures as successes in his own experience: reviewing his cases, the operative factor seems to have been the extent to which the memories were encoded in the first place. Moreover, a number of instances have been recorded where the memories produced by hypnotized witnesses and victims have proved highly implausible or even false (for a sampling, see Orne, 1979).

The inherent unreliability of hypnotically elicited memories -- the difficulty of distinguishing between illusion and reality, the susceptibility of hypnotically refreshed memory to distortion by inadvertent suggestion, and the tendency of subjects to enhance the credibility of memories produced through hypnosis -- creates problems in the courtroom. These problems are compounded by the possibility that investigators, and jurors, will give more credence than they deserve to memories refreshed by hypnosis (Labelle, Lamarche, & Laurence, 1990; McConkey, 1986; McConkey & Jupp, 1985, 1985-1986; Wilson, Greene, & Loftus, 1986). Thus, under the worst-case scenario, a hypnotized witness may produce an entirely false memory under hypnosis, testify to it convincingly, and be believed; even if the witness' memory does not change under hypnotic interrogation, the fact that a particular item of information, true or false, is remembered both in and out of hypnosis may lead the witness, and jurors, to give more credibility to the testimony than would be warranted.

For these reasons, and in response to a number of cases that were prosecuted on the basis of evidence that later proved to be incorrect, both the medical establishment (American Medical Association, 1985) and the courts (Diamond, 1980; Kuplicki, 1988; Laurence & Perry, 1988; Orne, 1979; Orne, Dinges, & Orne, 1990; Orne, Soskis, Dinges, & Orne, 1984; Orne, Whitehouse, Dinges, & Orne, 1988; Udolf, 1983, 1990) have begun to establish guidelines for the introduction and evaluation of hypnotically elicited memories. By this time, the issue of hypnosis has been considered by courts in more than half of the United States (and by courts in Canada, Australia, and other countries as well). In a recent review, Scheflin and Shapiro (1989) cite more than 400 appellate cases from more than 40 states in which hypnosis has been involved in one way or another. A full review of the legal status of forensic hypnosis is beyond the scope of this paper.
In general, however, courts in the United States have taken one of three positions: (1) total exclusion of testimony based on hypnotically refreshed memory (e.g., State v. Mack, 1980; People v. Shirley, 1982); (2) total admission, with the weight of the evidence to be determined by the jury (e.g., Harding v. State, 1968); and (3) admission of hypnotically refreshed memory, provided that certain procedural safeguards (such as those proposed by Orne, 1979; Orne et al., 1984; see also Ault, 1979) have been followed during the hypnotic session (e.g., State v. Hurd, 1981; State v. Armstrong, 1983). Perhaps the dominant position in the state courts is a per se exclusion of all hypnotically elicited evidence, and some courts have gone so far as to exclude from testimony even the prehypnotic memories of a witness who has been subsequently hypnotized (Kuplicki, 1988), on the grounds that hypnosis may distort prehypnotic as well as hypnotic memories -- for example, by inflating the subject's confidence in what he or she had already remembered. The conflicting laws operative in different jurisdictions virtually guarantee that the issue of forensic hypnosis will eventually come before the Supreme Court. In fact, while a number of cases involving hypnotized witnesses and victims have been denied certiorari, a case involving a hypnotized defendant was recently decided: Rock v. Arkansas (1987; for reviews of this case, see Kuplicki, 1988; Orne et al., 1990; Perry & Laurence, 1990; Udolf, 1990). By a hairline (5-4) majority, the Court (whose majority opinion was authored by Justice Blackmun) decided that a defendant's hypnotically refreshed memories are admissible in court, without any restrictions or constraints. However, a reading of the opinion makes it clear that the Court's decision rested more on a concern for the defendant's Fifth Amendment right to testify in his or her own behalf than it did on any acceptance of hypnotic technique. Under the United States Constitution, defendants are given every opportunity to defend themselves, and this includes resort to hypnosis. In fact, the Court's opinion (especially the minority view, authored by Chief Justice Rehnquist) clearly recognizes the problems posed by the use of hypnosis in the legal system. There are a number of different legal issues here (for early treatments, see Diamond, 1980; Warner, 1979; Worthington, 1979; for a recent overview, see Kuplicki, 1988). First of all, there is the question of whether hypnosis, as a scientific technique for the enhancement of memory, meets the standards for the admission of scientific evidence. Under the "Frye Rule" (Frye v. United States, 1923), which currently governs the admissibility of scientific evidence, "the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field to which it belongs". While hypnosis is clearly established as a potentially efficacious treatment modality in medicine and psychotherapy (American Medical Association, 1958) and a legitimate topic for scientific research (as evidenced by the establishment of Division 30, Psychological Hypnosis, of the American Psychological Association), there is no consensus about the reliability of hypnotically enhanced memory. In fact, if there is a consensus, it is represented by the recent position paper of a committee of the American Medical Association (American Medical Association, 1985): hypnotically refreshed memory is inherently unreliable.
There are also constitutional issues at stake, particularly surrounding the Sixth Amendment, which gives defendants the right to confront witnesses against them. After all, hypnosis has the potential to permanently distort a witness' memory -- thus leading, in effect, to the destruction of potentially exculpatory evidence. Hypnosis can increase the likelihood of both unintended confabulations and the influence of leading questions and other misinformation. The confusion between illusion and reality that is part and parcel of the hypnotic experience may be fascinating in the laboratory, and perhaps useful in the clinic; but it is potentially fatal in the courtroom. The myths surrounding the wonders of hypnosis may lead witnesses to inappropriately inflate their confidence in what they remember; or they may lead jurors to inappropriately accept their memories as accurate. In any event, the result is a threat to the validity of the evidence presented to factfinders. Because defendants have rights that the state does not, the decision in Rock v. Arkansas does not imply that testimony by hypnotized witnesses and victims will be allowed without restraint. The result is likely to be a bifurcated rule (Kuplicki, 1988) in which hypnosis is permitted to defendants with few restrictions, but severely constrained when used with witnesses and victims. For the present, however, those who use hypnosis forensically should be aware of the dangers posed by its use, and should conform their procedures to the sorts of guidelines adopted in many jurisdictions. The purpose of these procedural safeguards is twofold: (1) to minimize the possibility that the witness' independent memory will be contaminated by hypnosis; and (2) to maximize the likelihood that such contamination will be detected if it has occurred. One set of guidelines, based on those proposed by Orne (1979) and adopted in the United States by the Federal Bureau of Investigation (Ault, 1979), follows. It should be understood that it is the responsibility of the party employing hypnosis to affirmatively establish that these guidelines have been followed.

(1) There should be a prima facie case that hypnosis is appropriate. Memories that have not been properly encoded are not likely to be retrieved, even by heroic means. Thus, hypnosis will be of no use in cases where the witness did not have a good view of the critical events, or was intoxicated or sustained a head injury at the time of the crime.

(2) For the same reason, there should be an objective assessment of the subject's hypnotizability, employing one or another of the standardized scales developed for this purpose. Hypnosis will be of no use with subjects who are not at least somewhat hypnotizable.

(3) The hypnotist should be an experienced professional, knowledgeable about basic principles of psychological functioning and the scientific method. Forensic hypnosis raises cognitive issues, such as the nature of memory, and clinical issues, such as the subject's emotional reactions to any new information yielded by the procedure, and the hypnotist must be capable of evaluating and dealing with the situation on both counts.

(4) The hypnotist should be a consultant acting independently of any investigative agency, either prosecution/plaintiff or defense/respondent, so as to emphasize the goal of the procedure: collecting information rather than supporting a particular viewpoint.
(5) The hypnotist should be informed of only the barest details of the case at hand, so as to minimize the possibility that his or her preconceptions will influence the course of the hypnotic interview. In any event, a written record of all information transmitted to the hypnotist should be preserved.

(6) A thorough interview should be conducted by the hypnotist, in advance of the hypnotic session, in order to establish a baseline against which any subsequent changes in memory can be evaluated.

(7) Throughout the prehypnotic and hypnotic interviews, the hypnotist and the subject should be isolated from other people, especially those who have independent knowledge of the facts of the case, suspects, etc., so as to preclude the possibility of inadvertent cuing and contamination of the subject's memory.

(8) A complete recording of all interactions between hypnotist and subject should be kept, to permit evaluation of the degree to which untoward influence may have occurred.

These standards are obviously difficult (though not impossible) to meet. For this reason, and because of the continuing constitutional controversy attached to forensic hypnosis, investigators are advised to confine their use of hypnosis to the gathering of investigative leads. Under these circumstances, hypnotically refreshed memories are not introduced into evidence, and the case is based solely on independently verifiable evidence.

PROSPECTS FOR THE STRATEGIC CONTROL OF MEMORY

The conclusion that emerges from this review is that the strategic self-regulation of memory is possible. The possibility of successful self-regulation flows naturally from the point of view that memory is a skilled activity as well as a mental storehouse, and from the reasonable assumption that people can acquire and perfect cognitive as well as motor skills. Certainly, the sorts of principles that control memory function can serve as guides for successful self-regulation. We can remember things better by paying active attention to them at the time they occur, deliberately engaging in elaborative and organizational activity that will establish links between one item of information and another; and we can facilitate forgetting by neglecting to do so. Forgetting will increase with the passage of time, if we allow it to happen; but continued rumination about the to-be-forgotten material may prevent this natural process from occurring. Once-forgotten items can be recovered, too, if somehow we are able to find the right cues to gain access to them; and some spontaneous recovery is to be expected as well, especially if the information was well encoded in the first place. Remembering an event can be facilitated by returning to the environment, or mood state, present at the time the event occurred. Remembering is improved by taking generic world-knowledge into account, so that the person need not rely exclusively on trace information. And, perhaps, memories can be recoded, and thus altered, in the light of information acquired after the event in question.
In the absence of conscious recollection, sheer guessing -- influenced by implicit memory, which is much less constrained by the conditions of encoding and retrieval -- may lead the person to better-than-chance levels of memory performance. At the same time, there are clear constraints on what can be achieved through strategic remembering and forgetting. Aside from hope and luck, little can be done to improve the situation where encodings were poor and the retrieval environment is impoverished. Elaborative and organizational activity both require active cognitive effort, and thus are affected by limitations on attentional resources. Retrieval cues help memory, but they must be the right sorts of cues, compatible with the way in which the information was processed at the time of its original encoding. World-knowledge, and postevent information, may distort a person's memory for what actually occurred. And attempts at deliberate forgetting, or the retrospective alteration of memories, may change accessibility but not availability. Thus the forgotten knowledge, or the original memory, may nonetheless continue to influence the person's experience, thought, and action in the form of implicit memory. Hypnosis, for all its apparent wonders, does not eliminate these constraints. It can be a powerful technique for altering conscious experience, but it does so by following, rather than transcending, the laws that govern ordinary mental life. Thus, hypnosis presents some interesting possibilities for the self-control of memory, but it confronts the same sorts of limitations as well. For example, hypnotic suggestions for amnesia may be very effective in reducing the person's conscious awareness of some event, but -- like nonhypnotic directed forgetting -- the amnesia can be breached, to some extent, by deliberate efforts at recall, and by cued recall and recognition procedures. More important, the forgotten memories may still be expressed implicitly, outside of conscious awareness. So far as we can tell, hypnosis does not, in and of itself, facilitate learning; and it does not appear to add anything to the hypermnesia that occurs in the normal waking state. Although, in principle, a hypnotically hallucinated environment might supply new cues to facilitate remembering, it must be remembered that the cues in question are hallucinatory, not veridical, and thus may produce misleading results -- the more so because hypnotized subjects are highly responsive to suggestions. Hypnosis, in its classic manifestations, has a profoundly delusional quality: thus, the subjective conviction that accompanies hypnotic remembering should not be confused with accuracy. For this reason, the clinical and forensic use of hypnosis to refresh recollection is fraught with dangers, and is to be used, if at all, with considerable circumspection. But the mere existence of limitations, and the sad fact that hypnosis cannot make us better than we are, should not deter us from acquiring, and deploying, our skills of remembering and forgetting. There is much that we can do in both respects.

Author Notes

The point of view represented here is based on research supported by Grant MH-35856 from the National Institute of Mental Health. We thank Jill Booker, Jeffrey Bowers, Jennifer Dorfman, Elizabeth Glisky, Martha Glisky, Lori Marchese, Susan McGovern, Sheila Mulvaney, Robin Pennington, Michael Polster, Barbara Routhieux, Victor Shames, and Michael Valdessari for their comments.

References

Ammons, H., & Irion, A.L. (1954). A note on the Ballard reminiscence phenomenon. Journal of Experimental Psychology, 48, 184-186.
Aanstoos, C.M. (1991). Experimental psychology and the challenge of real life. American Psychologist, 46, 77-78.
American Medical Association (1958). Medical use of hypnosis. Journal of the American Medical Association, 168, 186-189.
American Medical Association (1985). Scientific status of refreshing recollection by the use of hypnosis. Journal of the American Medical Association, 253, 1918-1923.
Anderson, J.R. (1976). Language, memory, and thought. Hillsdale, N.J.: Erlbaum.
Anderson, J.R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85, 249-277.
Anderson, J.R. (1983). The architecture of cognition. Cambridge: Harvard University Press.
Anderson, J.R. (1990). Cognitive psychology and its implications (3rd ed.). San Francisco: Freeman.
Anderson, J.R., & Reder, L.M. (1979). An elaborative processing explanation of depth of processing. In L.S. Cermak & F.I.M. Craik (Eds.), Levels of processing in human memory (pp. 385-403). Hillsdale, N.J.: Erlbaum.
As, A. (1962). The recovery of forgotten language knowledge through hypnotic age regression: A case report. American Journal of Clinical Hypnosis, 5, 24-29.
Ault, R.L. (1979). FBI guidelines for use of hypnosis. International Journal of Clinical & Experimental Hypnosis, 27, 449-451.
Baddeley, A.D. (1976). The psychology of memory. New York: Basic Books.
Baddeley, A.D. (1990). Human memory: Theory and practice. Boston: Allyn & Bacon.
Bahrick, H.P. (1984). Semantic memory content in permastore: 50 years of memory for Spanish learned in school. Journal of Experimental Psychology: General, 113, 1-29.
Bahrick, H.P. (1991). A speedy recovery from bankruptcy for ecological memory research. American Psychologist, 46, 76-77.
Baker, R.A., Haynes, B., & Patrick, B.S. (1983). Hypnosis, memory, and incidental memory. American Journal of Clinical Hypnosis, 25, 253-262.
Ballard, P.B. (1913). Oblivescence and reminiscence. British Journal of Psychology (Monograph Supplements), 1(2).
Banaji, M.R., & Crowder, R.G. (1989). The bankruptcy of everyday memory. American Psychologist, 44, 1185-1193.
Banaji, M.R., & Crowder, R.G. (1991). Some everyday thoughts on ecologically valid methods. American Psychologist, 46, 78-79.
Banyai, E.I., & Hilgard, E.R. (1976). A comparison of active-alert hypnotic induction with traditional relaxation induction. Journal of Abnormal Psychology, 85, 218-224.
Barber, T.X. (1969). Hypnosis: A scientific approach. New York: Van Nostrand Reinhold.
Barber, T.X., & Calverley, D.S. (1964). Toward a theory of "hypnotic" behavior: An experimental study of "hypnotic time distortion". Archives of General Psychiatry, 10, 209-216.
Barber, T.X., & Calverley, D.S. (1966). Toward a theory of "hypnotic" behavior: Experimental analyses of suggested amnesia. Journal of Abnormal Psychology, 71, 95-107.
Bartlett, F.C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
Bellezza, F.S. (1981). Mnemonic devices: Classification, characteristics and criteria. Review of Educational Research, 51, 247-275.
Belmore, S.M. (1981). Imagery and semantic elaboration in hypermnesia for words. Journal of Experimental Psychology: Human Learning & Memory, 7, 191-203.
Bertrand, L.D., Spanos, N.P., & Parkinson, B. (1983). Test of the dissipation hypothesis of posthypnotic amnesia. Psychological Reports, 52, 667-671.
Bertrand, L.D., Spanos, N.P., & Radtke, H.L. (1990). Contextual effects on priming during hypnotic amnesia. Journal of Research in Personality, 24, 271-290.
Bjork, R.A. (1970). Positive forgetting: The noninterference of items intentionally forgotten. Journal of Verbal Learning & Verbal Behavior, 9, 255-268.
Bjork, R.A. (1972). Theoretical implications of directed forgetting. In A.W. Melton & E. Martin (Eds.), Coding processes in human memory (pp. 217-235). Washington, D.C.: Winston.
Bjork, R.A. (1978). The updating of human memory. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 12, pp. 235-259). New York: Academic.
Bjork, R.A. (1989). Retrieval inhibition as an adaptive mechanism in human memory. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 195-210). Hillsdale, N.J.: Erlbaum.
Bjork, R.A., & Bjork, E.L. (1991, August). Dissociations in the impact of to-be-forgotten information on memory. Paper presented at the annual meeting of the American Psychological Association, San Francisco.
Bjork, R.A., & Bjork, E.L. (in press). A new theory of disuse and an old theory of stimulus fluctuation. In A.F. Healy, S.M. Kosslyn, & R.M. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. xxx-xxx). Hillsdale, N.J.: Erlbaum.
Block, R.A. (1971). Effects of instructions to forget in short-term memory. Journal of Experimental Psychology, 89, 1-9.
Booker, J. (1991). Interference effects in implicit and explicit memory. Unpublished doctoral dissertation, University of Arizona.
Bower, G.H. (1970a). Analysis of a mnemonic device. American Scientist, 58, 496-510.
Bower, G.H. (1970b). Organizational factors in memory. Cognitive Psychology, 1, 18-46.
Bower, G.H., & Reitman, J.S. (1972). Mnemonic elaboration in multilist learning. Journal of Verbal Learning & Verbal Behavior, 11, 478-485.
Bransford, J.D., & Franks, J.J. (1971). The abstraction of linguistic ideas. Cognitive Psychology, 2, 331-350.
Breuer, J., & Freud, S. (1895/1955). Studies on hysteria. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 2). London: Hogarth Press.
Brown, E., & Deffenbacher, K. (1975). Forgotten mnemonists. Journal of the History of the Behavioral Sciences, 11, 342-349.
Brown, W. (1923). To what extent is memory measured by a single recall? Journal of Experimental Psychology, 6, 377-385.
Bruce, D. (1991). Mechanistic and functional explanations of memory. American Psychologist, 46, 46-48.
Bugelski, B.R., Kidd, E., & Segmen, J. (1968). Image as a mediator in one-trial paired-associate learning. Journal of Experimental Psychology, 76, 69-73.
Buxton, C.E. (1943). The status of research in reminiscence. Psychological Bulletin, 40, 313-340.
Ceci, S.J., & Bronfenbrenner, U. (1991). On the demise of everyday memory: "The rumors of my death are much exaggerated" (Mark Twain). American Psychologist, 46, 27-31.
Cermak, L.S. (1976). Improving your memory. New York: McGraw-Hill.
Chase, W.G., & Ericsson, K.A. (1982). Skill and working memory. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 16, pp. 1-58). New York: Academic Press.
Coe, W.C. (1978). The credibility of posthypnotic amnesia: A contextualist's view. International Journal of Clinical & Experimental Hypnosis, 26, 218-245.
Coe, W.C., Basden, B., Basden, D., & Graham, C. (1976). Posthypnotic amnesia: Suggestions of an active process in dissociative phenomena. Journal of Abnormal Psychology, 85, 455-458.
Coe, W.C., Basden, B.H., Basden, D., Fikes, T., Gargano, G.J., & Webb, M. (1989). Directed forgetting and posthypnotic amnesia: Information-processing and social contexts. Journal of Personality & Social Psychology, 56, 189-198.
Coe, W.C., Baugher, R.J., Krimm, W.R., & Smith, J.A. (1976). A further examination of selective recall following hypnosis. International Journal of Clinical & Experimental Hypnosis, 24, 13-21.
Coe, W.C., & Sluis, A.S.E. (1989). Increasing contextual pressures to breach posthypnotic amnesia. Journal of Personality & Social Psychology, 57, 885-894.
Coe, W.C., Taul, J.H., Basden, D., & Basden, B. (1973). Investigation of the dissociation hypothesis and disorganized retrieval in posthypnotic amnesia with retroactive inhibition in free-recall learning. Proceedings of the 81st annual convention of the American Psychological Association, 8, 1081-1082.
Coe, W.C., & Yashinski, E. (1985). Volitional experiences associated with breaching amnesia. Journal of Personality & Social Psychology, 48, 716-722.
Cole, M., & Gay, J. (1972). Culture and memory. American Anthropologist, 74, 1066-1084.
Cole, M., Gay, J., Glick, J., & Sharp, D. (1971). The cultural context of learning and thinking. New York: Basic Books.
Collyer, S.C., Jonides, J., & Bevan, W. (1972). Images as memory aids: Is bizarreness helpful? American Journal of Psychology, 85, 31-38.
Conway, M.A. (1991). In defense of everyday memory. American Psychologist, 46, 19-26.
Cooper, L.F., & Erickson, M.H. (1950). Time distortion in hypnosis II. Bulletin of the Georgetown University Medical Center, 4, 50-68.
Cooper, L.F., & Rodgin, D.W. (1952). Time distortion in hypnosis and non-motor learning. Science, 115, 500-502.
Cooper, L.F., & Tuthill, C.E. (1952). Time distortion in hypnosis and motor learning. Journal of Psychology, 34, 67-76.
Cooper, L.F., & Erickson, M.H. (1954). Time distortion in hypnosis. Baltimore: Williams & Wilkins.
Cooper, L.M. (1979). Hypnotic amnesia. In E. Fromm & R.E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 305-349). New York: Aldine.
Cooper, L.M., & London, P. (1973). Reactivation of memory by hypnosis and suggestion. International Journal of Clinical & Experimental Hypnosis, 21, 312-323.
Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning & Verbal Behavior, 11, 671-684.
Craik, F.I.M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268-294.
Craik, F.I.M., & Watkins, M.J. (1973). The role of rehearsal in short-term memory. Journal of Verbal Learning & Verbal Behavior, 12, 598-607.
Crowder, R.G. (1976). Principles of learning and memory. Hillsdale, N.J.: Erlbaum.
Datas. (1904). A simple system of memory training. London: Gale & Polden.
Davidson, T.M., & Bowers, K.S. (1991). Selective posthypnotic amnesia: Is it a successful attempt to forget or an unsuccessful attempt to remember? Journal of Abnormal Psychology, 100, 133-143.
DePiano, F.A., & Salzberg, H.C. (1981). Hypnosis as an aid to recall of meaningful information presented under three types of arousal. International Journal of Clinical & Experimental Hypnosis, 29, 383-400.
Dhanens, T.P., & Lundy, R.M. (1975). Hypnotic and waking suggestions and recall. International Journal of Clinical & Experimental Hypnosis, 23, 68-79.
Diamond, B. (1980). Inherent problems in the use of pretrial hypnosis on a prospective witness. California Law Review, 68, 313-349.
Dillon, R.F., & Spanos, N.P. (1983). Proactive interference and the functional ablation hypothesis: More disconfirmatory data. International Journal of Clinical & Experimental Hypnosis, 13, 47-56.
Dorcus, R.M. (1960). Recall under hypnosis of amnestic events. International Journal of Clinical & Experimental Hypnosis, 8, 57-60.
Dywan, J. (1987). The imagery factor in hypnotic hypermnesia. International Journal of Clinical & Experimental Hypnosis, 36, 312-326.
Dywan, J., & Bowers, K.S. (1983). The use of hypnosis to enhance recall. Science, 222, 184-185.
Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology (H.A. Ruger & C.E. Bussenius, Trans.). New York: Teachers College, Columbia University. (Translation published 1913)
Eich, E. (1984). Memory for unattended events: Remembering with and without awareness. Memory & Cognition, 12, 105-111.
Eich, E. (1989). Theoretical issues in state dependent memory. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 331-354). Hillsdale, N.J.: Erlbaum.
Eich, E., & Metcalfe, J. (1989). Mood dependent memory for internal versus external events. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 443-455.
Eich, J.E. (1980). The cue-dependent nature of state-dependent retrieval. Memory & Cognition, 8, 157-173.
Ellis, H.C., & Hunt, R.R. (1989). Fundamentals of human memory and cognition. Dubuque, Iowa: Brown.
Epstein, W. (1969). Poststimulus output specification and differential retrieval from short-term memory. Journal of Experimental Psychology, 82, 168-174.
Epstein, W. (1970). Facilitation of retrieval resulting from post-input exclusion of part of the input. Journal of Experimental Psychology, 86, 190-195.
Epstein, W. (1972). Mechanisms of directed forgetting. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 6, pp. 147-191). New York: Academic.
Erdelyi, M.H. (1982). A note on the level of recall, level of processing, and imagery hypothesis of hypermnesia. Journal of Verbal Learning & Verbal Behavior, 21, 656-661.
Erdelyi, M.H. (1984). The recovery of unconscious (inaccessible) memories: Laboratory studies of hypermnesia. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 18, pp. 95-127). New York: Academic.
Erdelyi, M.H. (1988). Hypermnesia: The effect of hypnosis, fantasy, and concentration. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 64-94). New York: Guilford.
Erdelyi, M.H. (1990). Repression, reconstruction, and defense: History and integration of the psychoanalytic and experimental frameworks. In J. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health. Chicago: University of Chicago Press.
Erdelyi, M.H., & Becker, J. (1974). Hypermnesia for pictures: Incremental memory for pictures but not words in multiple recall trials. Cognitive Psychology, 6, 159-171.
Erdelyi, M.H., Buschke, H., & Finkelstein, S. (1977). Hypermnesia for Socratic stimuli: The growth of recall for an internally generated memory list abstracted from a series of riddles. Memory & Cognition, 5, 283-286.
Erdelyi, M.H., Finkelstein, S., Herrell, N., Miller, B., & Thomas, J. (1976). Coding modality vs. input modality in hypermnesia: Is a rose a rose a rose? Cognition, 4, 311-319.
Erdelyi, M.H., & Goldberg, B. (1979). Let's not sweep repression under the rug: Toward a cognitive psychology of repression. In J.F. Kihlstrom & F.J. Evans (Eds.), Functional disorders of memory (pp. 355-402). Hillsdale, N.J.: Erlbaum.
Erdelyi, M.H., & Kleinbard, J. (1978). Has Ebbinghaus decayed with time?: The growth of recall (hypermnesia) over days. Journal of Experimental Psychology: Human Learning & Memory, 4, 275-289.
Erdelyi, M.H., & Stein, J.B. (1981). Recognition hypermnesia: The growth of recognition memory (d') over time with repeated testing. Cognition, 9, 23-33.
Evans, F.J. (1979). Contextual forgetting: Posthypnotic source amnesia. Journal of Abnormal Psychology, 88, 556-563.
Evans, F.J. (1988). Posthypnotic amnesia: Dissociation of content and context. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 157-192). New York: Guilford.
Evans, F.J., & Kihlstrom, J.F. (1973). Posthypnotic amnesia as disrupted retrieval. Journal of Abnormal Psychology, 82, 317-323.
Evans, F.J., & Orne, M.T. (1965). Motivation, performance, and hypnosis. International Journal of Clinical & Experimental Hypnosis, 19, 277-296.
Evans, F.J., & Thorn, W.A.F. (1966). Two types of posthypnotic amnesia: Recall amnesia and source amnesia. International Journal of Clinical & Experimental Hypnosis, 14, 333-343.
Feltz, D., & Landers, D. (1983). The effects of mental practice on motor skill learning and performance: A meta-analysis. Journal of Sport Psychology, 5, 25-57.
Fowler, W.L. (1961). Hypnosis and learning. International Journal of Clinical & Experimental Hypnosis, 9, 223-232.
Frankel, F.H. (1988). The clinical use of hypnosis in aiding recall. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 247-264). New York: Guilford.
Freud, S. (1915/1957). Repression. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 14, pp. 141-158). London: Hogarth Press.
Fromm, E. (1970). Age regression with unexpected reappearance of a repressed childhood language. International Journal of Clinical & Experimental Hypnosis, 18, 79-88.
Frye v. United States, 203 F. 1013, 1923.
Geiselman, R.E., & Bagheri, B. (1985). Repetition effects in directed forgetting: Evidence for retrieval inhibition. Memory & Cognition, 13, 57-62.
Geiselman, R.E., Bjork, R.A., & Fishman, D. (1983a). Disrupted retrieval in directed forgetting: A link with posthypnotic amnesia. Journal of Experimental Psychology: General, 112, 58-72.
Geiselman, R.E., Fisher, R.P., MacKinnon, D.P., & Holland, H.L. (1985). Eyewitness memory enhancement in the police interview: Cognitive retrieval mnemonics versus hypnosis. Journal of Applied Psychology, 70, 401-412.
Geiselman, R.E., MacKinnon, D.P., Fishman, D.L., Jaenicke, C., Larner, B.R., Schoenberg, S., & Swartz, S. (1983b). Mechanisms of hypnotic and nonhypnotic forgetting. Journal of Experimental Psychology: Learning, Memory, & Cognition, 9, 626-635.
Golding, J.M., Fowler, S.B., Long, D.L., & Latta, H. (1990). Instructions to disregard potentially useful information: The effects of pragmatics on evaluative judgments and recall. Journal of Memory & Language, 29, 212-227.
Gordon, P., Valentine, E., & Wilding, J.M. (1984). One man's memory: A study of a mnemonist. British Journal of Psychology, 75, 1-14.
Graham, K.R., & Patton, A. (1968). Retroactive inhibition, hypnosis, and hypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 16, 68-74.
Gray, W.H. (1934). The effect of hypnosis on learning to spell. Journal of Educational Psychology, 25, 471-473.
Gregg, V.H. (1979). Posthypnotic amnesia and general memory theory. Bulletin of the British Society of Experimental & Clinical Hypnosis, 2, 11-14.
Gregg, V.H. (1980). Posthypnotic amnesia for recently learned material: A comment on the paper by J.F. Kihlstrom (1980). Bulletin of the British Society of Experimental & Clinical Hypnosis, 2, 11-14.
Grinker, R.R., & Spiegel, J.P. (1945). Men under stress. New York: McGraw-Hill.
Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1978). Practical aspects of memory. London: Academic.
Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1988). Practical aspects of memory: Current research issues. Vol. 1: Memory in everyday life. Vol. 2: Clinical and educational implications. Chichester: Wiley.
Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1991). The obituary on everyday memory and its practical application is premature. American Psychologist, 46, 74-75.
Gudjonsson, G.H. (1984). A new scale of interrogative suggestibility. Personality & Individual Differences, 5, 303-314.
Haber, R.N. (1979). Twenty years of haunting eidetic imagery: Where's the ghost? Behavioral & Brain Sciences, 2, 583-629.
Harding v. State, 5 Md. App. 230, 246 A. 2d 302, 1968.
Hasher, L., Riebman, B., & Wren, F. (1976). Imagery and the retention of free recall learning. Journal of Experimental Psychology: Human Learning & Memory, 2, 172-181.
Hastie, R. (1980). Memory for behavioral information that confirms or contradicts a personality impression. In R. Hastie, T.M. Ostrom, E.B. Ebbesen, R.S. Wyer, D.L. Hamilton, & D.E. Carlston (Eds.), Person memory: The cognitive basis of social perception (pp. 155-177). Hillsdale, N.J.: Erlbaum.
Hastie, R. (1981). Schematic principles in human memory. In E.T. Higgins, C.P. Herman, & M.P. Zanna (Eds.), Social cognition: The Ontario Symposium (Vol. 1, pp. 39-88). Hillsdale, N.J.: Erlbaum.
Hastie, R., & Kumar, P.A. (1979). Person memory: Personality traits as organizing principles in memory for behaviors. Journal of Personality & Social Psychology, 37, 25-38.
Herrmann, D.J. (1988). Memory improvement techniques. New York: Ballantine.
Higbee, K.L. (1977). Your memory: How it works and how to improve it. Englewood Cliffs, N.J.: Prentice-Hall.
Higbee, K.L. (1978). Some pseudo-limitations of mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 147-154). London: Academic.
Higbee, K.L. (1988). Practical aspects of mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory: Current research and issues (Vol. 2, pp. 403-408). Chichester: Wiley.
Hilgard, E.R., & Cooper, L.M. (1965). Spontaneous and suggested posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 13, 261-273.
Hilgard, E.R., & Hommel, L.S. (1961). Selective amnesia for events within hypnosis in relation to repression. Journal of Personality, 29, 205-216.
Hirst, W., Johnson, M.K., Kim, J.K., Phelps, E.A., Risse, G., & Volpe, B.T. (1986). Recognition and recall in amnesics. Journal of Experimental Psychology: Learning, Memory, & Cognition, 12, 445-451.
Hofling, C.K., Heyl, B., & Wright, D. (1971). The ratio of total recoverable memories to conscious memories in normal subjects. Comprehensive Psychiatry, 12, 371-379.
Howard, M.L., & Coe, W.C. (1980). The effect of context and subjects' perceived control in breaching posthypnotic amnesia. Journal of Personality, 48, 342-359.
Huber, P.W. (1991). Galileo's revenge: Junk science in the courtroom. New York: Basic Books.
Huesmann, L.R., Gruder, C.L., & Dorst, G. (1987). A process model of posthypnotic amnesia. Cognitive Psychology, 19, 33-62.
Hull, C.L. (1933). Hypnosis and suggestibility: An experimental approach. New York: Appleton-Century-Crofts.
Hunt, E., & Love, T. (1972). How good can memory be? In A.W. Melton & E. Martin (Eds.), Coding processes in human memory (pp. 237-260). Washington, D.C.: Winston.
Hunt, R.R., & Einstein, G.O. (1981). Relational and item-specific information in memory. Journal of Verbal Learning & Verbal Behavior, 20, 497-514.
Hunter, I.M.L. (1962). An exceptional talent for calculative thinking. British Journal of Psychology, 34, 243-258.
Hunter, I.M.L. (1977). An exceptional memory. British Journal of Psychology, 68, 155-164.
Huse, B. (1930). Does the hypnotic trance favor the recall of faint memories? Journal of Experimental Psychology, 13, 519-529.
Huttenlocher, J., Hedges, L., & Prohaska, V. (1988). Hierarchical organization in ordered domains: Estimating the dates of events. Psychological Review, 95, 471-484.
Jacoby, L.L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306-340.
Johnson, M.K., & Hasher, L. (1987). Human learning and memory. Annual Review of Psychology, 39, 631-668.
Johnson, M.K., & Raye, C.L. (1981). Reality monitoring. Psychological Review, 88, 67-85.
Johnson, R.F.Q. (1976). Hypnotic time-distortion and the enhancement of learning: New data pertinent to the Krauss-Katzell-Krauss experiment. American Journal of Clinical Hypnosis, 19, 89-102.
Kihlstrom, J.F. (1978a). Attempt to revive a forgotten childhood language by means of hypnosis (Hypnosis Research Memorandum #148). Stanford, Ca.: Laboratory of Hypnosis Research, Department of Psychology, Stanford University.
Kihlstrom, J.F. (1978b). Context and cognition in posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 246-267.
Kihlstrom, J.F. (1980). Posthypnotic amnesia for recently learned material: Interactions with "episodic" and "semantic" memory. Cognitive Psychology, 12, 227-251.
Kihlstrom, J.F. (1983). Instructed forgetting: Hypnotic and nonhypnotic. Journal of Experimental Psychology: General, 112, 73-79.
Kihlstrom, J.F. (1985). Posthypnotic amnesia and the dissociation of memory. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 19, pp. 131-178). New York: Academic Press.
Kihlstrom, J.F. (1987). The cognitive unconscious. Science, 237, 1445-1452.
Kihlstrom, J.F. (1989). On what does mood-dependent memory depend? Journal of Social Behavior & Personality, 4, 23-32.
Kihlstrom, J.F. (1990). The psychological unconscious. In L. Pervin (Ed.), Handbook of personality: Theory and research (pp. 445-464). New York: Guilford.
Kihlstrom, J.F. (1991). Hypnosis: A sesquicentennial essay. International Journal of Clinical & Experimental Hypnosis, in press.
Kihlstrom, J.F., Barnhardt, T.M., & Tataryn, D.J. (1991). The psychological unconscious: Found, lost, and regained. American Psychologist, in press.
Kihlstrom, J.F., Brenneman, H.A., Pistole, D.D., & Shor, R.E. (1985). Hypnosis as a retrieval cue in posthypnotic amnesia. Journal of Abnormal Psychology, 94, 264-271.
Kihlstrom, J.F., Easton, R.D., & Shor, R.E. (1983). Spontaneous recovery of memory during posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 246-267.
Kihlstrom, J.F., & Evans, F.J. (1976). Recovery of memory after posthypnotic amnesia. Journal of Abnormal Psychology, 85, 564-569.
Kihlstrom, J.F., & Evans, F.J. (1977). Residual effect of suggestions for posthypnotic amnesia: A reexamination. Journal of Abnormal Psychology, 86, 327-333.
Kihlstrom, J.F., & Evans, F.J. (1978). Generic recall during posthypnotic amnesia. Bulletin of the Psychonomic Society, 12, 57-60.
Kihlstrom, J.F., & Evans, F.J. (1979). Memory retrieval processes in posthypnotic amnesia. In J.F. Kihlstrom & F.J. Evans (Eds.), Functional disorders of memory (pp. 179-218). Hillsdale, N.J.: Erlbaum.
Kihlstrom, J.F., Evans, F.J., Orne, E.C., & Orne, M.T. (1980). Attempting to breach posthypnotic amnesia. Journal of Abnormal Psychology, 89, 603-616.
Kihlstrom, J.F., & Hoyt, I.P. (1988). Hypnosis and the psychology of delusions. In T.F. Oltmanns & B.A. Maher (Eds.), Delusional beliefs (pp. 66-109). New York: Wiley-Interscience.
Kihlstrom, J.F., & Hoyt, I.P. (1990). Repression, dissociation, and hypnosis. In J.L. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health (pp. 181-208). Chicago: University of Chicago Press.
Kihlstrom, J.F., & McConkey, K.M. (1990). William James and hypnosis: A centennial reflection. Psychological Science, 1, 174-178.
Kihlstrom, J.F., & Shor, R.E. (1978). Recall and recognition during posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 330-349.
Kihlstrom, J.F., & Wilson, L. (1984). Temporal organization of recall during posthypnotic amnesia. Journal of Abnormal Psychology, 93, 200-208.
Kihlstrom, J.F., & Wilson, L. (1988). Rejoinder to Spanos, Bertrand, and Perlini. Journal of Abnormal Psychology, 97, 381-383.
Klatzky, R.L. (1980). Human memory: Structures and processes (2nd ed.). San Francisco: Freeman.
Klatzky, R.L. (1991). Let's be friends. American Psychologist, 46, 43-45.
Klein, S.B., Loftus, J., Kihlstrom, J.F., & Aseron, R. (1989). The effects of item-specific and relational information on hypermnesic recall. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 1192-1197.
Kosslyn, S.M. (1980). Image and mind. Cambridge: Harvard University Press.
Krauss, H.K., Katzell, R., & Krauss, B.J. (1974). Effect of hypnotic time-distortion upon free-recall learning. Journal of Abnormal Psychology, 83, 141-144.
Kroger, W.S., & Douce, R.G. (1979). Hypnosis in criminal investigation. International Journal of Clinical & Experimental Hypnosis, 27, 358-374.
Kuplicki, F.P. (1988). Fifth, Sixth, and Fourteenth Amendments: A constitutional paradigm for determining the admissibility of hypnotically refreshed memory. Journal of Criminal Law and Criminology, 78, 853-876.
Labelle, L., Lamarche, M.C., & Laurence, J.-R. (1990). Potential jurors' opinions on the effects of hypnosis on eyewitness identification. International Journal of Clinical & Experimental Hypnosis, 38, 315-319.
Labelle, L., Laurence, J.-R., Nadon, R., & Perry, C. (1990). Hypnotizability, preference for an imagic cognitive style, and memory creation in hypnosis. Journal of Abnormal Psychology, 99, 222-228.
Laurence, J.-R., & Perry, C. (1983). Hypnotically created memory among highly hypnotizable subjects. Science, 222, 523-524.
Laurence, J.-R., Nadon, R., Nogrady, H., & Perry, C. (1986). Duality, dissociation, and memory creation in highly hypnotizable subjects. International Journal of Clinical & Experimental Hypnosis, 34, 295-310.
Laurence, J.-R., & Perry, C. (1988). Hypnosis, will, and memory: A psycho-legal history. New York: Guilford.
Levitt, E.E., & Chapman, R. (1979). Hypnosis as a research method. In E. Fromm & R.E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 185-216). New York: Aldine.
Lockhart, R.S., Craik, F.I.M., & Jacoby, L.L. (1976). Depth of processing, recognition, and recall. In J. Brown (Ed.), Recall and recognition (pp. 75-102). New York: Wiley.
Loftus, E.F. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7, 560-572.
Loftus, E.F. (1978). Reconstructive memory processes in eyewitness memory. In B.D. Sales (Ed.), Perspectives in law and psychology (pp. xxx-xxx). New York: Plenum.
Loftus, E.F. (1991). The glitter of everyday memory... and the gold. American Psychologist, 46, 16-18.
Loftus, E.F., & Loftus, G.R. (1980). On the permanence of stored information in the human brain. American Psychologist, 35, 409-420.
Loftus, E.F., Schooler, J., & Wagenaar, W.A. (1985). The fate of memory: Comment on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 114, 375-380.
Loftus, G.R., & Loftus, E.F. (1976). Human memory: The processing of information. Hillsdale, N.J.: Erlbaum.
London, P., Conant, M., & Davison, G.C. (1966). More hypnosis in the unhypnotizable: Effects of hypnosis and exhortation on rote learning. Journal of Personality, 34, 71-79.
London, P., & Fuhrer, M. (1961). Hypnosis, motivation, and performance. Journal of Personality, 29, 321-333.
Lorayne, H., & Lucas, J. (1974). The memory book. New York: Stein & Day.
Luria, A.R. (1968). The mind of a mnemonist: A little book about a big memory. New York: Basic Books.
Lynn, S.J., Milano, M., & Weekes, J.R. (1991). Hypnosis and pseudomemories: The effects of prehypnotic expectancies. Journal of Personality & Social Psychology, 60, 318-326.
MacLeod, C.M. (1989). Directed forgetting affects both direct and indirect tests of memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 13-21.
Mandler, G. (1967). Organization and memory. In K.W. Spence & J.T. Spence (Eds.), The psychology of learning and motivation (Vol. 1, pp. 327-372). New York: Academic Press.
Mandler, G. (1979). Organization, memory, and mental structures. In C.R. Puff (Ed.), Memory organization and structure (pp. 303-319). New York: Academic Press.
Mandler, G. (1980). Recognizing: The judgment of previous occurrence. Psychological Review, 87, 252-271.
Mandler, J. (1979). Categorical and schematic organization in memory. In C.R. Puff (Ed.), Memory organization and structure (pp. 259-299). New York: Academic.
Marcuse, F.L. (1959). Hypnosis: Fact and fiction. Harmondsworth: Penguin.
McCann, T., & Sheehan, P.W. (1988). Hypnotically induced pseudomemories: Sampling their conditions among hypnotizable subjects. Journal of Personality & Social Psychology, 54, 339-346.
McCloskey, M., & Zaragoza, M. (1985). Misleading postevent information and memory for events: Arguments and evidence against memory impairment hypotheses. Journal of Experimental Psychology: General, 114, 1-16.
McConkey, K.M. (1986). Opinions about hypnosis and self-hypnosis before and after hypnotic testing. International Journal of Clinical & Experimental Hypnosis, 34, 311-319.
McConkey, K.M., Bryant, R.A., Bibb, B.C., & Kihlstrom, J.F. (1991). Trance logic in hypnosis and imagination. Journal of Abnormal Psychology, 100, 464-472.
McConkey, K.M., & Jupp, J.J. (1985). Opinions about the forensic use of hypnosis. Australian Psychologist, 20, 283-291.
McConkey, K.M., & Jupp, J.J. (1985-1986). A survey of opinions about hypnosis. British Journal of Experimental & Clinical Hypnosis, 3, 87-93.
McConkey, K.M., & Kinoshita, S. (1985-1986). Creating memories and reports: Comment on Spanos & McLean. British Journal of Experimental & Clinical Hypnosis, 3, 162-166.
McConkey, K.M., Labelle, L., Bibb, B.C., & Bryant, R.A. (1990). Hypnosis and suggested pseudomemory: The relevance of test context. Australian Journal of Psychology, 42, 197-205.
McConkey, K.M., & Sheehan, P.W. (1981). The impact of videotape playback of hypnotic events on posthypnotic amnesia. Journal of Abnormal Psychology, 90, 46-54.
McConkey, K.M., Sheehan, P.W., & Cross, D.G. (1980). Posthypnotic amnesia: Seeing is not remembering. British Journal of Social & Clinical Psychology, 19, 99-107.
McDaniel, M.A., & Pressley, M. (1987). Imagery and related mnemonic processes: Theories, individual differences, and applications. New York: Springer-Verlag.
McGeoch, G.O. (1935). The conditions of reminiscence. American Journal of Psychology, 47, 65-89.
Metcalfe, J. (1990). Composite holographic associative recall model (CHARM) and blended memories in eyewitness testimony. Journal of Experimental Psychology: General, 119, 145-160.
Mitchell, M.B. (1932). Retroactive inhibition and hypnosis. Journal of General Psychology, 7, 343-358.
Morris, P.E. (1978). Sense and nonsense in traditional mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 155-163). London: Academic.
Morton, J. (1991). The bankruptcy of everyday memory. American Psychologist, 46, 32-33.
Mross, E.F., Klein, S.B., Loftus, J., & Kihlstrom, J.F. (1990). Levels of processing and levels of recall in hypermnesia. Unpublished manuscript, University of Colorado, Boulder, Co.
Nash, M.R. (1987). What, if anything, is regressed about hypnotic age regression? A review of the empirical literature. Psychological Bulletin, 102, 42-52.
Nash, M.R., Drake, S.D., Wiley, S., Khalsa, S., & Lynn, S.J. (1986). The accuracy of recall by hypnotically age regressed subjects. Journal of Abnormal Psychology, 95, 298-300.
Nash, M.R., Johnson, L.S., & Tipton, R. (1979). Hypnotic age regression and the occurrence of transitional object relationships. Journal of Abnormal Psychology, 88, 547-555.
Nash, M.R., Lynn, S.J., Stanley, S., Frauman, D.C., & Rhue, J. (1985). Hypnotic age regression and the importance of assessing interpersonally relevant affect. International Journal of Clinical & Experimental Hypnosis, 33, 224-235.
Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
Neisser, U. (1976). Cognition and reality. San Francisco: Freeman.
Neisser, U. (1978). Memory: What are the important questions? In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 3-24). London: Academic.
Neisser, U. (1991). A case of misplaced nostalgia. American Psychologist, 46, 34-36.
Nelson, T.O. (1978). Detecting small amounts of information in memory: Savings for nonrecognized items. Journal of Experimental Psychology: Human Learning & Memory, 4, 453-468.
Nogrady, H., McConkey, K.M., & Perry, C. (1985). Enhancing visual memory: Trying hypnosis, trying imagination, trying again. Journal of Abnormal Psychology, 94, 195-204.
O'Connell, D.N. (1966). Selective recall of hypnotic susceptibility items: Evidence for repression or enhancement? International Journal of Clinical & Experimental Hypnosis, 14, 150-161.
O'Connell, D.N., Shor, R.E., & Orne, M.T. (1970). Hypnotic age regression: An empirical and methodological analysis. Journal of Abnormal Psychology Monograph, 76(3, Pt. 2), 1-32.
Orne, M.T. (1951). The mechanisms of hypnotic age regression: An experimental study. Journal of Abnormal & Social Psychology, 46, 213-225.
Orne, M.T. (1959). The nature of hypnosis: Artifact and essence. Journal of Abnormal & Social Psychology, 58, 277-299.
Orne, M.T. (1962). On the social psychology of the psychological experiment: With special reference to demand characteristics and their implications. American Psychologist, 17, 776-783.
Orne, M.T. (1979). The use and misuse of hypnosis in court. International Journal of Clinical & Experimental Hypnosis, 27, 311-341.
Orne, M.T., Dinges, D.F., & Orne, E.C. (1990). Rock v. Arkansas: Hypnosis, the defendant's privilege. International Journal of Clinical & Experimental Hypnosis, 38, 250-265.
Orne, M.T., Soskis, D.A., Dinges, D.F., & Orne, E.C. (1984). Hypnotically induced testimony. In G.L. Wells & E.F. Loftus (Eds.), Eyewitness testimony: Psychological perspectives (pp. 171-213). Cambridge: Cambridge University Press.
Orne, M.T., Whitehouse, W.G., Dinges, D.F., & Orne, E.C. (1988). Reconstructing memory through hypnosis: Forensic and clinical implications. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 21-63). New York: Guilford.
Paivio, A. (1971). Imagery and cognitive processes. New York: Holt, Rinehart, & Winston.
Paivio, A. (1986). Mental representations: A dual-coding approach. New York: Oxford University Press.
Parker, P.D., & Barber, T.X. (1964). Hypnosis, task-motivating instructions, and learning performance. Journal of Abnormal & Social Psychology, 69, 499-504.
Patten, E.F. (1932). Does posthypnotic amnesia apply to practice effects? Journal of General Psychology, 7, 196-201.
Payne, D.G. (1986). Hypermnesia for pictures and words: Testing the recall level hypothesis. Journal of Experimental Psychology: Learning, Memory, & Cognition, 12, 16-29.
Payne, D.G. (1987). Hypermnesia and reminiscence in recall: A historical and empirical review. Psychological Bulletin, 101, 5-27.
People v. Shirley, 31 Cal. 3d 18; 641 P.2d 775; 181 Cal. Rptr. 243, 1982.
Perry, C., & Laurence, J.-R. (1990). Hypnosis with a criminal defendant and a crime witness: Two recent related cases. International Journal of Clinical & Experimental Hypnosis, 38, 266-282.
Perry, C.W., Laurence, J.-R., D'Eon, J., & Tallant, B. (1988). Hypnotic age regression techniques in the elicitation of memories: Applied uses and abuses. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 128-154). New York: Guilford.
Perry, C., & Walsh, B. (1978). Inconsistencies and anomalies of response as a defining characteristic of hypnosis. Journal of Abnormal Psychology, 87, 574-577.
Pettinati, H.M., & Evans, F.J. (1978). Posthypnotic amnesia: Evaluation of selective recall of successful experiences. International Journal of Clinical & Experimental Hypnosis, 26, 317-329.
Pettinati, H.M., Evans, F.J., Orne, E.C., & Orne, M.T. (1981). Restricted use of success cues in retrieval during posthypnotic amnesia. Journal of Abnormal Psychology, 90, 345-353.
Pressley, M., & McDaniel, M.A. (1988). Doing mnemonics research well: Some general guidelines and a study. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory: Current research and issues (Vol. 2, pp. 409-414). Chichester: Wiley.
Pressley, M., & Mullally, J. (1984). Alternative research paradigms in the analysis of mnemonics. Contemporary Educational Psychology, 9, 48-60.
Putnam, W.H. (1979). Hypnosis and distortions in eyewitness memory. International Journal of Clinical & Experimental Hypnosis, 27, 437-448.
Pylyshyn, Z. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88, 16-45.
Rabinowitz, J.C., Mandler, G., & Patterson, K.E. (1977). Determinants of recognition and recall: Accessibility and generation. Journal of Experimental Psychology: General, 106, 302-329.
Radtke, H.L., & Spanos, N.P. (1981). Temporal sequencing during posthypnotic amnesia: A methodological critique. Journal of Abnormal Psychology, 90, 476-485.
Radtke, H.L., Thompson, V.A., & Egger, L.A. (1987). Use of retrieval cues in breaching hypnotic amnesia. Journal of Abnormal Psychology, 96, 335-340.
Raginsky, B. (1969). Hypnotic recall of aircrash cause. International Journal of Clinical & Experimental Hypnosis, 17, 1-19.
Reed, H. (1970). Studies of the interference process in short-term memory. Journal of Experimental Psychology, 84, 452-457.
Register, P.A., & Kihlstrom, J.F. (1986). Hypnotic effects on hypermnesia. International Journal of Clinical & Experimental Hypnosis, 35, 155-170.
Register, P.A., & Kihlstrom, J.F. (1987). Hypnosis and interrogative suggestibility. Personality & Individual Differences, 9, 549-558.
Reiff, R., & Scheerer, M. (1959). Memory and hypnotic age regression: Developmental aspects of cognitive function explored through hypnosis. New York: International Universities Press.
Reiser, M. (1976). Hypnosis as an aid in criminal investigation. Police Chief, 46, 39-40.
Reyher, J. (1967). Hypnosis in research on psychopathology. In J.E. Gordon (Ed.), Handbook of clinical and experimental hypnosis (pp. 110-147). New York: Macmillan.
Richardson-Klavehn, A., & Bjork, R.A. (1988). Measures of memory. Annual Review of Psychology, 39, 475-543.
Rock v. Arkansas, 288 Ark. 566; 708 S.W. 2d 78, 1986; 55 L.W. 4925, 1987; 107 S. Ct. 2704, 1987.
Roediger, H.L. (1980). The effectiveness of four mnemonics in ordering recall. Journal of Experimental Psychology: Human Learning & Memory, 6, 558-567.
Roediger, H.L. (1982). Hypermnesia: The importance of recall time and asymptotic level of recall. Journal of Verbal Learning & Verbal Behavior, 21, 662-665.
Roediger, H.L. (1990). Implicit memory: A commentary. Bulletin of the Psychonomic Society, 28, 373-380.
Roediger, H.L. (1991). They read an article? A commentary on the everyday memory controversy. American Psychologist, 46, 37-40.
Roediger, H.L., & Challis, B.H. (1988). Hypermnesia: Improvements in recall with repeated testing. In C. Izawa (Ed.), Current issues in cognitive processes: The Tulane Floweree Symposium on Cognition (pp. 175-199). Hillsdale, N.J.: Erlbaum.
Roediger, H.L., & Payne, D.G. (1982). Hypermnesia: The role of repeated testing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8, 66-72.
Roediger, H.L., Payne, D.G., Gillespie, G.L., & Lean, D.S. (1982). Hypermnesia as determined by level of recall. Journal of Verbal Learning & Verbal Behavior, 21, 635-655.
Roediger, H.L., & Thorpe, L.A. (1978). The role of recall time in producing hypermnesia. Memory & Cognition, 6, 286-305.
Roediger, H.L., Weldon, M.S., & Challis, B.H. (1989). Explaining dissociations between implicit and explicit measures of retention: A processing account. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 3-41). Hillsdale, N.J.: Erlbaum.
Rosenhan, D., & London, P. (1963). Hypnosis in the unhypnotizable: A study in rote learning. Journal of Experimental Psychology, 65, 30-34.
Rosenthal, B.G. (1944). Hypnotic recall of material learned under anxiety and non-anxiety producing conditions. Journal of Experimental Psychology, 34, 369-389.
Sanders, G.S., & Simmons, W.L. (1983). Use of hypnosis to enhance eyewitness accuracy: Does it work? Journal of Applied Psychology, 68, 70-77.
Schacter, D.L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, & Cognition, 13, 501-518.
Schacter, D.L. (1990). Perceptual representation systems and implicit memory: Toward a resolution of the multiple memory systems debate. In A. Diamond (Ed.), Developmental and neural bases of higher cognitive function. Annals of the New York Academy of Sciences, 608, 543-571.
Schacter, D.L., Harbluk, J.L., & McLachlan, D.R. (1984). Retrieval without recollection: An experimental analysis of source amnesia. Journal of Verbal Learning & Verbal Behavior, 23, 593-611.
Schacter, D.L., & Kihlstrom, J.F. (1989). Functional amnesia. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology (Vol. 3, pp. 209-231). Amsterdam: Elsevier.
Schank, R.C., & Abelson, R.P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale, N.J.: Erlbaum.
Scharf, B., & Zamansky, H.S. (1963). Reduction of word-recognition threshold under hypnosis. Perceptual & Motor Skills, 17, 499-510.
Scheflin, A.W., & Shapiro, J.L. (1989). Trance on trial. New York: Guilford.
Schul, Y., & Burnstein, E. (1985). When discounting fails: Conditions under which individuals use discredited information in making a judgment. Journal of Personality & Social Psychology, 49, 894-903.
Schuyler, B.A., & Coe, W.C. (1981). A physiological investigation of volitional and nonvolitional experience during posthypnotic amnesia. Journal of Personality & Social Psychology, 40, 1160-1169.
Schuyler, B.A., & Coe, W.C. (1989). More on volitional experiences and breaching posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 37, 320-331.
Sears, A.B. (1955). A comparison of hypnotic and waking learning of the International Morse Code. Journal of Clinical & Experimental Hypnosis, 3, 215-221.
Shapiro, S.R., & Erdelyi, M.H. (1974). Hypermnesia for pictures but not for words. Journal of Experimental Psychology, 103, 1218-1219.
Sheehan, P.W. (1988a). Confidence, memory, and hypnosis. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 95-127). New York: Guilford.
Sheehan, P.W. (1988b). Memory distortion in hypnosis. International Journal of Clinical & Experimental Hypnosis, 36, 296-311.
Sheehan, P.W., & Grigg, L. (1985). Hypnosis and the acceptance of an implausible cognitive set. British Journal of Experimental & Clinical Hypnosis, 3, 5-12.
Sheehan, P.W., Grigg, L., & McCann, T. (1984). Memory distortion following exposure to false information in hypnosis. Journal of Abnormal Psychology, 93, 259-265.
Sheehan, P.W., & Perry, C.W. (1977). Methodologies of hypnosis: A critical appraisal of contemporary paradigms of hypnosis. Hillsdale, N.J.: Erlbaum.
Sheehan, P.W., Statham, D., & Jamieson, G.A. (1991). Pseudomemory effects over time in the hypnotic setting. Journal of Abnormal Psychology, 100, 39-44.
Sheehan, P.W., & Tilden, J. (1983). Effects of suggestibility and hypnosis on accurate and distorted retrieval from memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 9, 283-293.
Sheehan, P.W., & Tilden, J. (1984). Real and simulated occurrences of memory distortion following hypnotic induction. Journal of Abnormal Psychology, 93, 47-57.
Sheehan, P.W., & Tilden, J. (1986). The consistency of occurrences of memory distortion following hypnotic induction. International Journal of Clinical & Experimental Hypnosis, 34, 122-137.
Shepard, R.N., & Cooper, L.A. (1982). Mental images and their transformations. Cambridge: MIT Press. Shields, I.W., & Knox, V.J. (1986). Level of processing as a determinant of hypnotic hypermnesia. Journal of Abnormal Psychology, 95, 358-364. Shimamura, A.P. & Squire, L.R. (1987). A neuropsychological study of fact memory and source amnesia. Journal of Experimental Psychology: Learning, Memory, & Cognition, 13, 464-473. Silva, C.E., & Kirsch, I. (1987). Breaching amnesia by manipulating expectancy. Journal of Abnormal Psychology, 96, 325-329. Singer, J.L. (1990). Repression and dissociation: Implications for personality theory, psychopathology, and health. Chicago: University of Chicago Press. Sloane, M.C. (1981). A comparison of hypnosis vs. waking state and visual vs. non-visual recall instructions for witness/victim memory retrieval in actual major crimes. Doctoral dissertation, Florida State University. Ann Arbor, Mi.: University Microfilms International, #8125873. Slotnick, R.S., Liebert, R.M., & Hilgard, E.R. (1965). The enhancement of muscular performance in hypnosis through exhortation and involving instructions. Journal of Personality, 33, 37-45. Slotnick, R., & London, P. (1965). Influence of instructions on hypnotic and nonhypnotic performance. Journal of Abnormal Psychology, 70, 38-46. Smith, M.C. (1983). Hypnotic memory enhancement of witnesses: Does it work? Psychological Bulletin, 94, 387-407. Smith, S. (1988). Environmental context-dependent memory. In G.M. Davies & D.M. Thomson (Eds.), Memory in context: Context in memory (pp. 13-34. London: Wiley. Spanos, N.P. (1986). Hypnotic behavior: A social-psychological interpretation of amnesia, analgesia, and "trance logic". Behavioral & Brain Sciences, 9, 449-502. Spanos, N.P., Bertrand, L., & Perlini, A.H. (1988). Reduced clustering during hypnotic amnesia for a long word list: Comment. Journal of Abnormal Psychology, 97, 378-380. Spanos, N.P., & Bodorik, H.L. (1977). Suggested amnesia and disorganized recall in hypnotic and task-motivated subjects. Journal of Abnormal Psychology, 86, 295-305. Spanos, N.P., deGroot, H.P., & Gwynn, M.I. (1987). Trance logic as incomplete responding. Journal of Personality & Social Psychology, 53, 911-921. Spanos, N.P., Gwynn, M.I., Comer, S.L., Baltruweit, W.J., & de Groh, M. (1989). Are hypnotically induced pseudomemories resistant to cross-examination? Law & Human Behavior, 13, 271-289. Spanos, N.P., Della Malva, C.L., Gwynn, M.I., & Bertrand, L.D. (1988). Social psychological factors in the genesis of posthypnotic source amnesia. Journal of Abnormal Psychology, 1988, 97, 322-329. Spanos, N.P., James, B., & Degroot, H.P. (1990). Detection of simulated hypnotic amnesia. Journal of Abnormal Psychology, 99, 179-182. Spanos, N.P., & McLean, J. (1985-1986a). Hypnotically created pseudomemories: Memory distortions or reporting biases? British Journal of Experimental & Clinical Hypnosis, 3, 155-159. Spanos, N.P., & McLean, J. (1985-1986b). Hypnotically created false reports do not demonstrate pseudomemories. British Journal of Experimental & Clinical Hypnosis, 3, 160-161. Spanos, N.P., Radtke, H.L., & Bertrand, L.D. (1984). Hypnotic amnesia as a strategic enactment: Breaching amnesia in highly susceptible subjects. Journal of Personality & Social Psychology, 47, 1155-1169. Spanos, N.P., Radtke, H.L., & Dubreuil, D.L. (1982). Episodic and semantic memory in posthypnotic amnesia: A reevaluation. Journal of Personality & Social Psychology, 43, 565-573. 
Spanos, N.P., Tkachyk, M.E., Bertrand, L.D., & Weekes, J.R. (1984). The dissipation hypothesis of posthypnotic amnesia: More disconfirming evidence. Psychological Reports, 55, 191-196. Spence, J.D. (1984). The memory palace of Matteo Ricci. New York: Viking Penguin. Stalnaker, J.M., & Riddle, E.E. (1932). The effect of hypnosis on long-delayed recall. Journal of General Psychology, 6, 429-440. State v. Armstrong, 110 Wis. 2d 555; 329 N.W. 2d 386; 461 U.S. 946, 1983. State v. Hurd, 86 N.J. 525; 432 A. 2d 86, 1981. State v. Mack, 292 N.W. 2d 764; 27 Cr. L. 1043, 1980. (Minn.) St. Jean, R. (1980). Hypnotic time distortion and learning: Another look. Journal of Abnormal Psychology, 89, 20-24. St. Jean, R. (1989). Hypnosis and time perception. In N.P. Spanos & J.F. Chaves (Eds.), Hypnosis: The cognitive-behavioral perspective (pp. 175-186). Buffalo, N.Y.: Prometheus Press. St. Jean, R., & Coe, W.C. (1981). Recall and recognition memory during posthypnotic amnesia: A failure to confirm the disrupted-search hypothesis and the memory disorganization hypothesis. Journal of Abnormal Psychology, 90, 231-241. Sutcliffe, J.P. (1960). "Credulous" and "sceptical" views of hypnotic phenomena: A review of certain evidence and methodology. International Journal of Clinical & Experimental Hypnosis, 8, 73-101. Sutcliffe, J.P. (1961). "Credulous" and "skeptical" views of hypnotic phenomena: Experiments in esthesia, hallucination, and delusion. Journal of Abnormal & Social Psychology, 62, 189-200. Thorndike, E.L. (1913). Educational psychology: The psychology of learning. Vol. 2. New York: Teachers College Press. Timm, H.W. (1981). The effect of forensic hypnosis techniques on eyewitness recall and recognition. Journal of Police Science & Administration, 9, 188-194. Tkachyk, M.E., Spanos, N.P., & Bertrand, L.D. (1985). Variables affeting subjective organization during posthypnotic amnesia. Journal of Research in Personality, 19, 95-108. True, R.M. (1949). Experimental control in hypnotic age regression. Science, 110, 583. Tulving, E. (1964). Intratrial and intertrial retention: Notes toward a theory of free recall verbal learning. Psychological Review, 71, 219-237. Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381-403). New York: Academic. Tulving, E. (1974). Cue-dependent forgetting. American Scientist, 62, 74-82. Tulving, E. (1976). Ecphoric processes in recall and recognition. In J. Brown (Ed.), Recall and recognition (pp. 37-73). New York: Wiley. Tulving, E. (1983). Elements of episodic memory. Oxford: Clarendon Press. Tulving, E. (1991). Memory research is not a zero-sum game. American Psychologist, 46, 41-42. Tulving, E., & Pearlstone, Z. (1966). Availability versus accessibility of information in memory for words. Journal of Verbal Learning and Verbal Behavior, 5, 381-391. Tulving, E., & Schacter, D.L. (1990). Priming and human memory systems. Science, 247, 301-306. Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373. Tversky, B., & Tuchin, M. (1989). A reconciliation of the evidence on eyewitness testimony: Comments on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 118, 86-91. Udolf, R. (1983). Forensic hypnosis: Psychological and legal aspects. Lexington, Ma.: Heath. Udolf. 
(1990). Rock v. Arkansas: A critique. International Journal of Clinical & Experimental Hypnosis, 38, 239-249. Wagner, D.A. (1978a). Culture and mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 180-188). London: Academic. Wagner, D.A. (1978b). Memories of Morocco: The influence of age, schooling, and environment on memory. Cognitive Psychology, 10, 1-28. Wagstaff, G.F., & Ovenden, M. (1979). Hypnotic time distortion and free-recall learning: An attempted replication. Psychological Research, 40, 291-298. Waldfogel, S. (1948). The frequency and affective character of childhood memories. Psychological Monographs, 62, 39-48. Warner, K.E. (1979). The use of hypnosis in the defense of criminal cases. International Journal of Clinical & Experimental Hypnosis, 27, 417-436. Watkins, J.G. (1949). Hypnotherapy of war neuroses: A clinical psychologist's casebook. New York: Ronald Press. Wells, W.R. (1940). The extent and duration of experimentally induced amnesia. Journal of Psychology, 2, 137-131. White, R.W., Fox, G.F., & Harris, W.W. (1940). Hypnotic hypermnesia for recently learned material. Journal of Abnormal & Social Psychology, 35, 88-103. Whitehouse, W.G., Dinges, D.F., Orne, E.C., & Orne, M.T. (1991). Hypnotic hypermnesia: Enhanced memory accessibility or report bias? Journal of Experimental Psychology: Learning, Memory, & Cognition xx, xxx-xxx. Wilson, L., Greene, E., & Loftus, E.F. (1986). Beliefs about forensic hypnosis. International Journal of Clinical & Experimental Hypnosis, 34, 110-121. Wilding, J.M., & Valentine, E. (1985). One man's memory for prose, faces, and names. British Journal of Psychology, 76, 215-219. Williamsen, J.A., Johnson, H.J., & Eriksen, C.W. (1965). Some characteristics of posthypnotic amnesia. Journal of Abnormal Psychology, 70, 123-131. Wilson, L., & Kihlstrom, J.F. (1986). Subjective and categorical organization of recall in posthypnotic amnesia. Journal of Abnormal Psychology, 95, 264-273. Winograd, T. (1975). Computer memories: A metaphor for memory organization. In C.N. Cofer (Ed.), The structure of human memory (pp. 133-161). San Francisco: Freeman. Wollen, K.A., Weber, A., & Lowry, D. (1972). Bizarreness versus interaction of mental images as determinants of learning. Cognitive Psychology, 3, 518-523. Wood, G. (1967). Mnemonic systems in recall. Journal of Educational Psychology Monograph, 58(6, Pt. 2). Worthington, T.S. (1979). The use in court of hypnotically enhanced testimony. International Journal of Clinical & Experimental Hypnosis, 27, 402-416. Wyer, R.S., & Budesheim, T.L. (1987). Person memory and judgments: The impact of information that one is told to disregard. Journal of Personality & Social Psychology, 53, 14-29. Yates, A. (1961). Hypnotic age regression. Psychological Bulletin, 88, 429-440. Yates, F.A. (1966). The art of memory. Chicago: University of Chicago Press. Young, P.C. (1925). An experimental study of mental and physical functions in the normal and hypnotic states. American Journal of Psychology, 36, 214-232. Young, P.C. (1926). An experimental study of mental and physical functions in the normal and hypnotic states: Additional results. American Journal of Psychology, 37, 345-356. Yuille, J.C. (1983). Imagery, memory, and cognition: Essays in honor of Allan Paivio. Hillsdale, N.J.: Erlbaum. Zamansky, H. (1985-1986). Hypnotically created pseudomemories: Memory distortions or reporting biases? British Journal of Experimental & Clinical Hypnosis, 3, 160-161. 
Zamansky, H.S., Scharf, B., & Brightbill, R. (1964). The effect of expectancy for hypnosis on prehypnotic performance. Journal of Personality, 32, 236-248. Zelig, M., & Beidelman, W.B. (1981). The investigative use of hypnosis: A word of caution. International Journal of Clinical & Experimental Hypnosis, 29, 401-412.
http://socrates.berkeley.edu/~kihlstrm/SelfRegMem.htm
13
119
Telephone numbering plan

A telephone numbering plan is a system that allows subscribers to make and receive telephone calls across long distances. The area code is the part of the dialed telephone number that specifies a telephone exchange system. Telephone numbering plans assign area codes to exchanges so that dialers may contact telephones outside their local system. Normally occurring at the beginning of the number, area codes usually indicate geographical areas. Together, numbering plans and their component area codes direct telephone calls to particular regions on a public switched telephone network (PSTN), where they are further routed by the local network. Callers within the geographical area of a given area code usually do not need to include that area code in the number dialed, giving the caller shorter local telephone numbers. In international phone numbers, the area code directly follows the country calling code.

Although the International Telecommunication Union (ITU) has attempted to promote common standards among nation states, numbering plans take different formats in different parts of the world. For example, the ITU recommends that member states adopt 00 as their international access code. However, as these recommendations are not binding on member states, many have not adopted it, such as the United States, Canada, and other countries and territories participating in the North American Numbering Plan.

The international numbering plan establishes country codes, that is, area codes that denote nations or groups of nations. The E.164 standard regulates country codes at the international level. However, it is each country's responsibility to define the numbering within its own network. As a result, regional area codes may have:
- A fixed length, e.g. 3 digits in the US; 1 digit in Australia.
- A variable length, e.g. between 2 and 5 in Germany and in Austria; between 1 and 3 in Japan; 1 or 2 in Israel.
- Or be incorporated into the subscriber's number, as is the case in many countries, such as Spain or Norway. This is known as a "closed" telephone numbering plan. In some cases a trunk code (usually 0) must still be dialed, as in Belgium, Switzerland, and South Africa.

In many cases the area codes determine the rate (price) of a call. For example, in North America calls to the 800, 888, 877, and 866 areas are free to the caller and paid by the receiver, while calls to the 900 area are "premium rate", meaning more expensive. Normally intra-area calls are charged at a lower rate than inter-area calls, but there are exceptions; e.g., in Israel both are charged at the same rate.

Open numbering plans

An open numbering plan is one in which there are different dialing arrangements for local and long distance telephone calls. This means that to call another number within the same city or area, callers need only dial the number, but for calls outside the area, an area code is required. The area code is prefixed by a trunk code (usually "0"), which is omitted when calling from outside the country. To call a number in Amsterdam in the Netherlands, for example:
xxx xxxx (within Amsterdam - no area code required)
(020) xxx xxxx (outside Amsterdam)
+31 20 xxx xxxx (outside the Netherlands)
In the United States, Canada, and other countries or territories using the North American Numbering Plan (NANP), the trunk code is '1', which is also (by coincidence) the country calling code.
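These open-plan dialing variants are mechanical enough to capture in a few lines of code. The following is a minimal sketch in Python (the helper name format_dial_string is hypothetical, written for illustration rather than taken from any telephony library) that reproduces the Amsterdam example above under the stated Dutch conventions: trunk code "0", country code "31", area code "20".

    # A minimal sketch of open-plan dial-string formatting. The helper name
    # format_dial_string is hypothetical, for illustration only; it encodes
    # the Dutch conventions described above.

    def format_dial_string(country_code: str, trunk: str, area: str,
                           subscriber: str, caller_location: str) -> str:
        """Return the digits a caller dials under an open numbering plan."""
        if caller_location == "same_area":
            return subscriber                       # within Amsterdam
        if caller_location == "same_country":
            return f"({trunk}{area}) {subscriber}"  # trunk code + area code
        # International callers drop the trunk code and add the country code.
        return f"+{country_code} {area} {subscriber}"

    for where in ("same_area", "same_country", "abroad"):
        print(format_dial_string("31", "0", "20", "xxx xxxx", where))
    # -> xxx xxxx
    # -> (020) xxx xxxx
    # -> +31 20 xxx xxxx

Running the loop prints the three forms exactly as listed above; a closed plan, by contrast, would return the same full-length string in every case.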
To call a number in San Francisco, the dialing procedure will vary:
xxx xxxx (local calls, no area code required)
1 415 xxx xxxx (outside San Francisco)
415 xxx xxxx (mobile phones within NANP)
+1 415 xxx xxxx (outside NANP)
However, in parts of North America, especially where a new area code overlays an older area code, dialing 1 + area code is now required even for local calls, which means that the NANP is now closed in certain areas and open in others. Dialing from mobile phones is different in that the area code is always necessary, but not the trunk code; this is true regardless of the phone's area code.

Closed numbering plans

A closed numbering plan is one in which the subscriber's number is a standard length and is used for all calls, even in the same area. This has traditionally been the case in small countries and territories where area codes have not been required. However, there has been a trend in many countries towards making all numbers a standard length and incorporating the area code into the subscriber's number. This usually makes the use of a trunk code obsolete. For example, to call Oslo in Norway before 1992, one would dial:
xxx xxx (within Oslo - no area code required)
(02) xxx xxx (within Norway)
+47 2 xxx xxx (outside Norway)
After 1992, this changed to a closed eight-digit numbering plan, e.g.:
22xx xxxx (within Norway - including Oslo)
+47 22xx xxxx (outside Norway)
Other examples:
Paris: 01 xxxx xxxx (outside France +33 1 xxxx xxxx)
Brussels: 02 xxx xxxx (outside Belgium +32 2 xxx xxxx)
Geneva: 022 xxx xxxx (outside Switzerland +41 22 xxx xxxx)
Cape Town: 021 xxx xxxx (outside South Africa +27 21 xxx xxxx)
While the use of full national dialing is less user-friendly than using only a local number without the area code, the increased use of mobile phones, which require full national dialing and can store numbers, means that this is of decreasing importance. It also makes it easier to display numbers in the international format, as no trunk code is required; hence a number in Prague, Czech Republic, can now be displayed as:
+420 2 xxxx xxxx
formerly:
02 xxxx xxxx (inside Czech Republic)
+420 2 xxxx xxxx (outside Czech Republic)

Numbering plans by country

Argentina
Main article: Argentine telephone numbering plan
Country Code: 54

Australia
Main article: Australian telephone numbering plan
Country Code: 61
Telephone numbers in Australia consist of a single-digit area code and eight-digit local numbers, the first four of which generally specify the exchange, and the final four a line at that exchange. (Most exchanges, though, have several four-digit exchange codes.) Australia is divided geographically into a few large area codes, some of which cover more than one state or territory. Prior to the introduction of eight-digit numbers in the early to mid-1990s, telephone numbers were seven digits in the major capital cities, with a single-digit area code, and six digits in other areas, with a two-digit area code. There were more than sixty such codes by 1990, spurring the reorganization.
02 New South Wales and Australian Capital Territory
03 Victoria and Tasmania
04 Mobile phone services
07 Queensland
08 Western Australia (including Christmas Island and the Cocos (Keeling) Islands), South Australia, and the Northern Territory
The system is not perfect; the codes do not strictly follow state borders. For example, Broken Hill in New South Wales is in the 08 area code, due to its proximity to the South Australian border. The main international prefix is 0011.
000 is the emergency telephone number in Australia, but the internationally accepted GSM mobile emergency telephone number 112 also works on mobile phones. Telephone numbers within Australia are allocated by the Australian Communications Authority.

Brazil
Country Code: 55
In Brazil, long distance and international dialing requires the use of carrier selection codes after the trunk code or international access code. For example, to call Rio de Janeiro from another city in Brazil, one would dial the trunk code '0', a two-digit carrier code, the area code '21', and the subscriber's number. Consequently, a Rio de Janeiro number would be displayed in Brazil as 0xx21 nnnn nnnn. A few areas use nnn-nnnn in lieu of nnnn nnnn, such as Natal (in the state of Rio Grande do Norte, in northeastern Brazil, where the area code is '84'). However, this practice will be phased out in 2006.
xx is the two-digit operator code for long distance calls:
- 15 for Telefónica
- 21 for Embratel
- 23 for Intelig
and some others. Outbound international dialing in Brazil follows the same pattern, e.g. 00xx 1 212 xxx xxxx. The current carriers (the two digits dialed after the "0") are:
Embratel: 21
Intelig: 23
Telemar: 31
Brasil Telecom*: 14
Telefônica: 15
Claro**: 36
TIM**: 41
CTBC*: 12
GVT*: 25
Vesper São Paulo*: 89
Vesper Norte-Leste*: 85
Sercomtel*: 43
Those marked with an asterisk (*) are only available in certain areas. Those marked with "**" are to be used with mobile phones only. Area codes are distributed geographically. See List of Brazilian area codes.

A note about mobile telephony in Brazil
Mobile phone numbers are usually prefixed with the digit '7', '8' or '9'. '7' is used mainly for radiophone service (iDEN technology). Numbers with an '8' are always GSM mobiles, while '9' mostly denotes analogue (AMPS), TDMA, and CDMA mobiles. The mobile prefix is related to the carrier's license: on newer licenses, use of the '8' digit is mandatory, while the formerly state-owned mobile operators always use '9' (or '7', in some cases in the São Paulo area). Some GSM mobiles are nevertheless prefixed with a '9' because the now-privatized operators decided to overlay GSM on their existing ranges. Mobile phone numbers generally have eight digits. Exceptions exist in Brasília.

China
Country Code: 86

Colombia
Main article: Colombian telephone numbering plan
Country Code: 57

East Timor (Timor Leste)
Country Code: 670
The only authorized telephone and data carrier in East Timor is Timor Telecom, a company owned by Portugal Telecom with 50.1% of the shares (source: Telecommunications Research Project at the University of Hong Kong). Services are very limited and very expensive. According to a press release issued by Portugal Telecom, the country had 2,100 fixed (landline) phones, 25,000 mobile cellular subscribers, 500 dial-up access users, and 30 broadband users as of October 2004. Portugal Telecom signed a 15-year contract in 2002 to invest US$ 29 million to rebuild and operate the phone system. The contract could be extended by 10 more years, totalling 25 years of monopoly. 2003 gross revenue totalled €10.5 million. The telephone numbering plan in East Timor is as follows:
Mobile: 670 72X-YYYY, where X can be 3, 4, 6 or 7
Service Numbers: 670 721-XXXX
Fixed: 670 32X-YYYY, where X can be 2 or 3.
Fixed numbers are also referred to as 670 390 32X-YYYY.
Emergency Numbers:
Ambulance Service: 110
Fire Dept: 115
Emergency: 112

European Union (1996 proposal)
Proposed Country Code: 3
In 1996, the European Commission proposed the introduction of a single telephone numbering plan, in which all European Union member states would use the code '3'. Calls between member states would no longer require the use of the international access code '00'. This proposal would have required countries like Germany, the United Kingdom, Denmark and others, whose country codes began with the digit '4', to return these to the International Telecommunication Union. For example, to call a number in Berlin, in Germany:
xxxx xxxx (within Berlin)
030 xxxx xxxx (within Germany)
1 4930 xxxx xxxx (within the EU)
+3 49 30 xxxx xxxx (outside the EU)
Similarly, for a number in Dublin, in Ireland:
xxxx xxxx (within Dublin)
01 xxxx xxxx (within Ireland)
1 53 1 xxxx xxxx (within the EU)
+3 53 1 xxxx xxxx (outside the EU)
A Green Paper on the proposal was published, but it was felt by many in the industry that the disruption and inconvenience of such a scheme would outweigh any advantages. The EU proposal should not be confused with the European Telephony Numbering Space (ETNS) scheme, which uses the code +388 and is intended to complement, rather than replace, existing national numbering plans.

Finland
Country Code: 358
Before 1996:
90 xxx xxx (within Finland)
+358 0 xxx xxx (outside Finland)
After 1996:
09 xxx xxx (within Finland)
+358 9 xxx xxx (outside Finland)
The default international access code became 00, although other codes such as 999 are also still used.

France
Main article: French telephone numbering plan
Country Code: 33
In 1996, France changed to a ten-digit numbering scheme, as follows:
01 Paris
02 Northwest France
03 Northeast France
04 Southeast France
05 Southwest France
06 Mobile phone services
08 Freephone (numéro vert) and shared cost services

Germany
Country Code: 49
Following German reunification in 1990, the former East Germany's code +37 was relinquished and returned to the ITU for reallocation. The digit '3' became the initial digit for the new area codes in the former East Germany, while the area code 030 for West Berlin was used for the reunified city. There are no standard lengths for either area codes or subscribers' numbers in Germany, meaning that some subscribers' numbers may be as short as three digits. Larger towns have shorter area codes, permitting a larger quantity of telephone numbers in that area. Some examples:
- (0)1*: service numbers and cell phones
- (0)30: Berlin
- (0)40: Hamburg
- (0)421: Bremen
- (0)89: München
- (0)800: toll free
- (0)900: premium rate
The default length for newly assigned numbers (area code without the 0, plus subscriber number) is 10 or 11 digits; older, shorter numbers will not be replaced, but they are not reassigned once given back. Not counting the national trunk prefix '0', area codes run from two digits (only Berlin +49 30, Hamburg +49 40, Frankfurt +49 69, and Munich +49 89) to five digits in the former East Germany (+49 3). Area codes in +49 2 and +49 4 through +49 9 are up to four digits. Three-digit area codes of the form +49 XY1 were assigned to the cities where the former central switching boards were located (usually larger cities). Exceptions to this numbering scheme exist in the Rhine-Ruhr area, with its many large cities, and in the former East Germany. Non-geographic numbers, including mobile phone services, are prefixed with '01' (+49 1 internationally). 0130 was originally used for toll-free numbers, but this has been changed to 0800.
The prefix 0180 is used for shared cost services, while 0700 is for personal numbers. Premium rate numbers use the prefix 0900 (the older prefix 0190 is being phased out); these are not accessible from outside Germany. Mobile numbers have codes in the range +49 15XX to +49 17X. Geographic codes are distributed as follows:
- '02' (+49 2 internationally): the western part of the country, e.g. '0221' for Cologne (+49 221) or '0211' for Düsseldorf (+49 211)
- '03' (+49 3): the former East Germany, as noted above
- '04' (+49 4): the north, e.g. Hamburg (+49 40) or Bremen (+49 421)
- '05' (+49 5): the middle-north, e.g. Hanover (+49 511) or Bielefeld (+49 521)
- '06' (+49 6): the central parts, e.g. Frankfurt (+49 69) or Wiesbaden (+49 611)
- '07' (+49 7): the south-west, e.g. Stuttgart (+49 711) or Karlsruhe (+49 721)
- '08' (+49 8): the south-east, e.g. Munich (+49 89) or Augsburg (+49 821)
- '09' (+49 9): also the south-east, e.g. Nuremberg (+49 911) or Bayreuth (+49 921)
The international access code and the emergency services number are the European standard 00 and 112, respectively.

Greece
Country Code: 30
During 2001-2002, Greece moved to a closed ten-digit numbering scheme in two stages, with the result that subscribers' numbers changed twice. For example, before the change, a number in Athens would have been dialed as follows:
xxx xxxx (within Athens)
(01) xxx xxxx (within Greece)
+30 1 xxx xxxx (outside Greece)
In 2001, a '0' was added after the area code, which was incorporated into the subscriber's number:
010 xxx xxxx (within Greece, including Athens)
+30 10 xxx xxxx (outside Greece)
Finally, in 2002, the leading '0' was changed to a '2' (for geographic numbers):
210 xxx xxxx (within Greece, including Athens)
+30 210 xxx xxxx (outside Greece)
Mobile phone numbers were similarly prefixed with the digit '6'. (A short code sketch of this two-stage renumbering appears after the Ireland section below.)

Hong Kong
Country Code: 852
Main article: Hong Kong telephone numbering plan

Ireland
Country Code: 353
Telephone numbers in Ireland are similar in format to those in the United Kingdom, with only the subscriber's number being required for local dialing. The trunk prefix is '0', followed by an area code whose first digit indicates the geographical area:
01 Dublin
02 Cork (021) and South
04 Drogheda (041) and East
05 Waterford (051) and South East
06 Limerick (061) and South West
07 Sligo (071) and North West
09 Galway (091) and West
Area codes have varied in length between one and three digits, and subscribers' numbers between five and seven digits, but there is now a migration to a standard format, as follows: (0xx) xxx xxxx. Dublin numbers are seven digits, but may change to eight digits in the future. The 08 numbering range was originally used for calls to Northern Ireland, but following the UK's renumbering of Northern Ireland in 2000 this changed, so to call a number in Belfast from the Republic:
Before 2000: (080) 1232 xxx xxx
After 2000: (048) 90xx xxxx, or via the UK numbering plan: 00 44 28 90xx xxxx
Calls to Great Britain changed earlier:
Before 1992: 030 xxx xxx xxx
After 1992: 00 44 xxx xxx xxx
Mobile phones use the prefixes 086, 087 and 088, with 0818 being used for 'find me anywhere' services. Freephone services use the prefix 1800, while shared cost or Lo-Call numbers use the prefix 1850. Internet access numbers use the prefixes 1891, 1892 and 1893.
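Returning to the Greek renumbering described above: the two stages are simple, deterministic string edits, which the following minimal Python sketch makes explicit (the function names are hypothetical, written purely to illustrate the example).

    # The two Greek renumbering stages as string edits. Hypothetical helpers,
    # for illustration only.

    def stage_2001(area: str, subscriber: str) -> str:
        # 2001: a '0' was added after the area code and absorbed
        # into the subscriber's number ('01' -> '010 ...').
        return f"{area}0 {subscriber}"

    def stage_2002(number_2001: str) -> str:
        # 2002: the leading '0' became '2' for geographic numbers.
        return "2" + number_2001[1:]

    athens = stage_2001("01", "xxx xxxx")    # '010 xxx xxxx'
    print(athens, "->", stage_2002(athens))  # '010 xxx xxxx -> 210 xxx xxxx'

The same pair of edits applies to any Greek geographic number, which is what allowed the transition to be automated network-wide.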
Italy
Country Code: 39
Italy changed to a closed numbering plan in 1998, with callers being told to fissa il prefisso ("fix the prefix"). Unlike in other closed numbering plans, the trunk code '0' was simply incorporated into subscribers' landline numbers; e.g., for a number in Rome:
06 xxx xxxx (within Rome - after 1999)
06 xxx xxxx (within Italy)
+39 06 xxx xxx (outside Italy - after 1998)
Calls to mobile phone numbers within Italy were also affected, dropping the previously used prefix '0' (as was already the case for overseas callers); e.g., for the Omnitel-Vodafone provider in Italy:
0347 xxx xxx (within Italy - before 1999)
347 xxx xxx (within Italy - after 1999)
+39 347 xxx xxx (outside Italy)
Until 1996, San Marino was part of the Italian numbering plan, using the Italian area code 0549, but in that year it adopted its own international code, 378. However, instead of using international dialing codes, dialing arrangements between San Marino and Italy continued as before. In 1998, San Marino incorporated the 0549 area code into its subscribers' numbers, following the Italian format:
0549 xxx xxx (San Marino from Italy)
+378 0549 xxx xxx (San Marino from the rest of the world)
+39 0549 xxx xxx (San Marino via Italy)

Japan
Country Code: 81
Main article: Japanese telephone numbering plan

Mexico
Country Code: 52
In 1999 Mexico introduced the following new prefixes for long distance and international calls:
00 - international direct dialing (00 + country code + national number), including USA and Canada
01 - domestic direct dialing (01 + area code + number)
02 - domestic person-to-person (02 + area code + number)
09 - international person-to-person (09 + country code + number), including USA and Canada
This did not affect calls from outside Mexico, which continued to be dialed in the same format; for example, to call a number in Mexico City: +52 55 xxxx xxxx.

Netherlands
Country Code: 31
In the Netherlands, the area codes are (excluding the leading '0') one, two or three digits long, with larger towns and cities having shorter area codes, permitting a larger quantity of telephone numbers in the ten digits used. Since renumbering in 1996, subscribers' numbers are either six digits long or, in the larger towns and cities, seven digits.
010: Rotterdam
020: Amsterdam
030: Utrecht
040: Eindhoven
050: Groningen
06x: mobile phone numbers
0676: internet access numbers
070: The Hague
0800: toll free numbers
0900: premium rate calls
0906: premium rate
112: emergency services number
Previously, 06 was used for toll-free and premium rate numbers, while 09 was used as the international access code before this changed to 00.

New Zealand
Country Code: 64
Since 1993, land-line telephone numbers in New Zealand consist of a single-digit area code and seven-digit local numbers, the first three of which generally specify the exchange and the final four a line at that exchange. The long distance prefix is '0'. There are five regional area codes, which must be used when calling outside the local dialing area; for example, to call from Christchurch to Dunedin in the South Island, the '03' prefix must be dialed first. In many parts of the country, the old area code was incorporated into the new number; hence Nelson (055) xx xxx became (03) 55x xxxx.
03 the South Island and the Chatham Islands
04 Wellington Region, except the Wairarapa and Otaki
06 the remaining southern and eastern North Island:
- Taranaki
- Manawatu-Wanganui, except Taumarunui
- Hawke's Bay
- Gisborne
- the Wairarapa and Otaki
07 the Waikato, the Bay of Plenty and Taumarunui
09 Auckland and Northland
Mobile phone numbers are prefixed with 02, followed by one digit and the subscriber's number, which is either six or seven digits, dialed in full, e.g. 025 xxx xxx or 027 xxx xxxx. Free call services generally use the prefix 0800 (although some use 0508), while local-rate numbers (usually internet access numbers) have the prefix 08xx. Premium rate services use the code 0900 followed by five digits. The main international prefix is '00' (there are others for special purposes, such as 0161 for discounted rates). The emergency services number is '111'.

Peru
Country Code: 51
Also on that date, '9' was prepended to existing cellular/mobile numbers. Mobile subscriber numbers are now 8 digits in Lima (+51 1 9xxxxxxx) and 7 digits elsewhere (+51 xx 9xxxxxx).

Portugal
Country Code: 351
Portugal changed to a closed numbering plan in 1999. Previously, the trunk prefix was '0', but this was dropped, and the area code, prefixed by the digit '2', was incorporated into the subscriber's number, so that a nine-digit number was used for all calls, e.g.:
xxx xxxx (within Lisbon)
(01) xxx xxxx (within Portugal)
+351 1 xxx xxxx (outside Portugal)
+351 21x xxx xxx (after 1999)
Mobiles similarly changed, with the digits '96' replacing the prefix '0936':
0936 xxx xxx (within Portugal)
+351 936 xxx xxxx (outside Portugal)
+351 96 xxx xxxx (after 1999)
Other new number ranges include:
10xx Carrier selection codes
700 xxx xxx Personal numbering
8xx xxx xxx Geographic expansion
800 xxx xxx Freephone
80x xxx xxx Shared cost

Russia and Commonwealth of Independent States
Country Code: 7
Under the Russian numbering plan, the trunk code is '8', with subscribers' numbers being a total of ten digits long, e.g. Moscow:
xxx xxxx (within Moscow)
8 095 xxx xxxx (within Russia and some CIS republics)
+7 095 xxx xxxx (outside Russia and some CIS republics)
Following the break-up of the former Soviet Union, all former republics apart from Kazakhstan now have international codes separate from Russia's, although from some republics the old area codes are still used. The international access code is 8~10: callers dial '8', wait for a tone, and then dial '10', followed by the number.

Spain
Country Code: 34
Spain changed to a closed numbering plan in 1998.
Previously, the trunk prefix was '9', but this was incorporated into the subscriber's number, so that a nine-digit number was used for all calls, e.g.:
xxx xxxx (within Madrid)
(91) xxx xxxx (within Spain)
+34 1 xxx xxxx (outside Spain)
+34 91x xxx xxx (after 1998)
Mobiles similarly changed, prefixed with the digit '6':
906 xxx xxx (within Spain)
+34 06 xxx xxx (outside Spain)
+34 606 xxx xxx (after 1998)
New numbering ranges have also since been introduced:
10xx Carrier selection codes
700 xxx xxx Personal numbering
8xx xxx xxx Geographic expansion
800 xxx xxx Freephone
80x xxx xxx Shared cost
Spain's international access code also changed from 07 to 00, but this did not affect dialing arrangements for calls to Gibraltar, in which the provincial code 9567 is used instead of the international code 350, e.g.:
9567 xxxxx (Gibraltar from Spain)
+350 xxxxx (Gibraltar from all other countries)
+34 9567 xxxxx (Gibraltar via Spain)

Sweden
Country Code: 46
In Sweden, the area codes are (excluding the leading '0') one, two or three digits long, with larger towns and cities having shorter area codes, permitting a larger quantity of telephone numbers in the eight to ten digits used. Before the 1990s, ten-digit numbers were very rare, but they have become increasingly common because of the deregulation of telecommunications, the new 112 emergency number (which required changing all numbers starting with 11), and the creation of a single area code for the Greater Stockholm area. No subscriber number is shorter than five digits.
010: NMT mobile phones
01x(x): South Middle Sweden
020: toll free
0200: toll free
02x(x): North Middle Sweden
03x(x): Central South Sweden
031: Gothenburg
040: Malmö
04x(x): Southern Sweden
05x(x): Western Sweden
055: Grums
06x(x): Northern Sweden
070: GSM mobile phones
071: premium rate calls
073: GSM mobile phones
0730: GSM mobile phones
074(x): pagers
076: GSM mobile phones
07x(x): various non-geographical area codes
08: Greater Stockholm
09x(x): Far Northern Sweden and premium rate calls
112: emergency services number
Sweden adopted 00 as its international access code in 1999, replacing 009 and 007. According to Post- och Telestyrelsen, the supervising authority for postal and telecommunication services, it seems possible that Sweden will adopt a closed numbering plan in the future.

Switzerland
Country Code: 41
In 2002, Switzerland adopted a closed numbering plan but retained the use of the trunk code 0. The original plan was to dispense with the trunk code completely, so that all calls within Switzerland would require only a nine-digit number; however, this was modified on grounds of cost. The 01 prefix for numbers in Zurich is being phased out in favor of 044, with 043 being used for overlay numbers. Until 1999, Liechtenstein formed part of the Swiss numbering plan, using the area code 075, but in that year it adopted its own international code, 423, meaning that calls to and from Switzerland require international dialing. The 07 number range is now used for mobile phone services.

United Kingdom
Country Code: 44
Main article: UK Telephone Numbering Plan
Since April 28, 2001, the overall structure of the UK's National Numbering Plan is:
01 Geographic area codes
02 Geographic area codes (newly introduced in 2000)
03 Reserved for area codes
04 Reserved
05 Reserved for corporate numbering and VoIP services
06 Reserved
07 "Find Me Anywhere" services (mobile phone, pager and personal numbers)
08 Freephone (toll free), local and national rate numbers
09 Premium rate services and multimedia
A short list of examples, set out in the officially approved (Ofcom) number groups:
020 xxxx xxxx: London
028 xxxx xxxx: Northern Ireland
029 xxxx xxxx: Cardiff
0131 xxx xxxx: Edinburgh
01382 xxx xxx: Dundee, a typical area code
In the United Kingdom, area codes are (including the leading '0', which is dropped when calling UK numbers from overseas) three, four, or five digits long, with larger towns and cities having shorter area codes, permitting a larger quantity of telephone numbers in the eleven digits used. Area codes are sometimes still called "STD" (subscriber trunk dialling) codes.

United States and Canada
- Main article: North American Numbering Plan
- See also: List of North American area codes
In the United States and Canada, area codes are regulated by the North American Numbering Plan. Currently, all area codes (officially called numbering plan areas) in the NANP must have 3 digits. Many other countries have area codes that are shorter for heavily populated areas and longer for lightly populated areas. Before 1995, North American area codes were of the form [2-9][0-1][0-9], with the prefix or NNX in the form [2-9][2-9][0-9]; that codespace filled up due to overallocation and was extended to [2-9][0-8][0-9]-[2-9][0-9][0-9] (referred to as NPA-NXX). N11 codes (such as 911) are not eligible to be used as area codes. Not all area codes correspond to a geographical area. Codes 8xx (excluding 811 and 899) with the last two digits matching, such as 800, 888, 877, 866, etc., are reserved for toll-free calls. Code 900 is reserved for premium-rate calls (also known as dial-it services, although such services also exist in some places on a local basis using a particular three-digit prefix following the area code, often "976"). Area code 710 has been reserved for the United States Government, although no lines other than the single telephone number 710-627-4387 ("NCS-GETS") had actually been connected on this code as of 2004. None of these changes enables the existence of variable-length area codes, which are commonplace outside North America.
There are several noteworthy peculiarities in the NANP:
- In many localities, all calls must include an area code, even when calling within the same area. For example, in large urban areas a 'local' call may be in another area code, or two area codes may overlay the same geography. In most cases a "1" needs to be dialed before the area code as well. As new area codes are required, this is becoming more common.
- Mobile phones are allocated numbers within regular geographic area codes corresponding (usually) to the subscriber's home or work location, instead of within a distinctive subset of area codes (e.g. 07xxx in the UK). Since a calling party cannot reliably distinguish between landline and mobile phone numbers, this forced NANP cellular telephone carriers into a charging model wherein the cellular subscriber usually pays for all airtime on his or her phone, whether placing or receiving a call, as opposed to the distinctive-subset model in which callers are usually charged at a higher rate ("caller-pays") for dialing mobile numbers than landlines.
Some have cited this "receiver-pays" model as a reason for the US's relatively slow adoption of cellular telephony in the 1990s, compared to that in Europe and the Pacific Rim, though this has been largely countered by the post-2000 prevalence of free long-distance calling (to cellular or landline phones) on nearly all US cellular plans.

Alphabetic mnemonic system

Another oddity of NANP telephone numbering is the popularity of alphabetic dialing. On most US telephones, three letters appear on each number button from 2 through 9. This accommodates 24 letters. Historically, the letters Q and Z were omitted, though on some modern telephones they are added, so that the alphabet is apportioned as follows:
2 = ABC
3 = DEF
4 = GHI
5 = JKL
6 = MNO
7 = P(Q)RS
8 = TUV
9 = WXY(Z)
No letters are allocated to the 1 or 0 keys (although some corporate voice mail systems are set up to count Q and Z as 1, and some old telephones assigned the Z to the digit 0). Originally, this scheme was meant as a mnemonic device for telephone number prefixes. When telephone numbers in the US were standardized in the mid-20th century, they were made seven digits long, including a two-digit prefix, the latter expressed as letters rather than numbers. (Before World War II, many localities used three letters and four numbers, and in much of California during this period phone numbers had only six digits: two letters followed by four numbers.) The prefix was a name, and the first two or three letters (usually shown in capitals) of the name were dialed. Later, the third letter (where previously used) was replaced by a number; this generally happened after World War II, although New York City did this in 1930. Thus, the famous Glenn Miller tune "PEnnsylvania 6-5000" refers to the telephone number 736-5000, the number of the Hotel Pennsylvania, which still bears the same number today. Similarly, the classic Elizabeth Taylor film "BUtterfield 8" refers to the section of New York City where the film is set, where the telephone prefixes include 288 (on the East Side of Manhattan between roughly 64th and 86th Streets).
Today this system has been abandoned, but alphabetic dialing remains as a commercial mnemonic gimmick, particularly when combined with toll-free numbers. For example, one can dial 1-800-FLOWERS to send flowers to someone. Sometimes longer words are used; for example, one might be invited to give money to a public radio station by dialing 1-866-KPBS-GIVE. The "number" is 8 digits long, but only the first seven need be dialed. If an eighth (or more) digit is dialed, the switching system will ignore it. (A short code sketch of this mapping follows the links below.)

External links
- World Telephone Numbering Guide
- Free Area Code Listing (US and Canada)
- North American Numbering Plan Administration (NANPA)
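To make the keypad mapping and the NPA-NXX shape concrete, here is a minimal Python sketch (the helper names are hypothetical, written for illustration rather than taken from any telephony library). It translates a vanity number such as 1-800-FLOWERS into its dialable digits and checks a ten-digit number against the post-1995 pattern quoted above.

    import re

    # Keypad letters as apportioned above; Q and Z are the modern additions.
    KEYPAD = {
        "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
        "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
    }
    LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

    def to_digits(vanity: str) -> str:
        """Translate a vanity number to digits; punctuation is skipped."""
        return "".join(
            ch if ch.isdigit() else LETTER_TO_DIGIT.get(ch, "")
            for ch in vanity.upper()
        )

    # Post-1995 NANP shape quoted above: area code [2-9][0-8][0-9],
    # central-office prefix [2-9][0-9][0-9], then four subscriber digits.
    NPA_NXX = re.compile(r"[2-9][0-8][0-9][2-9][0-9]{2}[0-9]{4}")

    print(to_digits("1-800-FLOWERS"))    # 18003569377
    print(to_digits("1-866-KPBS-GIVE"))  # 186657274483; digits past the
                                         # seventh local digit are ignored
    print(bool(NPA_NXX.fullmatch(to_digits("800-FLOWERS"))))  # True

Note that FLOWERS happens to spell exactly the seven local digits, so nothing is discarded, whereas KPBS-GIVE yields an eighth local digit that, as described above, the switching system simply ignores.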
Between the World Wars

World War I abruptly ended at the eleventh hour of the eleventh day of the eleventh month in 1918. At that time the United States Army had nearly four million men in uniform, half of them overseas. President Wilson negotiated the peace treaty in Paris with other world leaders against a backdrop of immense if short-lived American power. As in the aftermath of the nation's earlier wars, a massive demobilization began, which soon reduced the Army to about 224,000 men, a force far smaller than that of the other major powers.1 Limited budgets as well as reduced manpower became the order of the day. Despite Wilson's efforts, the Senate rejected the Treaty of Versailles, and the nation hastened to return to its traditional isolation. The League of Nations, centerpiece of the president's peace plan, was formed without U.S. participation. Yet the years that followed, often viewed as an era of withdrawal for the United States and of stagnation for the Army, brought new developments to the field of military communications. Technical advances in several areas, especially voice radio and radar, had major consequences for the Signal Corps. When events abroad made it clear that the Wilsonian dream of a lasting peace was only that, such innovations helped to shape the nation's military response to the new and more terrible conflict that lay ahead.

The silencing of the guns in November 1918 did not complete the U.S. Army's work in Europe. In spite of pressures for rapid demobilization, shipping shortages delayed the departure of most units from European shores until the spring and summer of 1919. Although Pershing embarked for home on 1 September 1919, American troops remained in France through the end of the year. For its part, the Signal Corps gradually turned over its communication lines, both those it had built and those it had leased, to the French. In addition, the Corps had to dispose of vast quantities of surplus war materiel and equipment.2 According to the terms of the Armistice, the Third Army (organized in November 1918) moved up to the Rhine River, and American soldiers continued to occupy a zone in the Rhineland until 1923.3 The 1st Field Signal Battalion comprised part of these forces and operated the German military and civilian telephone and telegraph lines, which had been turned over to the Americans. The unit returned home in October 1921.4

In addition to these activities, the Signal Corps provided communications for the Paris Peace Conference, which began in January 1919. Brig. Gen. Edgar Russel placed John J. Carty of AT&T, who had not yet doffed his uniform, in charge of setting up this system. The Signal Corps installed a telephone central switchboard at the conference site in the Crillon Hotel and provided communications for President Wilson at his residence. Several of the women operators from the front operated these lines. The Signal Corps could also connect the president with the American forces in Germany.5 Despite the importance of its work in Europe, the main story of the Signal Corps, as of the Army, was one of rapid demobilization. From a total at the Armistice of 2,712 officers and 53,277 enlisted men, the Corps had dropped by June 1919 to 1,216 officers and 10,372 men.
A year later its strength stood at less than one-tenth its wartime total, with 241 officers and 4,662 enlisted men on the rolls.6 As the soldiers came home, the government lifted the economic restrictions imposed during the war, restored control over the civilian communications systems to the commercial companies, and dismantled most of the wartime boards and commissions.7 These changes were aspects of the return to normalcy, reflective of the nation's resurgent isolationism and its desire to escape from the international arena it had entered during the war.

Meanwhile, Congress debated the future military policy of the United States. The War Department favored the maintenance of a large standing Army numbering some 500,000 officers and men, but its proposal failed to win the support of the war-weary public or their representatives in Congress. As part of the usual postwar review of lessons learned, Congress held lengthy hearings on Army reorganization, but twenty months passed before it enacted new defense legislation.8 On 4 June 1920 President Wilson signed into law the National Defense Act of 1920. Written as a series of amendments to the 1916 defense act, the new legislation made sweeping changes and remained in effect until 1950.9 It established the Army of the United States, comprising three components: the Regular Army, the Organized Reserves, and the National Guard. It set the Regular Army's strength at approximately 300,000 (17,700 officers and 280,000 men), with 300 officers and 5,000 men allotted to the Signal Corps.10 The act also abolished the detail system for Signal Corps officers above the rank of captain. In the future, they would receive permanent commissions in the Corps. Congress also abandoned the system of territorial departments within the continental United States and replaced it with nine corps areas. These were intended to serve as tactical commands rather than simply as administrative headquarters. Each corps area would support one Regular Army division.11 Hawaii, the Philippines, and Panama continued to constitute separate departments. Other significant provisions included the creation of the Air Service as a new branch, along with the Chemical Warfare Service and the Finance Department.12

Ironically, while the Signal Corps received recognition in the new defense act as a combat arm, changes in doctrine concurrently took away its tactical communications function.13 In April 1919 Pershing had convened a committee of high-ranking officials, called the Superior Board, to examine the organizational and tactical experiences of the war. Col. Parker Hitt, who had served as chief signal officer of the First Army, represented the Signal Corps' interests. Drawing upon the proceedings of boards previously held at the branch level, the board concentrated on the structure of the infantry division and recommended that the division be increased in size to achieve greater firepower even at the expense of mobility. Because Pershing disagreed with the panel's advice, favoring a smaller, more mobile organization, he withheld its report from the War Department for a year.14 For the Signal Corps, the Superior Board's recommendations resulted in a dramatic change in its role within the Army. In their postwar reviews, both the infantry and artillery boards had expressed a desire to retain their own communication troops. The Superior Board agreed and, with the approval of the secretary of war, this modification made its way into policy.
Henceforth, the Signal Corps' responsibility for communications would extend only down to division level. Below that echelon the individual arms became responsible for their own internal communications as well as for connecting themselves with the command lines of communication established by the Signal Corps.15 Although the Signal Corps retained overall technical supervision, it no longer controlled communications from the front lines to Washington as it had done successfully during World War I. Understandably, Chief Signal Officer Squier protested the change, arguing that it would result in confusion: This office is more than ever of the opinion that the present system of dividing signaling duties and signaling personnel, in units smaller than divisions, among the various branches of the service, is not wise and a return to the former system which provided Signal Corps personnel for practically all signaling duties is recommended.16 But his protest fell on deaf ears. The Army's revised Field Service Regulations, approved in 1923, reflected the doctrinal changes.17

In a further departure from the past, Congress had given the War Department discretion to determine the Army's force structure at all levels.18 Col. William Lassiter, head of the War Plans Division of the General Staff and a member of the Superior Board, presided over a panel to study the Army's organization. Unlike the Superior Board, this body, designated the Special Committee (but more commonly known as the Lassiter Committee), favored a reduction in the infantry division's size, while retaining its "square" configuration of two brigades and four infantry regiments. Much of the reduction resulted from proposed cuts in the number of support troops. Under its plan, divisional signal assets were reduced to a single company, reflecting their reduced mission under the postwar doctrine. Approved by the Army chief of staff, General Peyton C. March, and written into the tables of organization, the new policy placed the infantry division's signal company (comprising 6 officers and 150 men) in the category of special troops, along with military police, light tank, and ordnance maintenance companies. Yet few of these units were actually organized. For most of the interwar years the Army had just three active infantry divisions in the continental United States (the 1st, 2d, and 3d) and the 1st Cavalry Division. Thus the Signal Corps contained very few tactical units. Signal service companies, meanwhile, served in each of the nine corps areas as well as at Camp Vail, New Jersey, and in Alaska, Hawaii, the Canal Zone, and the Philippines.

A shrunken organization carried out a more limited mission in a nation that seemingly wanted to forget about military matters. Despite a booming national economy, the Army did not prosper during the "Roaring Twenties." Budget-minded Congresses never appropriated funds to bring it up to its authorized strength. In 1922 Congress limited the Regular Army to 12,000 commissioned officers and 125,000 enlisted men, only slightly more than had been in uniform when the United States entered World War I.21 Eventually Congress reduced enlisted strength to 118,000, where it remained until the late 1930s. Army appropriations, meanwhile, stabilized at around $300 million, about half the projected cost of the defense act if fully implemented.
The Army remained composed of skeleton organizations with most of its divisions little more than "paper tigers."22 Under these circumstances, the fate of the Signal Corps was not exceptional. But it did suffer to an unusual degree because its operations were far-flung and its need for costly materiel was great.

[Photograph: Signal students take a break from their classes]

The Corps' actual strength never reached the figures authorized in the defense act; in 1921 Congress cut its enlisted personnel to 3,000, and by 1926 this figure had dropped to less than 2,200. At the same time, officer strength remained well below 300.23 Moreover, the Signal Corps lost a significant percentage of its skilled enlisted personnel each year to private industry, which could offer them significantly higher salaries.24 The branch's annual appropriation plummeted from nearly $73 million for fiscal year 1919 to less than $2 million for fiscal year 1923, and by 1928 it had risen only slightly.25 The War Department's financial straits dictated that surplus war equipment be used up, even if obsolete, and only limited funds were available to purchase or develop new items.

Signal training suffered as well. During demobilization, most of the wartime camps had been shut down. The Signal School at Fort Leavenworth, which had been closed during the war, opened briefly to conduct courses for officers from September 1919 to June 1920 before shutting its doors permanently. But there was an important exception to the general picture of decline: Camp Vail, New Jersey, became the new location of the Signal School, officially opening in October 1919. The school offered training for both officers and enlisted men of the Signal Corps as well as those from other branches.26 In 1920 the school began instructing members of the Reserve Officers Training Corps, and the following year added courses for National Guard and Reserve officers. Students from foreign armies, including those of Cuba, Peru, and Chile, also received training at Camp Vail. Here the Corps prepared its field manuals, regulations, and other technical publications as well as its correspondence courses and testing materials.27 The post also had the advantage of being close to New York City, where the students traveled to view the latest in commercial communication systems. They gained practical field experience by participating in the annual Army War College maneuvers. Signal officers could further enhance their education by attending communication engineering courses at such institutions as Yale University and the Massachusetts Institute of Technology.28

In 1925 Camp Vail became a permanent post known as Fort Monmouth.29 Here the 51st Signal Battalion (which had fought during World War I as the 55th Telegraph Battalion), the Signal Corps' only active battalion-size unit, made its home during the interwar years, along with the 15th Signal Service Company and the 1st Signal Company.30 Fort Monmouth also became the home of the Signal Corps' Pigeon Breeding and Training Center. Although the Army had sold most of its birds at the end of the war, the Signal Corps retained a few lofts along the Mexican border, in the Panama Canal Zone, and at several camps and flying stations. At Monmouth, the Corps' pigeon experts devoted much effort to training birds to fly at night. Some may also have wished that they could breed the pigeons with parrots so the birds could speak their messages.31 Each year the Corps entered its pigeons in exhibitions and races, winning numerous prizes.
In April 1922 the Signal Corps' pigeons participated in a contest that, however ludicrous to a later age, was taken seriously at the time. Responding to an argument raised by the San Francisco press, Maj. Henry H. Arnold of the Army Air Service challenged the pigeons to a race from Portland, Oregon, to San Francisco, to determine whether a pigeon or a plane could deliver a message faster. As the race began, the pigeons disappeared from view while Arnold struggled for forty-five minutes to start his airplane's cold engine. Then he had to make several stops for fuel. Meanwhile, in San Francisco, citizens received telegraphic bulletins of the race's progress, with the pigeons apparently holding their early lead. Bookies did a brisk business as bettors began backing the birds. When Arnold finally landed in San Francisco after a seven-and-a-half-hour journey, he expected to be the loser. But surprisingly, no pigeons had yet arrived, and none did so for two more days. Perhaps aviation was not just for the birds after all.32 Despite the outcome, the Signal Corps did not abandon its use of pigeons, and in 1927 it was maintaining about one thousand birds in sixteen lofts in the United States, the Canal Zone, Hawaii, and the Philippines.33 Although the Signal Corps had lost much of its wartime mission, it still performed an important peacetime function by providing the Army's administrative communications. As it had for many years, the Corps continued to operate the telephone and telegraph systems at Army installations and to maintain coast artillery fire control systems. In addition, the Signal Corps received authorization in 1921 to set up a nationwide radio net. Stations were located at the headquarters of each corps area and department, as well as in certain major cities. Each corps area in turn established its own internal system connecting posts, camps, and stations. The 17th Service Company (redesignated in 1925 as the 17th Signal Service Company) operated the net's headquarters in Washington, D.C., which bore the call letters WVA (later changed, appropriately enough, to WAR).34 Stations at Fort Leavenworth, Kansas, and Fort Douglas, Utah, relayed messages to the West Coast. Due to atmospheric disturbances and other forms of interference, good service meant that a message filed in Washington reached the West Coast by the following day.35 Although established to serve as an emergency communications system in the event of the destruction or failure of the commercial wire network, on a day-to-day basis the radio net handled much of the War Department's message traffic formerly carried by commercial telegraph, saving the government a considerable expense. By 1925, 164 stations, including those on Army ships and in Alaska, came under the net's technical supervision, and the chief signal officer described it as "the largest and most comprehensive radio net of its kind in the world today."36 The success of the radio net led to the establishment of the War Department Message Center on 1 March 1923, through the merger of the War Department's telegraph office with the Signal Corps' own telegraph office and radio station. The chief signal officer became the director of the center, which coordinated departmental communications in Washington and dispatched them by the most appropriate means, whether telegraph, radio, or cable.
Although originally intended for War Department traffic only, the center eventually handled messages for over fifty federal agencies.37 In an attempt to supplement its limited regular force, the Signal Corps formed the Army Amateur Radio System in 1925, with the net control station located at Fort Monmouth. The system operated every Monday night except during the summer months, when static interfered too greatly. The volunteer operators constituted a sizable pool of skilled personnel upon whom the Army could call in case of emergency. Each corps area signal officer appointed an amateur operator, known as the radio aide, to represent the operators in his area.38 Among President Wilson's concerns during the 1919 peace negotiations in Paris had been the future of postwar communications. In the past British companies had controlled global communications through their ownership of most of the world's submarine cables. During the war the British government had exercised its jurisdiction by intercepting cable traffic. Wilson sought to prevent such a monopoly in the future, and debate at the conference revolved around how the captured German cables would be allocated.39 Radio did not appear as an issue on the agenda at Paris, even though it constituted a new force in international communications that would greatly change the balance of the equation. Indications of its potential importance had appeared during the war when the Navy used its station at New Brunswick, New Jersey, to broadcast news to Europe, in particular the Fourteen Points enunciated by President Wilson. The Germans in turn had used radio to transmit to the United States their willingness to negotiate an armistice with the Allies. When Wilson crossed the Atlantic to attend the peace conference, he maintained communication with Washington via radiotelephone. (Due to technological limitations, there would be no transatlantic voice telephone cables until after World War II.) Despite these early achievements, radio remained in its infancy. Lacking a nationwide radio broadcasting network, Wilson was compelled to fight for the peace treaty by embarking upon a strenuous barnstorming tour that destroyed his health.40 After the war the new medium soon fulfilled its promise. Radio technology rapidly moved away from the spark-gap method to the continuous waves generated by vacuum tubes, which were capable of carrying voice and music. Radio's ability to be broadcast made it more difficult for any one party or nation to control the dissemination of information. Instead of the point-to-point communications of the telegraph and telephone, radio could reach all who wanted to listen and who possessed a simple receiver. The era of mass communications had arrived. Within the United States, the Navy endorsed the retention of governmental control over radio as a means to prevent foreign domination of the airwaves. Congress did not act accordingly, however, and the government returned the stations to their owners.41 To counter foreign competition, particularly that of the British-controlled Marconi Company, a solution was soon found. In 1919 an all-American firm, the Radio Corporation of America (RCA), was formed through the merger of General Electric and the American Marconi Company.
By means of cross-licensing agreements with the industry's leaders (AT&T, Westinghouse, and the United Fruit Company), RCA obtained the use of their radio patents, thus securing a virtual monopoly over the latest technology.42 Under the leadership of its general manager, David Sarnoff, a former Marconi employee, RCA helped to create the nation's first broadcasting network, the National Broadcasting Company (NBC), in 1926.43 With the wartime restrictions lifted, an extraordinary radio boom swept over the United States. It began in November 1920 when the nation's first commercial radio station went on the air, KDKA in Pittsburgh, owned and operated by the Westinghouse Company.44 In 1922, when more than five hundred new stations went on the air, Chief Signal Officer Squier referred to the radio phenomenon as "the outstanding feature of the year in signal communications."45 The thousands of veterans who had received wireless training during the war, plus legions of amateur "hams" with their homemade crystal sets, fueled the movement. The spectacular growth of private and commercial radio users necessitated, however, more stringent regulation of licenses and frequencies. A power struggle ensued over who should control the medium, the federal government or private enterprise. Since the Commerce Department had been granted certain regulatory powers under the Radio Act of 1912, Secretary of Commerce Herbert Hoover attempted to bring order out of the chaos by convening a series of conferences among radio officials in Washington. Ultimately, in 1927, Congress enacted a new Radio Act that created an independent agency to oversee the broadcasting industry, the Federal Radio Commission, forerunner of the present Federal Communications Commission (FCC). Radio thus remained a commercially dominated medium, but subject to governmental regulation.46 The Signal Corps played a role in the industry's growth. The Fourth International Radio Conference was to have met in Washington in 1917, but the war forced its postponement. In 1921 Chief Signal Officer Squier headed an American delegation to Paris to help plan the rescheduled meeting. The rapid technological changes of the next several years, however, caused a further delay. When the conference finally convened in Washington in October 1927, a decade after its initial date, one of the chief items on its agenda was the international allocation of radio frequencies.47 Radio technology was beginning to link the entire world together, including remote and inaccessible regions such as Alaska. Radio had a considerable impact upon the Washington-Alaska Military Cable and Telegraph System, which continued to serve as an important component of the Signal Corps' chain of communications. By 1923 over 40 percent of the Alaskan stations employed radio.48 Meanwhile, the deteriorating condition of the underwater cable, nearly twenty years old, mandated its replacement as soon as possible. Despite the Army's restricted budget, the Signal Corps succeeded in securing an appropriation of $1.5 million for the project. First, the Corps acquired a new cable ship, the Dellwood, to replace the Burnside, which had been in service in Alaska since 1903. Under the supervision of Col. George S. Gibbs, who had helped string the original Alaskan telegraph line as a lieutenant, the Corps completed the laying of the new cable in 1924. With five times the capacity of the earlier cable, it more than met the system's existing and anticipated needs.
On land, the total mileage of wire lines steadily dwindled as radio links expanded. Radio cost less to maintain both in monetary and in human terms. No longer would teams of men have to endure the hardships of repairing wires in the harsh climate. In 1928 the Signal Corps discontinued the last of its land lines, bringing a colorful era of WAMCATS history to an end.49 Weather reporting continued as an important Signal Corps function, even though the branch had lost most of its experienced observers upon demobilization. New personnel were trained at Fort Monmouth, and officers could receive meteorological instruction at the Massachusetts and California Institutes of Technology. By July 1920 the Corps had fifteen stations providing meteorological information to the Field and Coast Artillery, Ordnance, and Chemical Warfare branches as well as to the Air Service. As in the past, the Signal Corps' weather watchers made their observations three times daily.50 The Corps refrained from duplicating the work of the Weather Bureau, however, and passed its information along for incorporation into the bureau's forecasts. In 1921 the Corps began exchanging data between some of its stations by radio.51 The Air Service, soon to become the Army Air Corps, placed the heaviest demands upon the Signal Corps' meteorological services. In 1921 the Air Service established a model airway between Washington, D.C., and Dayton, Ohio. Although the Signal Corps provided weather information to the Army pilots, it did not initially have enough weather stations to provide the level of assistance needed. In the meantime, the Air Service depended upon the Weather Bureau, only to find that it too had difficulty meeting the airmen's requirements. Consequently, by 1925 the Signal Corps had expanded its meteorological services to include a weather detachment at each Air Service flying field.52 As planes became more sophisticated and powerful, Army pilots attempted more ambitious undertakings. In 1924 they made their first flight around the world, assisted by weather information from the Signal Corps. At its peak the Signal Corps maintained forty-one weather stations across the country.53 The Corps also retained its photographic mission, even though it had lost responsibility for aerial photography in 1918. The branch maintained two photographic laboratories in Washington, D.C.: one for motion pictures at Washington Barracks (now Fort Lesley J. McNair), and the other at 1800 Virginia Avenue, Northwest. Among its services, the Signal Corps sold photos to the public. Its collection of still photographs included its own pictures, as well as those taken by other branches. The Corps also operated a fifty-seat motion-picture theater where films could be screened for official purposes or viewed by the public for prospective purchase.54 In 1925 the Signal Corps acquired responsibility for the Army's pictorial publicity. In this capacity it supervised and coordinated the commercial and news photographers who covered Army activities.55 Following the successful use of motion pictures during World War I, the Army increasingly relied upon them for training purposes. With the advent of sound films in the late 1920s, film production entered a new era. In 1928 the War Department made the Signal Corps responsible for the production of new training films but neglected to allocate any funds.
To obtain needed expertise, the Signal Corps called upon the commercial film industry for assistance, and in 1930 it sent its first officer to Hollywood for training sponsored by the Academy of Motion Picture Arts and Sciences.56 While photography played a relatively minor role in the Corps' overall operations, it nonetheless provided valuable documentation of the Army's activities during the interwar period. The Signal Corps underwent its first change of leadership in half a dozen years when General Squier retired on 31 December 1923. In retirement Squier continued to pursue his scientific interests. One of his better-known inventions, particularly to those who frequently ride in elevators, was Muzak. Based on his patents for "wired wireless," a system for transmitting radio signals over wires, Squier founded Muzak's parent company, Wired Radio, Inc., in 1922. He did not coin the catchy name, however, until 1934, when he combined the word music with the name of another popular item, the Kodak camera. In that year the Muzak Corporation came into being and sold its first recordings to customers in Cleveland.57 In addition to his commercial ventures, Squier received considerable professional recognition for his contributions to science, among them the Elliott Cresson Gold Medal and the Franklin Medal, both awarded by the Franklin Institute in Philadelphia. In 1919 he had become a member of the National Academy of Sciences, and he also received honors from the governments of Great Britain, France, and Italy.58 The new chief signal officer, Charles McKinley Saltzman, was a native of Iowa and an 1896 graduate of the U.S. Military Academy. As a cavalry officer, he had served in Cuba during the War with Spain. After transferring to the Signal Corps in 1901, Saltzman embarked upon a new career that included serving on the board that examined the Wrights' airplane during its trials at Fort Myer in 1908 and 1909. During World War I he remained in Washington as the executive officer for the Office of the Chief Signal Officer. Saltzman possessed considerable knowledge about radio and had attended the national and international radio conferences since 1912. With this background he seemed extremely well qualified for the job when, as the Signal Corps' senior colonel, he received the promotion to chief signal officer upon Squier's retirement. The four-year limitation placed on the tenure of branch chiefs in the 1920 defense act obliged General Saltzman to step down in January 1928.59 But retirement did not end his involvement with communications. In 1929 President Hoover appointed him to the Federal Radio Commission, and he served as its chairman from 1930 to 1932. He also played an important role in the formation of the Federal Communications Commission.60 Saltzman's successor, Brig. Gen. George S. Gibbs, also hailed from Iowa but had not attended West Point. He received both bachelor's and master's degrees in science from the University of Iowa. During the War with Spain he enlisted in the 51st Iowa Volunteer Infantry and sailed for the Philippines. There he transferred to the Volunteer Signal Corps and distinguished himself during the Battle of Manila. In 1901 he obtained a commission in the Signal Corps of the Regular Army, and several highlights of his subsequent career have already been mentioned. Immediately prior to becoming head of the branch in 1928 he was serving as signal officer of the Second Corps Area.
Under his leadership the Signal Corps entered the difficult decade of the 1930s.61 World War I had witnessed the growth and strengthening of ties between government and business, the beginnings of what President Dwight D. Eisenhower later called the military-industrial complex. But the drastic military cutbacks following victory endangered this relationship. While research became institutionalized in the commercial sector with the rise of the industrial labs, such as those of AT&T and General Electric, the Army lagged behind.62 The Signal Corps' research and development program survived the Armistice, but in reduced form. The scientists recruited for the war effort returned to their own laboratories, although some, like Robert A. Millikan, retained their reserve commissions. While the Signal Corps lacked the money to conduct large-scale research, it did continue what it considered to be the most important projects. However, as Chief Signal Officer Saltzman remarked in his 1924 annual report, "The rapid strides being made in commercial communication makes the military development of a few years ago obsolete and if the Signal Corps is to be found by the next emergency ready for production of modern communication equipment, a materially larger sum must be expended on development before the emergency arises."63 Because radio had not yet proved itself on the battlefield, wire remained the dominant mode of communication. The 1923 version of the Field Service Regulations reiterated the traditional view: "Telegraph and telephone lines constitute the basic means of signal communication. Other means of communication supplement and extend the service of the telegraph and telephone lines."64 Hence the Signal Corps devoted considerable energy to improving such familiar equipment as field wire, wire carts, the field telephone, and the storage battery. Until 1921 the Signal Corps conducted nonradio research in its electrical engineering laboratory at 1710 Pennsylvania Avenue. In that year the laboratory moved to 1800 Virginia Avenue, Northwest. The Corps also continued to support a laboratory at the Bureau of Standards, where Lt. Col. Joseph O. Mauborgne was in charge from 1923 to 1927.65 One significant advance made in wire communications during the interwar period was the teletypewriter. Although printing telegraphs had been used during World War I, they had not achieved the sophistication of the teletypewriter, which was more rapid and accurate than Morse equipment yet relatively simple to operate. Like the Beardslee telegraph of the Civil War, the teletype did not require operators trained in Morse code. On the other hand, teletype machines were heavier, used more power, and were more expensive to maintain than Morse equipment. Teletypewriters came in two general versions: page-type, resembling an ordinary typewriter, and tape-type, which printed messages on paper tape similar to ticker tape that could be torn off and pasted on sheets. By the late 1930s the Signal Corps had converted most of its administrative telegraph system from Morse to teletype. Teletype's adaptation to tactical signaling awaited, however, the development of new equipment that was portable and rugged.
After making a good showing during the Army's interwar maneuvers, such teletype machines were on their way to the field by the time the United States entered World War II.66 Although wire remained important, military and civilian scientists attained advances in radio technology that launched Army communications into the electronics age. The Signal Corps conducted radio research in its laboratories at Fort Monmouth. Here in 1924 the Signal Corps Board was organized to study questions of organization, equipment, and tactical and technical procedures. The commandant and assistant commandant of the school served as its top officers.67 A second consultative body, the Signal Corps Technical Committee, had the chief and assistant chief of the Research and Development Division as its chairman and vice chairman, respectively. Transmission by shortwaves, or higher frequency waves, enabled broadcasts to be made over greater distances using less power and at lower cost. Consequently, the Corps gradually converted most of its stations, especially those belonging to the War Department Radio Net, to shortwave operation. By 1929 direct radio communication with San Francisco had been achieved.68 Meanwhile, work continued on the loop radiotelegraph set, first devised during World War I, which became known as model SCR-77. Other ground radio sets included the SCR-131 and 132, the latter with both telegraph and telephone capabilities. Signal Corps engineers made other significant discoveries, among them a new tactical communications device, the walkie-talkie, or SCR-194 and 195. This AM (amplitude-modulated) radiotelephone transceiver (a combination transmitter and receiver) had a range of up to five miles. Weighing about twenty-five pounds, it could be used on the ground or in a vehicle or carried on a soldier's back. The Signal Corps field tested the first models in 1934, and improved versions passed the infantry and field artillery service tests in 1935 and 1936. Lack of funds prevented production until 1939, when the new devices were used successfully during the Plattsburg maneuvers. Walkie-talkies provided a portable means of battlefield communication that increased the ability of infantry to maneuver and enabled commanders to reach units that had outrun field telephone lines.69 As the Army slowly moved toward motorization and mechanization during the 1920s and 1930s, the Signal Corps also addressed the issue of mobile communications. Without radios, early tankers communicated by means of flags and hand signals. As in airplanes, a tank's internal combustion engine interfered with radio reception. The friction of a tank's treads could also generate bothersome static. With the development of FM radio by Edwin H. Armstrong, vehicular radio finally became feasible, but the Signal Corps was hesitant to adopt this revolutionary technology.70 FM eliminated noise and static interference and could transmit a wider range of sound than AM radios. When coupled with crystal control, permitting a radio to be tuned automatically and precisely with just the push of a button, rather than by the intricate twirling of dials, FM radios could easily be used in moving vehicles. Although demonstrations at Fort Knox, Kentucky, in 1939 did not conclusively prove FM's superiority over AM, the chiefs of infantry and field artillery recognized FM's potential and pushed for its adoption. The mechanized cavalry also called for the new type of sets. Nevertheless, the Signal Corps remained skeptical. 
The Corps' preference for wire over radio, the shortage of developmental funds, and the resistance to FM within the communications industry (where it would render existing AM equipment obsolete) delayed FM's widespread introduction into military communications. Meanwhile, with the Army far from being completely motorized, the Signal Corps continued working on a pack radio set for the Cavalry. Only in late 1940 did the Signal Corps begin to respond to the demands from the field for FM radios.71 When the War Department reduced the Signal Corps' communication duties in 1920, it gave the Air Service responsibility for installing, maintaining, and operating radio apparatus for its units and stations. The Signal Corps retained control, however, over aviation-related radio development. The rapid improvements being made in aircraft design necessitated equal progress in aerial radio. In its Aircraft Radio Laboratory at McCook Field, Ohio, the Signal Corps conducted both the development and testing of radios designed for the Air Corps.72 Expanding on its work during World War I, the Signal Corps made significant strides in airborne radio during the postwar period. Improvements took place in the models of the SCR-130 series. Sets were designed for each type of aircraft: observation, pursuit, and bombardment. The pursuit set (SCR-133) provided voice communication between planes at a distance of 5 miles; the observation and bombardment sets (SCRs 134 and 135) had ranges of 30 and 100 miles, respectively. The SCR-136 model provided communication between ground stations and aircraft at distances of 100 miles using telegraphy and 30 miles using telephony. Many technical problems had to be solved in developing these radios, including the interference caused by the plane's ignition system. With the installation of proper shielding, this difficulty could be overcome.73 But despite advances in aerial radio, pilots in the 1930s still relied to some extent on hand signals to direct their squadrons.74 The Signal Corps also developed radios for navigational purposes, basing its technology on work done during the war in direction finding.75 One of the most important navigational aids was the radio beacon, which enabled a plane to follow a signal to its destination. When equipped with radio compasses, which they tuned to the beacons on the ground, pilots no longer had to rely on their senses alone; they could fly "blind," guided by their instruments. This system proved itself in June 1927 when it guided two Army pilots, 1st Lts. Lester J. Maitland and Albert F. Hegenberger, on the first nonstop flight from California to Hawaii. This milestone occurred just a few weeks before Charles Lindbergh made his historic flight across the Atlantic.76 Lieutenant Hegenberger later became head of the Air Corps' Navigational Instrument Section at Wright Field, which was located in the same building as the Signal Corps' Aircraft Radio Laboratory. (McCook Field was incorporated into Wright Field in 1927.) However, the Signal Corps did not always enjoy a cordial relationship with the Air Corps regarding radio development. In fact, Hegenberger, in an attempt to take over the Signal Corps' navigational projects, went so far as to lock the Signal Corps personnel out of his portion of the building they shared.
When the Air Corps failed in its attempt to carry the mail in 1934, suffering twelve fatalities and sixty-six crashes in four months, some senior Air Corps officers tried to blame the high casualty rate on the Signal Corps for neglecting to develop the appropriate navigational aids. In fact, inexperienced pilots and inadequate training had accounted for many of the accidents. The chief signal officer at that time, Maj. Gen. James B. Allison, and Maj. Gen. Benjamin D. Foulois, chief of the Air Corps, finally agreed in 1935 to discontinue Hegenberger's laboratory.77 In August 1929 the Signal Corps consolidated its research facilities in Washington with those at Fort Monmouth, establishing the Signal Corps Laboratories there. In 1935 a modern, permanent laboratory opened there to replace the World War I-vintage buildings previously in use. The new structure was named, most fittingly, Squier Laboratory, in honor of the former chief signal officer and eminent scientist, who had passed away the previous year at the age of sixty-nine.78 Meanwhile, the Signal Corps' Aircraft Radio Laboratory remained at Wright Field because the equipment produced there required continuous flight testing.79 Probably the most significant research undertaken by the Signal Corps between the wars was that pertaining to radar, an offshoot of radio. The word radar is an acronym for radio detection and ranging.80 In brief, radar depends on the reflection of radio waves from solid objects. By sending out a focused radio pulse, which travels at a known rate (the speed of light), and timing the interval between the transmission of the wave and the reception of its reflection or echo, the distance, or range, to an object can be determined. Because the measured interval covers the pulse's round trip out and back, the range is half the interval multiplied by the speed of light (see the short sketch below). The resultant signals are displayed visually on the screen of a cathode-ray oscilloscope. During the interwar years many other nations, including Germany, Great Britain, and Japan, conducted radar experiments, but secrecy increased along with heightening world tensions. In the United States credit for the initial development of radar belonged to the Navy, which conducted its seminal experimentation at the Naval Research Laboratory in Washington during the 1920s and 1930s. While the Signal Corps did not invent radar, its subsequent efforts played an important role in furthering its evolution.81 The origins of the Army's radar research dated back to World War I, when Maj. William R. Blair, who then headed the Signal Corps' Meteorological Section in the American Expeditionary Forces, conducted experiments in sound ranging for the purpose of locating approaching enemy aircraft by the noise of their engines. After the war Blair served as chief of the meteorological section in Washington and in 1926 became head of the Research and Engineering Division. In 1930 he was named director of the laboratories at Fort Monmouth. In February 1931 Blair began research on radio detection using both heat, or infrared, waves and high-frequency waves. Known as Project 88, this undertaking had been transferred to the Signal Corps from the Ordnance Department. When these methods proved disappointing, Blair began investigating the pulse-echo method of detection.82 Contrary to its usual procedure, the Signal Corps conducted all of its developmental work on radar in its own laboratories, rather than contracting components out to private industry.
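The pulse-echo arithmetic described above is simple enough to sketch in a few lines of Python. This is a modern illustration of the principle only, not anything the Signal Corps ran; the 75-microsecond echo time is a hypothetical value chosen to match the roughly seven-mile detection of the 1936 Newark test mentioned below.

    # Pulse-echo ranging: a radio pulse travels at the speed of light, and
    # the measured interval covers the round trip, so the range is half the
    # interval multiplied by the propagation speed.
    SPEED_OF_LIGHT = 299_792_458      # meters per second
    METERS_PER_MILE = 1_609.344

    def echo_range_miles(round_trip_seconds):
        """Range to the reflecting object, in miles."""
        return 0.5 * SPEED_OF_LIGHT * round_trip_seconds / METERS_PER_MILE

    # A hypothetical echo arriving 75 microseconds after transmission:
    print(round(echo_range_miles(75e-6), 1))   # prints 7.0, about seven miles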
Chief Signal Officer Allison did not believe that commercial firms could yet "offer useful results in practical form."83 Although Allison requested additional money for radar research, the War Department provided none, and the Signal Corps obtained the necessary funds from cutbacks in other projects. In December 1936 Signal Corps engineers conducted the first field test of their radar equipment at the Newark, New Jersey, airport, where it detected an airplane at a distance of seven miles. In May 1937 the Signal Corps demonstrated its still crude radar, the future SCR-268, for Secretary of War Harry H. Woodring; Brig. Gen. Henry H. Arnold, assistant chief of the Air Corps; and other government officials at Fort Monmouth.84 Impressed by its potential, Woodring later wrote to Allison: "It gave tangible evidence of the amazing scientific advances made by the Signal Corps in the development of technical equipment."85 Arnold, also responding favorably, urged the Signal Corps to develop a long-range version for use as an early warning device. With this high-level support, the Signal Corps received the funds it needed to continue its development program.86 The Corps' application of radar to coast defense was an extension of its longstanding work in the development of electrical systems for that purpose, which had begun in the 1890s. Because national policy remained one of isolationism, American military planners envisioned any future war as defensive. Consequently, the Army placed great reliance upon warning systems to protect against surprise attack by sea and especially by air. Hence the Signal Corps developed the SCR-268, a short-range radar set designed to control searchlights and antiaircraft guns, and subsequently designed for the Air Corps two sets for long-range aircraft detection: SCR-270, a mobile set with a range of 120 miles, and SCR-271, a fixed radar with similar capabilities.87 In an interesting historical parallel, the Signal Corps carried out its radar testing at the same locations (Sandy Hook and the Highlands at Navesink, New Jersey) where Assistant Surgeon Albert J. Myer had tested his wigwag signals with 2d Lt. Edward P. Alexander prior to the Civil War. While Myer had favored these sites for their proximity to New York Harbor, the later generation of experimenters found them convenient to Fort Monmouth. Here and elsewhere the Signal Corps was bringing the Army into the electronics age.88 While the cost of technology steadily rose, the amount of money the nation was willing to spend on its Army tended to decline during the early 1930s, as the country plunged into the Great Depression that followed the stock market crash of October 1929. Two veteran Signal Corps officers led the branch during this difficult period: General Gibbs and his successor, Maj. Gen. Irving J. Carr. Gibbs, who remained at the helm until 30 June 1931, counted among his major achievements the consolidation of the Corps' laboratories and a reorganization and restructuring of the Signal Office that endured until World War II.89 Upon retirement he became an executive with several communications firms, an indication of the increasingly close relationship between the military and industry, based in part on the growing similarity of military and civilian technology.90 General Carr, who received a degree in civil engineering from the Pennsylvania Military College in 1897, had served as an infantry lieutenant during the Philippine Insurrection.
Graduating from the Army Signal School in 1908, he was detailed to the Signal Corps during World War I. Carr served in France successively as chief signal officer of the 2d Division, the IV Army Corps, and the Third Army. In addition to attending the General Staff School and the Army War College after the war, he served as signal officer of the Western Department and as chief of staff of the Hawaiian Division. At the time of his appointment as chief signal officer, Carr held the position of executive officer in the Office of the Assistant Secretary of War.91 General Carr faced a situation that had been transformed by the economic crisis. While Americans stood in breadlines, the Army, already experiencing hard times because of national pacifism and war-weariness, felt the added impact of the Great Depression. In the midst of this national tragedy, military preparedness took a backseat to social and economic concerns. Chief of Staff General Douglas MacArthur did nothing to improve the Army's image by dispersing with unnecessary brutality the so-called Bonus Army of World War I veterans who marched on Washington in the summer of 1932. This violent incident may also have contributed to President Herbert Hoover's defeat by Franklin D. Roosevelt in the presidential election that fall. Despite its lack of funds, the Army sought new roles to assist the nation through its time of economic distress. Its contribution to the organization of the Civilian Conservation Corps (CCC), established as part of President Roosevelt's New Deal in April 1933, proved popular but a drain on its limited resources. The CCC's activities included reforestation, soil conservation, fire prevention, and similar projects. The Army set up and ran the camps and supplied food, clothing, shelter, medical care, and recreation. For its part, the Signal Corps provided radio communication and linked radio stations at CCC district headquarters with the War Department Radio Net. Members of the Army Amateur Radio System participated in this effort. The Signal Corps also helped to advertise this least partisan of New Deal ventures, completing a three-reel historical film about the CCC in 1935.92 The Second International Polar Year was held from 1932 to 1934, fifty years after the original event. Financial support from the Rockefeller Foundation helped make this effort possible in the midst of the worldwide depression. While Arctic studies remained the focus, more countries participated and more branches of science were included than before. Although the Signal Corps did not play as prominent a role as in the 1880s, it nonetheless lent its expertise to the scientists involved in polar research. The Corps established communication facilities for the Army's station near the Arctic Circle and supplied equipment for studying problems of radio transmission.93 With General Carr's retirement, Maj. Gen. James B. Allison became chief signal officer on 1 January 1935. Allison had gained extensive experience in signal training during the years 1917-1919, when he commanded Signal Corps training camps at Monterey, California; Fort Leavenworth; and Camp Meade, Maryland. From September 1925 to June 1926 he served as commandant of the Signal School. Prior to becoming chief, he had been signal officer of the Second Corps Area at Governors Island, New York. Allison was fortunate to assume his new duties during the same year that the Army acquired a new chief of staff, General Malin Craig, who recognized the value of communications.
Craig, concerned about the threatening world situation in both the Far East and Europe, pressed for a limited rearmament. He also supported increases in the Signal Corps' budget that finally ended its years of impoverishment.94 The growing danger of war, the demands for improved technology, and even the Great Depression itself improved the Signal Corps' prospects. The turnover rate of its enlisted personnel dropped as joblessness increased in civilian life. When Congress enlarged the size of the Army in 1935, the Signal Corps received an additional 953 enlisted men, enabling the Corps to handle the growing demands on its services caused by the public works programs of the New Deal and the expanding activities of the Air Corps.95 The Corps also held onto one of its traditional activities, WAMCATS, in the face of renewed demands that the government sell the Alaska system because of its predominantly commercial nature. It was also argued that the release of the more than two hundred enlisted men assigned to duty in Alaska would help ease the Corps' overall personnel shortage. But Chief Signal Officer Gibbs had opposed the sale, and Congress did not act upon the War Department's enabling legislation. While the long-standing debate continued as to whether to transfer the system to another agency or turn it over to commercial interests, WAMCATS remained in the Signal Corps' hands.96 Under the Corps' stewardship the system continued to develop. By 1931 radio had overtaken the use of cables, but the underwater lines were kept in operable condition in case of emergencies. In 1933 the Army transferred the Dellwood, now left with little to do, to the U.S. Shipping Board, which in turn sold it to a commercial cannery.97 To reflect its new image, the WAMCATS underwent a name change in 1936, becoming the Alaska Communication System (ACS).98 WAMCATS continued to render important service to Alaskans, proving itself to be a "lifeline to the north." In 1934, when much of Nome went up in flames, the city's WAMCATS station stayed on the air to coordinate relief and rescue work. WAMCATS also played a key role in the drama surrounding the plane crash that killed humorist Will Rogers and aviator Wiley Post near Point Barrow in August 1935. Sgt. Stanley R. Morgan, the Signal Corps radio operator there, learned of the accident from a native runner. After summoning help, Morgan traveled to the crash site to do what he could. Unfortunately, both men had died instantly. Returning to his station, Morgan signaled news of the tragedy to the world.99 The Signal Corps' photographic mission continued to expand during the 1930s. Photographic training was briefly transferred to the Army War College, but soon returned to Fort Monmouth. In 1933 the Corps produced its first feature-length sound movie, depicting infantry maneuvers at Fort Benning, Georgia. The Corps also released several new training films, including such action-packed features as "Cavalry Crossing Unfordable Stream" and "Elementary Principles of the Recoil Mechanism." The shortage of funds, however, prevented the Signal Corps from making many films prior to World War II. The Corps did work diligently to index and reedit its World War I films, making master copies and providing better storage facilities for these priceless records.100 Despite many difficulties, the Signal Corps' operations increased overall during the 1930s. But it lost one function, military meteorology. 
As the decade progressed, the branch simply could not keep up with the demands made on its weather service by the Air Corps. Following the airmail fiasco, the Air Corps sought to upgrade operations at some stations to provide weather service around the clock and throughout the year. With its limited manpower and varied missions, the Signal Corps found the task beyond its capability. In his 1936 annual report Chief Signal Officer Allison recommended that "if the required additional personnel could not be given [to] the Signal Corps, all meteorological duties ... be transferred to the Air Corps which is the principal user of the meteorological service."101 The secretary of war agreed, and returned weather reporting and forecasting to the using arms effective 1 July 1937. As a result, many of the Signal Corps' meteorologists transferred to the Air Corps. Although the Signal Corps retained responsibility for the development, procurement, supply, and maintenance of meteorological equipment, the sun had set once more on its weather service.102 Upon General Allison's retirement at the end of September 1937, Col. Joseph O. Mauborgne was designated to become the new chief signal officer, effective on 1 October. Originally commissioned as a second lieutenant of infantry in 1903, he had served with the Signal Corps since 1916 and transferred to the branch in 1920. A well-known expert in radio and cryptanalysis, Mauborgne had been chief of the Corps' Research and Engineering Division during World War I. His postwar assignments included heading the Signal Corps Laboratory at the Bureau of Standards and commanding, for a second time, the Research and Engineering Division in the Signal Office. He also served as a technical adviser at several international communications conferences, including the radio conference held in Washington in 1927. After becoming a colonel in 1934, he was the director of the Aircraft Radio Laboratory from 1936 to 1937. In addition to his scientific expertise, Mauborgne possessed considerable artistic talent as a portrait painter, etcher, and maker of prize-winning violins.103 Among its many duties, the Signal Corps held responsibility for revising and compiling all codes and ciphers used by the War Department and the Army. Under General Mauborgne, himself a gifted cryptologist, activities in this area expanded. In 1929 General Gibbs had established the Signal Intelligence Service to control all Army cryptology. In addition to code and cipher work, the Signal Intelligence Service absorbed the covert intelligence-gathering activities formerly conducted by the so-called Black Chamber within the Military Intelligence Division of the War Department General Staff. William F. Friedman became the Signal Intelligence Service's first chief. After serving in the intelligence section of the General Staff, AEF, during World War I, Friedman had joined the Signal Corps in 1921 to develop new codes and ciphers. In 1922 he became chief cryptanalyst in the code and cipher compilation section of the Research and Development Division, where he gained renown for his remarkable code-breaking abilities. In addition to cryptographic skills, Friedman shared Mauborgne's interest in the violin and formed a musical group that included the chief signal officer and several friends.104 In 1935 the Army reinstituted its program of large-scale maneuvers, which it had not held since before World War I.
The 51st Signal Battalion, the only unit of its type, provided the communications for these exercises. In 1937 the Army tested its new "triangular" (three-regiment) division at San Antonio, Texas. This streamlined unit, reduced from four regiments and without any brigade headquarters, had been favored by Pershing in 1919. Providing more mobility and flexibility than the square division of World War I, the triangular division would become the standard division of the next war. While the divisional signal company was somewhat larger (7 officers and 182 men) than that provided for in the 1920 tables of organization, the signal complement of the combat arms was cut in half.105 Thus, helped by a variety of factors, the Signal Corps weathered the years of political isolationism and economic depression. As a technical service, it benefited from the rapid developments in communications technology pioneered by civilian industry and from the growing realization among military and civilian leaders alike that science would be a crucial factor in any future conflict. Unfortunately, that future was closer than many Americans liked to think. Throughout the 1930s the world situation had grown increasingly ominous. Adolf Hitler came to power in Germany in 1933 and, denouncing the Versailles treaty, undertook a program of rearmament. Italy's dictator, Benito Mussolini, began a course of aggression by attacking Ethiopia in 1935. In 1939 Hitler signed a treaty with the Soviet dictator, Joseph Stalin, and invaded Poland, precipitating a general war in Europe. Across the Pacific, Japan unleashed its power, seizing Manchuria in 1931 and invading China in 1937. Finally, the formation of the Rome-Berlin-Tokyo Axis in September 1940 appeared to unite three heavily armed and aggressive nations against the ill-armed democracies. After years of stagnation, the United States began a gradual military buildup in the late 1930s. President Roosevelt, who had once served as assistant secretary of the Navy, at first championed only a naval rebuilding program, but the Army eventually began to receive greater attention. In his annual message of January 1938, Roosevelt requested an Army budget of $17 million, a substantial sum but considerably less than the Navy's allotment of $28 million.106 Having learned some hard lessons from its unpreparedness for World War I, the War Department devoted considerable attention during the interwar period to planning for future wars. Responsibility for strategic planning rested with the War Plans Division of the General Staff, while the 1920 defense act assigned supervision of procurement and industrial mobilization planning to the assistant secretary of war.107 Despite the power wielded by the General Staff, considerable administrative control still existed at the branch level. For its part, the Signal Corps contained a procurement planning section, which prepared estimates of requirements, conducted surveys of manufacturers, and identified scarce raw materials, such as the Brazilian quartz used in radios.108 The Army's Industrial Mobilization Plan of 1930 established procedures for harnessing the nation's economic might, while the Protective Mobilization Plan of 1937 set forth the steps for manpower mobilization, beginning with the induction of the National Guard. These plans failed, however, to envision a conflict on a scale larger than World War I.
For instance, estimates placed the Signal Corps' monthly requirement for batteries during wartime at five million; the actual number later proved to be more than four times that amount.109 With the outbreak of war in Europe, the United States undertook a limited preparedness effort with the emphasis on hemispheric defense. President Roosevelt declared a "limited national emergency" on 8 September 1939 and authorized an increase in the Regular Army's enlisted strength to 227,000.110 Public opinion, however, remained committed to staying out of war and protecting "America First." The blitzkrieg tactics of the Nazis in Poland suggested that this war would be a mobile one, unlike the stalemate of the Western Front during World War I. By 1939 the United States Army had undergone extensive motorization, although mechanization remained in its early stages. For the Signal Corps, motorization meant developing light automobiles equipped with radios as reconnaissance vehicles and adapting motor vehicles to lay wire.111 But little had been done to integrate communications into larger combined-arms mobile formations. During the spring of 1940 the Army held its first genuine corps and army training maneuvers. The exercises, conducted in May 1940 along the Texas and Louisiana border, "tested tactical communications more thoroughly than anything else had since World War I."112 Unfortunately, much Signal Corps equipment proved deficient. The W-110 field wire, for instance, worked poorly when wet and suffered considerable damage from motor vehicles. (Local cattle also liked to chew contentedly upon it.) Moreover, the SCR-197, designed to serve as a long-range mobile radio, could not function while in motion. Intended for operation from the back of a truck, the radio could only send or receive messages after the vehicle had stopped. First, however, the crew had to dismount to deploy the antenna and start the gasoline generator. The allocation of frequencies also became a problem with the proliferation of radios throughout the Army's new triangular divisions. In part, the frequency issue arose because the radios in use were obsolescent. They did not reflect the most recent innovations (crystal control and FM) that would both increase the range of available frequencies and enable operators to make precise adjustments to particular frequencies with just the push of a button. Until the Army adopted improved radios, it could not fight a modern war successfully. Moreover, in addition to highlighting the general inadequacy of tactical communications, the 1940 maneuvers demonstrated that the Signal Corps needed additional men and units to carry out its mission.113 Although technically a neutral nation, the United States gradually began to prepare for the possibility of entering the war and increased its support to the Allies. On 10 May 1940 Germany invaded France and the Low Countries. The subsequent defeat of the Allied armies, followed by the narrow escape of the British expeditionary force from Dunkirk and the fall of France in June 1940, brought Allied fortunes to the brink of disaster. At the end of August Congress authorized the president to induct the National Guard into service for a year and to call up the Organized Reserves. Furthermore, the Selective Service and Training Act, signed into law on 16 September 1940, initiated the first peacetime draft in the nation's history.
While the United States was not yet ready to become a direct participant, the signing of the Lend-Lease Act in March 1941 officially made it the world's "arsenal of democracy."114 While the nation moved toward war, the Signal Corps underwent some changes of its own. The pressure of the impending conflict resulted in enormous demands for new communications equipment. The Air Corps, in particular, grew increasingly impatient with the slow pace of progress, especially in relation to radar. Under intense criticism from the airmen, Chief Signal Officer Mauborgne was suddenly relieved of his duties by Chief of Staff General George C. Marshall, Jr., in August 1941. Pending Mauborgne's official retirement the following month, Brig. Gen. Dawson Olmstead stepped in as acting chief.115 On 24 October 1941, Olmstead officially became chief signal officer with the rank of major general, the fifteenth individual to hold that post. A graduate of West Point, class of 1906, Olmstead had received his commission in the Cavalry. During 1908 and 1909 he had attended the Signal School at Fort Leavenworth. After World War I, during which he had served in the Inspector General's Office of the AEF, he held a number of Signal Corps-related assignments. These included signal officer of the Hawaiian Department from 1925 to 1927, officer in charge of the Alaska communication system from 1931 to 1933, and commandant of the Signal School at Fort Monmouth from 1938 to 1941.116 For the new chief signal officer, as for the nation, war was now close at hand. Despite outstanding work by the Signal Intelligence Service, now comprising almost three hundred soldiers and civilians, the exact point of danger eluded American leaders. In August 1940 William Friedman and his staff had broken PURPLE, the Japanese diplomatic code, and the intelligence received as a consequence became known as MAGIC.117 While MAGIC yielded critical information regarding Japanese diplomatic strategy, the intercepted messages did not explicitly reveal Japanese war plans.118 American officials knew that war was imminent, but considered a Japanese attack on Hawaii no more than a remote possibility. During 1940 President Roosevelt had transferred the Pacific Fleet from bases on the West Coast of the United States to Pearl Harbor on the Hawaiian island of Oahu, hoping that its presence might act as a deterrent upon Japanese ambitions. Yet the move also made the fleet more vulnerable. Despite Oahu's strategic importance, the air warning system on the island had not become fully operational by December 1941. The Signal Corps had provided SCR-270 and 271 radar sets earlier in the year, but the construction of fixed sites had been delayed, and radar protection was limited to six mobile stations operating on a part-time basis to test the equipment and train the crews. Though aware of the dangers of war, the Army and Navy commanders on Oahu, Lt. Gen. Walter C. Short and Admiral Husband E. Kimmel, did not anticipate that Pearl Harbor would be the target; a Japanese strike against American bases in the Philippines appeared more probable. In Hawaii, sabotage and subversive acts by Japanese inhabitants seemed to pose more immediate threats, and precautions were taken.
The Japanese-American population of Hawaii proved, however, to be overwhelmingly loyal to the United States.119 Because the Signal Corps' plans to modernize its strategic communications during the previous decade had been stymied, the Army had only a limited ability to communicate with the garrison in Hawaii. In 1930 the Corps had moved WAR's transmitter to Fort Myer, Virginia, and had constructed a building to house its new, high-frequency equipment. Four years later it added a new diamond antenna, which enabled faster transmission.120 But in 1939, when the Corps wished to further expand its facilities at Fort Myer to include a rhombic antenna for point-to-point communication with Seattle, it ran into difficulty. The post commander, Col. George S. Patton, Jr., objected to the Signal Corps' plans. The new antenna would encroach upon the turf he used as a polo field, and the radio towers would obstruct the view. Patton held his ground and prevented the Signal Corps from installing the new equipment. At the same time, the Navy was about to abandon its Arlington radio station located adjacent to Fort Myer and offered it to the Army. Patton, wishing instead to use the Navy's buildings to house his enlisted personnel, opposed the station's transfer. As a result of the controversy, the Navy withdrew its offer and the Signal Corps lost the opportunity to improve its facilities.121 Though a seemingly minor bureaucratic battle, the situation had serious consequences two years later. Early in the afternoon of 6 December 1941, the Signal Intelligence Service began receiving a long dispatch in fourteen parts from Tokyo addressed to the Japanese embassy in Washington. The Japanese deliberately delayed sending the final portion of the message until the next day; in it they announced that the Japanese government would sever diplomatic relations with the United States effective at one o'clock that afternoon. At that hour, it would be early morning in Pearl Harbor. Upon receiving the decoded message on the morning of 7 December, Chief of Staff Marshall recognized its importance. Although he could have called Short directly, Marshall did not do so because the scrambler telephone was not considered secure. Instead, he decided to send a written message through the War Department Message Center. Unfortunately, the center's radio encountered heavy static and could not get through to Honolulu. Expanded facilities at Fort Myer could perhaps have eliminated this problem. The signal officer on duty, Lt. Col. Edward F. French, therefore sent the message via commercial telegraph to San Francisco, where it was relayed by radio to the RCA office in Honolulu. That office had installed a teletype connection with Fort Shafter, but the teletypewriter was not yet functional. An RCA messenger was carrying the news to Fort Shafter by motorcycle when Japanese bombs began falling; a huge traffic jam developed because of the attack, and General Short did not receive the message until that afternoon. Earlier that day, as the sun rose over Opana on the northern tip of Oahu, two Signal Corpsmen, Pvts. George A. Elliott and Joseph L. Lockard, continued to operate their radar station, although their watch had ended at 0700. At 0702 a large echo appeared on their scope, indicating a sizable formation of incoming planes about 130 miles away. They telephoned their unusual sighting to the radar information center at Fort Shafter, but the young Air Corps lieutenant on duty told them to "Forget it."
An attack was not expected, and the planes were assumed to be American bombers scheduled to arrive that morning from California. Nevertheless, Elliott and Lockard tracked the planes until they became lost on their scope. Just minutes before the attack began at 0755, the two men left their station for breakfast.122 Despite the breaking of PURPLE, the surprise at Pearl Harbor was "complete and shattering."123 The following day President Roosevelt went before Congress to ask for a declaration of war against Japan. In an eloquent speech, he called 7 December "a date which will live in infamy," and the House and Senate voted for war with only one dissenter.124 On 11 December, Germany and Italy declared war on the United States, and Congress replied in kind. Despite Woodrow Wilson's lofty intentions, World War I had not made the world safe for democracy; with Hitler's armies supreme in Europe and Japanese forces sweeping through the Far East, freedom appeared to be in greater peril than in 1917. In just twenty years the hopes for a lasting peace had vanished, and once again the United States prepared to throw its might on the side of the Allies. Angered by the bombing of Pearl Harbor, the American people entered World War II with a strong sense of mission and purpose. At the same time that Japanese war planes shattered the Pacific Fleet, they also destroyed the American sense of invulnerability; the nation's ocean bulwark had been breached. Nevertheless, displaying his characteristic optimism, President Roosevelt proclaimed on 9 December: "With confidence in our armed forces, with unbounded determination of our people, we will gain the inevitable triumph."125 In this triumph, the Signal Corps would play a pivotal role.
http://www.history.army.mil/books/30-17/S_6.htm
The problem addressed by trigonometry is that of describing the relations between angles and side lengths in a triangle. If two line segments, L and M, end at a point P they determine two angles at that point. One is the angle clockwise from L to M and the other the angle counterclockwise from L to M. The first question to consider is: how do we measure such angles? That is, how do we assign a number to either of these angles? There are several different ways to measure angle, and trigonometry is concerned with the relations between them. The first way is the additive way: that is, we define angles at P so that if you put one angle right next to the other, so that one is between L and M and the other between M and N, then the angle between L and N is the sum of the angles between L and M and between M and N. (This is a good place for a picture.) If we define angles this additive way, the only thing left to do is to define the unit of angle. All other angles follow by additivity from this unit value. Naturally we have two different standard ways to define the unit of angle: historically, the angle that goes all the way around, from L to L the long way, was called 360 degrees. Why 360? I have no idea. Another way is to measure the angle between L and M at P by the distance along a unit circle centered at P between the two line segments. This distance is the standard measure of angle in units called radians. Since the circumference of a unit circle is 2π, we can identify an angle of 360 degrees as one of 2π radians. Thus a "straight line" angle, for which L and M are in opposite directions, is half of this, or 180 degrees, or π radians. A right angle is half of a straight line angle, and is therefore 90 degrees or π/2 radians. The small angle between the x-axis and the "main diagonal" (x = y) is half of this, or 45 degrees or π/4 radians. These definitions are all well and good, but distance around a circle is a bit awkward to measure, and there is an alternative way to describe angles. It is not additive, but much easier to measure in practice. It is called the sine of the angle, denoted as sin, and is 0 for the 0 angle and 1 for a right angle. Suppose we are interested in the angle counterclockwise from M to L at point P. Imagine that we draw a unit circle around P and that L and M meet this circle at points A and B respectively. The sine of this angle is then defined to be the distance from the point A to the line M. The sine is the fundamental entity of trigonometry. You can measure it by drawing a line from A to M perpendicular to M and measuring its length. There is one tiny complication: if the angle is bigger than a straight line angle, so that the angle counterclockwise from M to L is smaller than that clockwise, we define the sine of it to be negative. (A picture here would help.)
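To make the degree/radian correspondence and the unit-circle definition of the sine concrete, here is a small Python sketch (my own illustration, not part of the original text; the function name is invented for the example):

    import math

    def degrees_to_radians(deg):
        # 360 degrees correspond to 2*pi radians
        return deg * 2 * math.pi / 360

    # Take M to be the x-axis. The point A where L meets the unit circle
    # is (cos t, sin t); the sine is the distance from A to M, i.e. the
    # y-coordinate of A.
    for deg in (0, 45, 90, 180):
        t = degrees_to_radians(deg)
        y = math.sin(t)
        print(deg, "degrees =", round(t, 4), "radians; sine =", round(y, 4))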
The people who worked on such things long ago made lots of other definitions, which made perfect sense to them, but cluttered up the subject with lots of definitions, the memorization of which has hindered students of it ever since. Why did they do that? These definitions refer to obvious geometric quantities that they considered worth defining. Well, they drew a line segment from the point A tangent to the unit circle around P to the line M. The length of this segment they called the tangent of the angle from M to L. (When the line has a positive slope the tangent is taken to be negative.) The length of the line segment from P along M to the intersection of M with that tangent line at A they called the secant of the angle from M to L. (The secant is negative when this line runs away from M at P.) They also defined the complement of an angle that is less than a right angle to be the difference between a right angle and that angle. This got them to define the cosine, cotangent and cosecant as the sine, tangent and secant of the complement of the original angle. Fortunately for us, all of these six functions are easily related to the sine function, which means that we need only really become familiar with the sine, and we can then figure out what the others will do. Notice that the sine, by these definitions, changes sign when we interchange L and M, while the cosine stays the same. Here are the relations between these functions, all of which follow from the definitions and the fact that corresponding angles of similar triangles are equal:

cos θ = sin(π/2 − θ), sin θ = cos(π/2 − θ)
tan θ = sin θ / cos θ, cot θ = cos θ / sin θ
sec θ = 1 / cos θ, csc θ = 1 / sin θ

1. Draw a relevant picture for an angle that is less than π/2, showing all of these entities. Include the line perpendicular to M at P, and its intersection with the tangent to the unit circle at A.
2. Identify which triangles are similar to one another. Remember that the two angles other than a right angle in a triangle with a right angle are complementary.
3. Deduce all the claims above by using the similar triangles argument mentioned above.

There are three basic theorems of trigonometry that you should know. Then there are the "addition theorems" of trigonometry and the relation between the sine and the exponential function, and you know all you should know about the subject. OK, what are the basic theorems?

1. The Pythagorean Theorem: This famous result states that the square of the hypotenuse of a right triangle is the sum of the squares of its other two sides. Translated to our definitions it says that for any angle θ we have

sin²θ + cos²θ = 1,

which implies that, up to sign, we have

cos θ = √(1 − sin²θ).

2. The Law of Sines: This states that in any triangle ABC the ratio of the sines of its angle at A to its angle at B is the ratio of the lengths of the side opposite A to the side opposite B. If we describe these lengths as l(BC) and l(AC) respectively, we have

sin A / sin B = l(BC) / l(AC).

This statement follows if we drop a perpendicular from C to the line AB and relate the length of that perpendicular in terms of the angle B and the length of BC and also in terms of the angle A and the length of AC.

3. The Law of Cosines: This statement gives the length of the side BC of a triangle in terms of the lengths of AB and AC and its angle at A:

l(BC)² = l(AB)² + l(AC)² − 2 l(AB)·l(AC)·cos A

This result can be deduced in several ways. One way is by dropping a perpendicular from B to AC, meeting the latter line at Q. We then have

l(AQ) = l(AB)·cos A,
l(AC) = l(AQ) + l(QC),
l(AQ)² + l(BQ)² = l(AB)²,
l(CQ)² + l(BQ)² = l(BC)².

Appropriate substitution among these equations yields the stated law.
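As a quick numerical sanity check (again my addition, not the author's), one can test the Law of Sines and the Law of Cosines on an arbitrary triangle:

    import math

    # An arbitrary triangle in the plane.
    A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 2.5)

    def dist(P, Q):
        return math.hypot(P[0] - Q[0], P[1] - Q[1])

    def angle(P, Q, R):
        # Interior angle at P of triangle PQR, via the dot product.
        v = (Q[0] - P[0], Q[1] - P[1])
        w = (R[0] - P[0], R[1] - P[1])
        return math.acos((v[0]*w[0] + v[1]*w[1]) / (dist(P, Q) * dist(P, R)))

    a, b = angle(A, B, C), angle(B, A, C)   # angles at A and at B
    # Law of Sines: sin A / sin B should equal l(BC) / l(AC).
    print(math.sin(a) / math.sin(b), dist(B, C) / dist(A, C))
    # Law of Cosines: l(BC)^2 = l(AB)^2 + l(AC)^2 - 2 l(AB) l(AC) cos A.
    print(dist(B, C)**2,
          dist(A, B)**2 + dist(A, C)**2
          - 2 * dist(A, B) * dist(A, C) * math.cos(a))

Both printed pairs agree, as the theorems predict.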
What in the world is an addition theorem? We have already noticed that the standard measure of angle, in terms of degrees or radians, is additive: this measure of the sum of two angles is the sum of the same measures of each summand. This statement is not true for sines or cosines. The sine of the sum of two angles is not the sum of their sines. The addition theorems tell us how to compute the sine and cosine of the sum of two angles in terms of the sines and cosines of the two angles that are summed. And what are they?

Addition Theorems: Suppose we have two adjacent angles, of sizes θ and φ. Then the sine and cosine of their sum, θ + φ, are given by

sin(θ + φ) = sin θ · cos φ + cos θ · sin φ
cos(θ + φ) = cos θ · cos φ − sin θ · sin φ

And where do these claims come from? Perhaps the easiest proof comes from the next discussion: the relation between sines and cosines and the exponential function. They follow quickly from that relation, on substitution, given the fundamental properties of the exponential function.

4. Write the corresponding results for the sines and cosines of θ − φ, and also for the sines and cosines of 2θ.
5. Combine the last of these with the Pythagorean theorem to get expressions for sin²θ and for cos²θ in terms of cos 2θ.

Given any function, f, we can define two others that are symmetric and antisymmetric under reflection about the origin, as follows:

g(x) = (f(x) + f(−x)) / 2
h(x) = (f(x) − f(−x)) / 2

In words, g(x) is the average of f(x) and f(−x), while h(x) is the average of f(x) and −f(−x). We can deduce from these definitions that g is symmetric and h antisymmetric (which means it changes sign under this reflection). We may also notice f = g + h. In light of these facts, we can call g the symmetric part of f and h the antisymmetric part of f, under reflection about the origin. The symmetric part of the exponential function, e^x, is called cosh x, while the antisymmetric part is called sinh x. These are called the hyperbolic cosine and sine respectively, and you may have noticed corresponding buttons on your calculator. If we consider the exponential function of an imaginary argument, e^(ix), we find that the symmetric part is real, while the antisymmetric part is imaginary. In fact the symmetric part of e^(ix) is cos x, and the antisymmetric part of e^(ix) is i·sin x. These facts, and the fundamental property of exponents, e^A·e^B = e^(A+B), which is an addition theorem for exponents, provide for straightforward deduction of the addition theorems for sines and cosines. The formal statements of the relation between sines and cosines and exponentials are as follows:

e^(ix) = cos x + i·sin x
cos x = (e^(ix) + e^(−ix)) / 2
sin x = (e^(ix) − e^(−ix)) / (2i)

Exercise 6: The exponential function, being its own derivative, can be written as an infinite series as follows:

e^x = 1 + x + x²/2! + x³/3! + …

Write out the first three terms of the series for sin x and for cos x implied by the relations just above and this statement.
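For readers who want to experiment, a short Python sketch (my addition) checks the exponential relations numerically and compares the three-term series of Exercise 6 against the built-in functions:

    import cmath, math

    x = 0.7
    z = cmath.exp(1j * x)              # e^(ix)
    print(z.real, math.cos(x))         # symmetric part of e^(ix) is cos x
    print(z.imag, math.sin(x))         # antisymmetric part is i sin x

    # First three terms of the series implied by e^x = 1 + x + x^2/2! + ...
    sin_approx = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)
    cos_approx = 1 - x**2 / math.factorial(2) + x**4 / math.factorial(4)
    print(sin_approx, math.sin(x))
    print(cos_approx, math.cos(x))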
http://www-math.mit.edu/~djk/calculus_beginners/chapter07/complement01.html
Here is a rundown of geometry facts you might need to know about circles for the GMAT.

The Basic Terminology

A circle is the set of all points equidistant from a fixed point. In other words, the circle is only the curved round edge, not the middle filled-in part. A point on the edge is "on the circle", but a point in the middle part is "in the circle" or "inside the circle." In the diagram below, point A is on the circle, but point B is in the circle. By far, the most important point in the circle is the center of the circle, the point equidistant from all points on the circle. Any line segment that has both endpoints on the circle is a chord. By the way, the word "chord" in this geometric sense is actually related to "chord" in the musical sense: the link goes back to Mr. Pythagoras (c. 570 – c. 495 BCE), who was fascinated with the mathematics of musical harmony. If the chord passes through the center, this chord is called a diameter. The diameter is a chord. A diameter is the longest possible chord. A diameter is the only chord that includes the center of the circle. The diameter is an important length associated with a circle, because it tells you the maximum length across the circle in any direction. An even more important length is the radius. A radius is any line segment with one endpoint at the center and the other on the circle. As is probably clear visually, the radius is exactly half the diameter, because a diameter can be divided into two radii. The radius is crucially important, because if you know the radius, it's easy to calculate not only the diameter, but also the other two important quantities associated with a circle: the circumference and the area. The circumference is the length of the circle itself. This is a curve, so you would have to imagine cutting the circle and laying it flat against a ruler. As it turns out, there is a magical constant that relates the diameter (d) & radius (r) to the circumference. Of course, that magical constant is π. From the very definition of π itself, here are two equations for the circumference, c:

c = πd
c = 2πr

If you remember the second, more common form, you don't need to know the first. The number π is slightly larger than 3; this means that three pieces of string, each as long as the diameter, together would not be quite long enough to make it all the way around the circle. The number π can be approximated by 3.14 or by the fraction 22/7. Technically, π is an irrational number whose decimal expansion goes on forever in a non-repeating pattern. These two formulas follow from the definition of π, so basically every culture on earth figured these out. By contrast, the area of a circle was discovered by one brilliant mathematician, and everyone on earth has this one man to thank for his formula for the area of a circle. That man was Archimedes (c. 287 – c. 212 BCE). Here is Archimedes' amazing formula:

A = πr²

This is another formula you need to know cold on test day. In the next post in this series, I'll discuss circles and angles. Here are some practice problems.

1) Given that a "12-inch pizza" means a circular pizza with a diameter of 12 inches, changing from an 8-inch pizza to a 12-inch pizza gives you approximately what percent increase in the total amount of pizza?

2) What is the diameter of circle Q?
Statement #1: the circumference of Q is .
Statement #2: the area of Q is .

Practice Problem Solutions

1) The 8-inch pizza has a radius of r = 4, so the area is 16π. That area is how much pizza you get.
The 12-inch pizza has a radius of r = 6 and an area of 36π. When you change from 16 to 36, what is the percentage change? Well, that's more than double, so it must be a percent greater than 100%; in fact, (36 − 16)/16 = 125%. The only answer choice greater than 100% is answer E. 2) Statement #1: if you know the circumference, then you can use c = 2πr to solve for the radius, or c = πd to solve for the diameter. Either way, you can find the diameter, so this statement by itself is sufficient. Statement #2: if you know the area, you can use A = πr² to find the radius, and then double that to get the diameter. This statement by itself is also sufficient. Both statements alone are sufficient. Answer = D.
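The arithmetic behind problem 1 can be checked in a few lines of Python (my own sketch; the answer choices themselves do not appear in the text above):

    import math

    def pizza_area(diameter_inches):
        r = diameter_inches / 2
        return math.pi * r**2

    small, large = pizza_area(8), pizza_area(12)   # 16*pi and 36*pi
    print((large - small) / small * 100)           # 125.0 percent increase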
http://magoosh.com/gmat/2012/an-introduction-to-circles-on-the-gmat/
Addition is a mathematical operation that represents the total amount of objects together in a collection. It is signified by the plus sign (+). For example, in the picture on the right, there are 3 + 2 apples—meaning three apples and two apples together, which is a total of 5 apples. Therefore, 3 + 2 = 5. Besides counting fruits, addition can also represent combining other physical and abstract quantities using different kinds of objects: negative numbers, fractions, irrational numbers, vectors, decimals, functions, matrices and more.

Addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which the additions are performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra.

Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.

Notation and terminology

- 1 + 1 = 2 (verbally, "one plus one equals two")
- 2 + 2 = 4 (verbally, "two plus two equals four")
- 3 + 3 = 6 (verbally, "three plus three equals six")
- 5 + 4 + 2 = 11 (see "associativity" below)
- 3 + 3 + 3 + 3 = 12 (see "multiplication" below)

There are also situations where addition is "understood" even though no symbol appears:
- A column of numbers, with the last number in the column underlined, usually indicates that the numbers in the column are to be added, with the sum written below the underlined number.
- A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number. For example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead.

The numbers or the objects to be added in general addition are called the terms, the addends, or the summands; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends. All of this terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root *deh₃- "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased". "Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare.
This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends. Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer.

Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.

Combining sets

Possibly the most fundamental interpretation of addition lies in combining sets:
- When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections.

This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics; for the rigorous definition it inspires, see Natural numbers below. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers. One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than just combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.

Extending a length

A second interpretation of addition comes from extending an initial length by a given length:
- When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.

The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.

Addition is commutative, meaning that one can reverse the terms in a sum and the result is the same. Symbolically, if a and b are any two numbers, then
- a + b = b + a.
The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law".

A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression
- "a + b + c"
be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that
- (a + b) + c = a + (b + c).
For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3).
Not all operations are associative, so in expressions with other operations like subtraction, it is important to specify the order of operations.

Identity element

Adding zero to any number does not change the number; zero is the identity element for addition. Symbolically, for any a,
- a + 0 = 0 + a = a.
This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.

In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a. Because of this succession, the value of a + b can also be seen as the bth successor of a, making addition iterated succession.

To numerically add physical quantities with units, they must first be expressed with common units. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis.

Performing addition

Innate ability

Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected. A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies. Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5. Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cottontop tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training.

Discovering addition as children

Typically, children first master counting. When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers. Most discover it independently.
With additional experience, children learn to add more quickly by exploiting the commutativity of addition and counting up from the larger number, in this case starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that 6 + 6 = 12 and then reason that 6 + 7 is one more, or 13. Such derived facts can be found very quickly, and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently.

Decimal system

The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient:

- Commutative property: Mentioned above, using the pattern a + b = b + a reduces the number of "addition facts" from 100 to 55.
- One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition.
- Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, in the teaching of arithmetic, some students are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero.
- Doubles: Adding a number to itself is related to counting by two and to multiplication. Doubles facts form a backbone for many related facts, and students find them relatively easy to grasp.
- Near-doubles: Sums such as 6 + 7 = 13 can be quickly derived from the doubles fact 6 + 6 = 12 by adding one more, or from 7 + 7 = 14 by subtracting one.
- Five and ten: Sums of the form 5 + x and 10 + x are usually memorized early and can be used for deriving other facts. For example, 6 + 7 = 13 can be derived from 5 + 7 = 12 by adding one more.
- Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.

As students grow older, they commit more facts to memory, and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly.

The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right; if the sum of a column exceeds nine, the extra digit is "carried" into the next column (a sketch in code follows the next paragraph). An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many other alternative methods.

Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier.
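Here is a minimal Python sketch of the standard column-by-column algorithm with carrying described above (my own illustration, not part of the article; it assumes the addends are given as base-10 digit strings):

    def add_decimal(a: str, b: str) -> str:
        # Align the addends, then add columns from the ones place,
        # carrying a 1 whenever a column sum reaches ten.
        a, b = a.zfill(len(b)), b.zfill(len(a))
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry
            digits.append(str(total % 10))
            carry = total // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_decimal("999", "1"))     # 1000
    print(add_decimal("1234", "567"))  # 1801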
Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. It made use of an ingenious gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century and among the earliest automatic digital computing devices. Pascal's calculator was limited by its carry mechanism, which forced its wheels to turn only one way, so it could add but, to subtract, the operator had to use the method of complements, which required as many steps as an addition. Pascal was followed by Giovanni Poleni, who built the second functional mechanical calculator in 1709, a calculating clock, which was made of wood and which could, once set up, multiply two numbers automatically.

Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs.

Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum this must be explicitly requested, typically with the statement a = a + b. Some languages such as C or C++ allow this to be abbreviated as a += b.

Addition of natural and real numbers

To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route.)

Natural numbers

There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows:
- Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B). Here, A ∪ B is the union of A and B.
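A tiny Python sketch (mine, not the article's) of this set-theoretic definition, using disjoint sets of labeled tokens:

    # a + b as the cardinality N(A ∪ B) of the union of two disjoint sets
    A = {"apple1", "apple2", "apple3"}   # N(A) = 3
    B = {"pear1", "pear2"}               # N(B) = 2
    assert A.isdisjoint(B)
    print(len(A | B))                    # 5, so 3 + 2 = 5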
An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.

The other popular definition is recursive:
- Let n⁺ be the successor of n, that is, the number following n in the natural numbers, so 0⁺ = 1, 1⁺ = 2. Define a + 0 = a. Define the general sum recursively by a + (b⁺) = (a + b)⁺. Hence 1 + 1 = 1 + 0⁺ = (1 + 0)⁺ = 1⁺ = 2.

Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N². On the other hand, some sources prefer to use a restricted Recursion Theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation. This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers.

Integers

The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:
- For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.

Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider. A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction:
- Given two integers a − b and c − d, where a, b, c, and d are natural numbers, define (a − b) + (c − d) = (a + c) − (b + d).

Rational numbers (fractions)

Real numbers

A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:
- Define a + b = {q + r : q ∈ a, r ∈ b}.

This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses. Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers.
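Looking back at the recursive definition of natural-number addition given above, it translates almost word for word into code; a sketch of mine, using b − 1 in place of the formal predecessor notation:

    def add(a: int, b: int) -> int:
        # a + 0 = a;  a + (b+) = (a + b)+
        if b == 0:
            return a
        return add(a, b - 1) + 1

    print(add(1, 1))   # 2, via 1 + 1 = 1 + 0+ = (1 + 0)+ = 1+ = 2
    print(add(3, 4))   # 7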
Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term:
- Define lim aₙ + lim bₙ = lim (aₙ + bₙ).

This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.

- There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains... —Alexander Bogomolny

There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory.

Addition in abstract algebra

In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a,b) is interpreted as a vector from the origin in the Euclidean plane to the point (a,b) in the plane. The sum of two vectors is obtained by adding their individual coordinates:
- (a,b) + (c,d) = (a+c, b+d).

In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. In geometry, the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori. The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.

Addition in set theory and category theory

A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation. In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as Direct sum and Wedge sum, are named to evoke their connection with addition.

Related operations

Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions. Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example.
On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.

Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:
- e^(a + b) = e^a · e^b.
This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.

There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.

Division is an arithmetic operation remotely related to addition. Since a/b = a·b⁻¹, division is right distributive over addition: (a + b) / c = a/c + b/c. However, division is not left distributive over addition; 1/(2 + 2) is not the same as 1/2 + 1/2.

The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance. The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals.

Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:
- a + max (b, c) = max (a + b, a + c).
For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity. Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.

Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:
- log (a + b) ≈ max (log a, log b),
which becomes more accurate as the base of the logarithm increases. The approximation can be made exact by extracting a constant h, named by analogy with Planck's constant from quantum mechanics, and taking the "classical limit" as h tends to zero:
- max (a, b) = lim (h → 0) h log (e^(a/h) + e^(b/h)).
In this sense, the maximum operation is a dequantized version of addition.
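A small numerical illustration (my addition, not the article's) of the log/max relationship and the classical limit just described:

    import math

    a, b = 2.0, 5.0
    # In the additive form: log(e^a + e^b) is close to max(a, b)
    # when a and b differ.
    print(math.log(math.exp(a) + math.exp(b)), max(a, b))

    # h * log(e^(a/h) + e^(b/h)) tends to max(a, b) as h tends to zero.
    for h in (1.0, 0.2, 0.05):
        print(h, h * math.log(math.exp(a / h) + math.exp(b / h)))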
Other ways to add

Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero. An infinite summation is a delicate procedure known as a series. Counting a finite set is equivalent to summing 1 over the set.

Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.

Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.

In literature

- In chapter 9 of Lewis Carroll's Through the Looking-Glass, the White Queen asks Alice, "And you do Addition? ... What's one and one and one and one and one and one and one and one and one and one?" Alice admits that she lost count, and the Red Queen declares, "She can't do Addition".
- In George Orwell's Nineteen Eighty-Four, the value of 2 + 2 is questioned; the State contends that if it declares 2 + 2 = 5, then it is so. See Two plus two make five for the history of this idea.

Notes

- From Enderton (p.138): "...select two sets K and L with card K = 2 and card L = 3. Sets of fingers are handy; sets of apples are preferred by textbooks."
- Devine et al. p.263
- Schwartzman p.19
- "Addend" is not a Latin word; in Latin it must be further conjugated, as in numerus addendus "the number to be added".
- Karpinski pp.56–57, reproduced on p.104
- Schwartzman (p.212) attributes adding upwards to the Greeks and Romans, saying it was about as common as adding downwards. On the other hand, Karpinski (p.103) writes that Leonard of Pisa "introduces the novelty of writing the sum above the addends"; it is unclear whether Karpinski is claiming this as an original invention or simply the introduction of the practice to Europe.
- Karpinski pp.150–153
- See Viro 2001 for an example of the sophistication involved in adding with sets of "fractional cardinality".
- Adding it up (p.73) compares adding measuring rods to adding sets of cats: "For example, inches can be subdivided into parts, which are hard to tell from the wholes, except that they are shorter; whereas it is painful to cats to divide them into parts, and it seriously changes their nature."
- Kaplan pp.69–71
- Wynn p.5
- Wynn p.15
- Wynn p.17
- Wynn p.19
- F. Smith p.130
- Carpenter, Thomas; Fennema, Elizabeth; Franke, Megan Loef; Levi, Linda; Empson, Susan (1999). Children's mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann. ISBN 0-325-00137-5.
- Henry, Valerie J.; Brown, Richard S. (2008).
"First-grade basic facts: An investigation into teaching and learning of an accelerated, high-demand memorization standard". Journal for Research in Mathematics Education 39 (2): 153–183. doi:10.2307/30034895. - Fosnot and Dolk p. 99 - The word "carry" may be inappropriate for education; Van de Walle (p.211) calls it "obsolete and conceptually misleading", preferring the word "trade". - Truitt and Rogers pp.1;44–49 and pp.2;77–78 - Jean Marguin, p. 48 (1994) ; Quoting René Taton (1963) - See Competing designs in Pascal's calculator article - Flynn and Overman pp.2, 8 - Flynn and Overman pp.1–9 - Karpinski pp.102–103 - The identity of the augend and addend varies with architecture. For ADD in x86 see Horowitz and Hill p.679; for ADD in 68k see p.767. - Enderton chapters 4 and 5, for example, follow this development. - California standards; see grades 2, 3, and 4. - Baez (p.37) explains the historical development, in "stark contrast" with the set theory presentation: "Apparently, half an apple is easier to understand than a negative apple!" - Begle p.49, Johnson p.120, Devine et al. p.75 - Enderton p.79 - For a version that applies to any poset with the descending chain condition, see Bergman p.100. - Enderton (p.79) observes, "But we want one binary operation +, not all these little one-place functions." - Ferreirós p.223 - K. Smith p.234, Sparks and Rees p.66 - Enderton p.92 - The verifications are carried out in Enderton p.104 and sketched for a general field of fractions over a commutative ring in Dummit and Foote p.263. - Enderton p.114 - Ferreirós p.135; see section 6 of Stetigkeit und irrationale Zahlen. - The intuitive approach, inverting every element of a cut and taking its complement, works only for irrational numbers; see Enderton p.117 for details. - Textbook constructions are usually not so cavalier with the "lim" symbol; see Burrill (p. 138) for a more careful, drawn-out development of addition with Cauchy sequences. - Ferreirós p.128 - Burrill p.140 - The set still must be nonempty. Dummit and Foote (p.48) discuss this criterion written multiplicatively. - Rudin p.178 - Lee p.526, Proposition 20.9 - Linderholm (p.49) observes, "By multiplication, properly speaking, a mathematician may mean practically anything. By addition he may mean a great variety of things, but not so great a variety as he will mean by 'multiplication'." - Dummit and Foote p.224. For this argument to work, one still must assume that addition is a group operation and that multiplication has an identity. - For an example of left and right distributivity, see Loday, especially p.15. - Compare Viro Figure 1 (p.2) - Enderton calls this statement the "Absorption Law of Cardinal Arithmetic"; it depends on the comparability of cardinals and therefore on the Axiom of Choice. - Enderton p.164 - Mikhalkin p.1 - Akian et al. p.4 - Mikhalkin p.2 - Litvinov et al. p.3 - Viro p.4 - Martin p.49 - Stewart p.8 - Bunt, Jones, and Bedient (1976). The historical roots of elementary mathematics. Prentice-Hall. ISBN 0-13-389015-5. - Ferreirós, José (1999). Labyrinth of thought: A history of set theory and its role in modern mathematics. Birkhäuser. ISBN 0-8176-5749-5. - Kaplan, Robert (2000). The nothing that is: A natural history of zero. Oxford UP. ISBN 0-19-512842-7. - Karpinski, Louis (1925). The history of arithmetic. Rand McNally. LCC QA21.K3. - Schwartzman, Steven (1994). The words of mathematics: An etymological dictionary of mathematical terms used in English. MAA. ISBN 0-88385-511-9. - Williams, Michael (1985). 
A history of computing technology. Prentice-Hall. ISBN 0-13-389917-9. - Elementary mathematics - Davison, Landau, McCracken, and Thompson (1999). Mathematics: Explorations & Applications (TE ed.). Prentice Hall. ISBN 0-13-435817-1. - F. Sparks and C. Rees (1979). A survey of basic mathematics. McGraw-Hill. ISBN 0-07-059902-5. - Begle, Edward (1975). The mathematics of the elementary school. McGraw-Hill. ISBN 0-07-004325-6. - California State Board of Education mathematics content standards Adopted December 1997, accessed December 2005. - D. Devine, J. Olson, and M. Olson (1991). Elementary mathematics for teachers (2e ed.). Wiley. ISBN 0-471-85947-8. - National Research Council (2001). Adding it up: Helping children learn mathematics. National Academy Press. ISBN 0-309-06995-5. - Van de Walle, John (2004). Elementary and middle school mathematics: Teaching developmentally (5e ed.). Pearson. ISBN 0-205-38689-X. - Cognitive science - Baroody and Tiilikainen (2003). "Two perspectives on addition development". The development of arithmetic concepts and skills. p. 75. ISBN 0-8058-3155-X. - Fosnot and Dolk (2001). Young mathematicians at work: Constructing number sense, addition, and subtraction. Heinemann. ISBN 0-325-00353-X. - Weaver, J. Fred (1982). "Interpretations of number operations and symbolic representations of addition and subtraction". Addition and subtraction: A cognitive perspective. p. 60. ISBN 0-89859-171-6. - Wynn, Karen (1998). "Numerical competence in infants". The development of mathematical skills. p. 3. ISBN 0-86377-816-X. - Mathematical exposition - Bogomolny, Alexander (1996). "Addition". Interactive Mathematics Miscellany and Puzzles (cut-the-knot.org). Archived from the original on 6 February 2006. Retrieved 3 February 2006. - Dunham, William (1994). The mathematical universe. Wiley. ISBN 0-471-53656-3. - Johnson, Paul (1975). From sticks and stones: Personal adventures in mathematics. Science Research Associates. ISBN 0-574-19115-1. - Linderholm, Carl (1971). Mathematics Made Difficult. Wolfe. ISBN 0-7234-0415-1. - Smith, Frank (2002). The glass wall: Why mathematics can seem difficult. Teachers College Press. ISBN 0-8077-4242-2. - Smith, Karl (1980). The nature of modern mathematics (3e ed.). Wadsworth. ISBN 0-8185-0352-1. - Advanced mathematics - Bergman, George (2005). An invitation to general algebra and universal constructions (2.3e ed.). General Printing. ISBN 0-9655211-4-1. - Burrill, Claude (1967). Foundations of real numbers. McGraw-Hill. LCC QA248.B95. - D. Dummit and R. Foote (1999). Abstract algebra (2e ed.). Wiley. ISBN 0-471-36857-1. - Enderton, Herbert (1977). Elements of set theory. Academic Press. ISBN 0-12-238440-7. - Lee, John (2003). Introduction to smooth manifolds. Springer. ISBN 0-387-95448-1. - Martin, John (2003). Introduction to languages and the theory of computation (3e ed.). McGraw-Hill. ISBN 0-07-232200-4. - Rudin, Walter (1976). Principles of mathematical analysis (3e ed.). McGraw-Hill. ISBN 0-07-054235-X. - Stewart, James (1999). Calculus: Early transcendentals (4e ed.). Brooks/Cole. ISBN 0-534-36298-2. - Mathematical research - Akian, Bapat, and Gaubert (2005). "Min-plus methods in eigenvalue perturbation theory and generalised Lidskii-Vishik-Ljusternik theorem". INRIA reports. arXiv:math.SP/0402090. - J. Baez and J. Dolan (2001). "From Finite Sets to Feynman Diagrams". Mathematics Unlimited— 2001 and Beyond. p. 29. arXiv:math.QA/0004133. ISBN 3-540-66913-2. - Litvinov, Maslov, and Sobolevskii (1999). 
Idempotent mathematics and interval analysis. Reliable Computing, Kluwer. - Loday, Jean-Louis (2002). "Arithmetree". J. Of Algebra 258: 275. arXiv:math/0112034. doi:10.1016/S0021-8693(02)00510-0. - Mikhalkin, Grigory (2006). "Tropical Geometry and its applications". To appear at the Madrid ICM. arXiv:math.AG/0601041. - Viro, Oleg (2001). Dequantization of real algebraic geometry on logarithmic paper. In Cascuberta, Carles; Miró-Roig, Rosa Maria; Verdera, Joan et al. "European Congress of Mathematics: Barcelona, July 10–14, 2000, Volume I". Progress in Mathematics (Basel: Birkhäuser) 201: 135–146. arXiv:0005163. ISBN 3-7643-6417-3. - M. Flynn and S. Oberman (2001). Advanced computer arithmetic design. Wiley. ISBN 0-471-41209-0. - P. Horowitz and W. Hill (2001). The art of electronics (2e ed.). Cambridge UP. ISBN 0-521-37095-7. - Jackson, Albert (1960). Analog computation. McGraw-Hill. LCC QA76.4 J3. - T. Truitt and A. Rogers (1960). Basics of analog computers. John F. Rider. LCC QA76.4 T7. - Marguin, Jean (1994). Histoire des instruments et machines à calculer, trois siècles de mécanique pensante 1642-1942 (in French). Hermann. ISBN 978-2-7056-6166-3. - Taton, René (1963). Le calcul mécanique. Que sais-je ? n° 367 (in French). Presses universitaires de France. pp. 20–28.
http://en.wikipedia.org/wiki/Addition
There were few common elements in the militia organization to be found among the southern states. Virginia and South Carolina along the sea coast were heavily populated, whereas in most of North Carolina the government had the greatest difficulty finding enough men within a day's ride to make up a militia company. The greatest problem for all the southern colonies came in organizing the militia on the frontier. The principal, if not exclusive, reason why the southern colonies created a militia was to combat the native Americans, with whom clashes occurred almost constantly from the earliest days forward. The second reason the southern militias were formed was to contain the growing slave populations, which, in some areas, outnumbered the slave-owning population. Virginia, and occasionally the other southern colonies, used the militia to contain the growing number of indentured servants and convict laborers. While the northern and middle provinces had enlisted indentured servants, Amerindians and even black slaves in their militias, southern colonies were rarely prepared to admit any of these classes into their ranks. These exclusions were generally enforced despite the fact that the pool of eligible white, free males was so greatly reduced that the southern militia system was unable to function well. The militia system in the southern states was able to provide protection because, for the most part, the aborigines were too weak and divided to offer much of a challenge, and because several civilized tribes sided with the colonists. In Virginia and a part of Maryland, the Algonquin tribes, especially the Powhatan Confederation, fed and sustained the English settlers more frequently than they fought with them. In Georgia there were essentially no problems with the Amerindians until the English stirred them up at the time of the American War for Independence. The southern tribes, such as the Cherokee, Catawba, Yamasee and Creek, were essentially agricultural peoples who were more settled than the northern tribes. The large body of Cherokee remained generally friendly until 1759. South Carolina's Catawba were removed far enough from the settlers on the coast that they did not believe that the whites were a threat until about the end of the eighteenth century. Because the Spanish settlers in Florida favored the local tribes, the great Creek nation, traditional enemies of the Florida tribes, sided with the English, who hated the Spanish. The principal Indian problems came from the Yamasee, a displaced tribe from Florida who fought the Carolinians as early as 1715. Also, the southern colonies faced essentially no rivalry from another European power of the kind the northern colonies endured in the contest between the French and the English for supremacy in North America. Occasionally, Georgia experienced incursions from the Spanish; and in the Seven Years' War the French presented a few minor problems in Georgia, the Carolinas and Virginia. In that war Virginia, Maryland and, to a far lesser degree, the Carolinas did supply troops to fight against the French in western Pennsylvania. With a substantial portion of the southern population being slaves, the militias in the south took on a special duty that had no counterpart in the north: they ran slave patrols. Slaves generally could not carry or own firearms. On each plantation one slave could be licensed to carry a gun for the purpose of hunting down predators and otherwise protecting the master's property.
For cause, additional trusted slaves might be entrusted with arms, usually shotguns. The slave patrols were always staffed with white militiamen. Typical of laws enacted in response to real or imagined slave revolts was that resolved by the Norfolk, Virginia, city council, of 7 July 1741.(1)
Resolved by the Common Council that for the future the inhabitants of this Borough shall, to prevent any invasion or insurrection, be armed at the Church upon Sundays, or other Days of Worship or Divine Service, upon the penalty of five shillings . . . . Josiah Smith, Mayor
Were these duties not tied to the militia by law and custom, one might imagine that slave patrols were logically tied more to the posse comitatus. This ancient Anglo-Saxon legal term refers to the power or force of the county. In medieval times a sheriff could summon all able-bodied men, 15 years of age or older, to assist in the enforcement of the law, maintenance of the peace and the pursuit and apprehension of felons and runaway slaves and servants. In the United States a sheriff may summon a posse to search for a criminal or assist in making an arrest.(2) Southern militia constituted a standing posse, available at any time and often deployed on a regular schedule, whether or not there was suspicion of a crime or a runaway slave. After regular forces and select militia units were created in the south, militia units often had no real function or duties save for slave patrols.
During the Revolution the southern militia served primarily as guerrillas who harassed the British army, countered the tory militias and auxiliaries, occupied territory and prevented extensive British control of populations and land. Many southern political leaders, however, treated the militia as an alternative to, or substitute for, the regular standing army, rather than as an auxiliary.(3)
I should like to express my appreciation to the Marguerite Eyer Wilbur Foundation and the Second Amendment Foundation for their support. Much credit is also due to two devoted assistants, Damon Dale Weyant and Kevin Ray Spiker, Jr. Professor W. Reynold McLeon offered valuable suggestions, as did my anonymous referees. My esteemed department chair, Allan Hammock, supplied support for copying. Mrs Mildred Moyers and Mrs Christine Chang of the West Virginia University Library system were most courteous and helpful in locating and gathering materials for me.
The Virginia militia was of greatest significance in the seventeenth century, during which time its development passed through several stages. The first quarter of the seventeenth century was marked by improvisation and experimentation as the colonists attempted to develop a formula which would work in the colony's particular circumstances. In the second quarter of the century "this system was reorganized, refined, and repeatedly tested in combat." In the third quarter the colonial leaders excluded slaves and indentured servants, and concentrated on intensive training of specialized units, such as the frontier rangers. Virtually all adult, free, white males answered the muster call. Following Bacon's Rebellion, 1675-77, the base of recruitment was further reduced and a gentlemen's militia, similar to the militia found in Stuart England, emerged. The bulk of the population after 1677 constituted an under-utilized, rarely mustered reserve, similar to the medieval great fyrd. After 1680 few poor men served in any militia capacity, although some might enlist in a crown regiment for the pay.
The chronic economic crises had reduced much of the population to poverty, so most of the poor were delighted to discover that the militia law was not going to be universally enforced. To most poor men the reduction of military duty meant that they had more time to plow and harvest and could pocket the money they might otherwise have had to lay out for militia arms, supplies, gunpowder and accoutrements. The government began to establish central armories and gunpowder magazines rather than depending on the populace to store individual supplies. Changes in the number and distribution of guns as Virginia approached the eighteenth century were functions of economic and social factors.(4)
In 1606 the English King provided a charter to the Virginia Company of London. It required the civil authority to recruit and train a militia and otherwise prepare defenses to "encounter, repulse and resist" all the king's and the colony's enemies, suppress insurrection and treason and to enforce the law.(5) The Virginia Charter of 1612 required the government to provide the citizenry with "Armour, Weapons, Ordnance, Munition, Powder [and] Shot" for its defense.(6)
The first settlers arrived on 24 May 1607 on the Sarah Constant, Goodspeed and Discovery, establishing Fort James, soon named Jamestown. The Company sent John Smith, a hardened military veteran, to establish a self-defense force. Upon his initial review of the men Smith observed that they were "for most part of such tender educations and small experience in martial accidents" as to be essentially useless. He immediately undertook to train them to "march, fight and skirmish" and to form an "order of battle" wherewith to provide some defense against the native aborigine. He exercised the company every Saturday night. Smith especially emphasized forming a proper battle order designed for the New World.(7)
Smith departed Virginia in 1609, but there was no change in the exercise of the martial arts, since the new leaders sent by the Virginia Company were also veterans of many European battles. If anything, the new military leaders intensified the militarization of the colony. Much of Smith's work had come unraveled because of famine, disease and deaths at the hands of the Indians. Understanding that development and maintenance of a militia was the primary necessity, the "excellent old soldiers" divided the colonists into "several Companies for war." They appointed an officer for each fifty men "to train them at convenient times and to teach them to use their arms and weapons."(8)
The primary problems with the defense of the colony were not military. The colonists had settled on one of the most inhospitable and undesirable pieces of land available, and diseases of all kinds reduced the numbers of colonists. Famine was also a constant threat.
By 1610 the Virginia militia was sufficiently powerful to take the offensive against the natives. Beginning with small forays into Amerindian territory, the militia became emboldened with small victories in its first campaign. In 1614 the militia captured Powhatan's daughter Pocahontas, and this brought the first Indian campaign to a halt, with the militia having tasted victory for the first time. Initial successes and the removal of the immediate threat to the settlement brought a certain inertia, and the militia ceased its frequent practices.
Peace also brought an end to the military dictatorship of the militia company and its professional officers.(9)
In 1613 Sir Thomas Dale concluded a treaty with the Chickahominies under Powhatan, who were now closely allied because of the marriage of Pocahontas. Among other provisions, the tribe agreed that all members were now Englishmen, subject to the king, and that "every fighting man at gathering their Corn should bring two Bushels to the Store as Tribute, for which they should receive as many hatchets." They also agreed to supply 300 men to join the colonial militia to fight against the Spanish or any other enemy of the Crown.(10) On the whole it must be said that the Powhatan Confederation sustained the colonists more frequently than it made war upon them.
The Powhatan Algonquins initially did not view the settlers as much of a threat. Reports came to Powhatan that the English had neither much corn nor many trees in England, and thus were extremely poor. For their part, the English saw the Chickahominies as a potential threat, although within a sixty-mile radius of Jamestown there were few villages of more than fifty inhabitants, and the entire Amerindian population was probably less than 5000, of which perhaps 1500 were warriors. Tribes allied with Powhatan could have raised fewer than 2500 warriors. The colonists could match these numbers, and they were armed with firearms and iron and steel weapons. Nonetheless, the governor published an edict that "no Indian should be taught to shoot with Guns, on Pain of Death to Teacher and Learner."(11)
In 1618 the Virginia Company reorganized, with a wholly civilian rule replacing the military one. No civil officer held military rank or was selected because of his military expertise or service. Another part of that reorganization brought about a change in the mission of the militia. Henceforth the militia was to be a defensive force, prepared only to keep the peace. The civil officers issued a stern warning against stirring up the Amerindians or violating any part of the negotiated peace. The new officers discouraged private ownership of martial arms and neglected the militia, essentially unilaterally disarming the colony.(12)
On 24 July 1621 Governor Francis Wyatt issued three important orders. First, he instructed masters and apprentices to remain loyal to their trades and not give them up to make quick and easy profits "planting tobacco or any such useless commodity." Second, he ordered that any servants condemned to punishment for "common offenses" be placed to work on public works projects for the benefit of the whole colony. Third, he ordered that guards be placed around public fields for the protection of farmers.(13)
During the first fifteen years of Virginia's existence as many as 10,000 English settlers and their slaves had come to the colony, but in 1622 perhaps only about 2200 remained. Many died and others returned to England. The temporary peace with the Amerindians did not last. In 1622 the Amerindians, angered at the rapid expansion of the colony, made war against the whites along the James River. On 22 March 1622 an Amerindian attack left 347 colonists dead, although the colony was saved because Christian Indians warned some men of the impending attack. Governor Francis Wyatt led the survivors into palisaded towns where they took refuge against the marauders.
As hunger, thirst and disease again ravaged the colonists, Wyatt ordered that available military stores be brought forth from whatever storage areas had been created when the colony demilitarized. With almost no training, save for a few men's distant memories of the earlier discipline, the militia sallied forth. With more luck than good management, Wyatt managed to win more skirmishes than he lost. Firearms and steel-edged weapons proved decisive against the stone-age weapons of the aborigine. The Amerindians had planned little for a campaign and had laid up few supplies, and were therefore as vulnerable in their own way as the undrilled colonial militia.
In March 1624 Wyatt recommended additional laws be enacted by the legislature to reduce the threat from the Amerindians. Article 23 required that all homes be palisaded, article 24 required the people to go about armed at all times, and article 28 set a night watch for every community. Article 32 provided for state support of families of men killed, and for men disabled, in action against the Amerindians.(14)
The colonists appealed to England for assistance. On 17 July the colonists received a reply. "His majesty was so far sensible of the loss of his subjects and of the present estate of the Colony . . . he was graciously pleased to promise them assistance . . . . It [the petition] was answered [with] munition . . . whereby they might be enabled to take a just revenge of these treacherous Indians . . . . It pleased his Majesty to promise them some arms out of the Tower as was desired . . . ." The king sent 100 brigandines, also called plate coats; 40 jackets of mail; 400 jerkins or shirts of mail; 200 skull caps and an unspecified quantity of halberds and spears. This initial shipment was followed by a shipment of 20 barrels of gunpowder and 100 firearms of unspecified type.(15)
Wyatt decided that he would not be caught unprepared again. He also knew that he could not count on support from the financially troubled Virginia Company, so he had no choice but to revitalize the militia and revamp the militia law. Virginia statutes of 1622 provided the death penalty for servants who ran away and traded or sold guns to the Amerindians;(16) and statutes of 1623 provided that "no man go or send abroad without a sufficient will [well] armed" and that "men go not to worke in the ground without their arms." Further, "the commander of every Plantation [is] to take care that there be sufficient of powder and ammunition within their Plantation under his command and their pieces [of war equipment] fixed and their armes compleat."(17)
In 1624 the militia law provided that militiamen wounded or otherwise injured while in the public service would receive public support and that the families of those killed while in service would be supported at the public expense. Survivors of the early years were exempted from further compulsory militia service. When a militiaman was impressed into duty his neighbors were required to spend one day a week assisting with his duties and chores at home.(18)
Shortly after the enactment of the new militia law Wyatt received word that the Virginia Company had failed and that hereafter the colony would be under the Crown. The Stuart kings provided no more assistance to the colony than had the Virginia Company. Defense remained a local obligation. All able-bodied males between 16 and 60 years of age, excepting only older veterans and certain newcomers, were enrolled in the militia.
Those not serving in the militia were taxed for its support and were required to offer assistance on the farms of those who were in actual militia service. Gentlemen were to be placed in proper ranks, so that there was no social levelling and they were not reduced to serving as common soldiers. Regular drill was mandated by law. The law created officers whose duty it was to "exercise and drill them, whereby they may be made more fit for service upon any occasion." The legislature also ordered that a regular system of defensive shelters be built and maintained.(19)
In October 1629 the General Assembly enacted a new series of militia laws. Plantation overseers were to reorganize their militias in preparation for new wars against the native aborigine. Three expeditions, to begin in November 1629 and March and July 1630, were designed to "doe all manner of spoile and offence to the Indians that may possibly bee effected." So successful was the first expedition that the legislature ordered that the war be prosecuted without the possibility of surrender or peace.(20) At this time the Virginia militia could muster no fewer than 2000 men. The second war with the Powhatan Indians continued until 1632, but the weight of numbers and superiority of equipment enabled the colonists to win. This time, following the Second Powhatan War, there was no disassembly of the militia.
On 21 February 1631 Governor Harvey recommended that the legislature place a tax upon ships entering and leaving Virginia harbors. This tax, Harvey wrote, would give the colony "a continuall supply of ammunition." The House of Burgesses agreed and enacted this dedicated tax.(21)
In 1632 the Virginia House of Burgesses ordered that every physically fit free white male bring his gun to church services so that, immediately following Sunday service, he might join his neighbors in exercising with it. No settler was even to speak to an Amerindian under penalty of the law. Militiamen were authorized to kill any Amerindian found "lurking" or thought to be stealing cattle. In 1633 the legislature set the new penalty for selling guns to the Amerindians as the loss of goods and chattels and imprisonment for life.(22)
In 1634 the militia was reorganized following the lines of the eight existing counties. The governor appointed county lieutenants and other officers in each county. In 1639 the governor issued a call for select militiamen, fifteen from each county, to punish one or more bands of marauding natives. While each militiaman provided his own gun and edged weapon, his neighbors otherwise supplied him; they also looked after his farm, each providing one day's service to the militiaman.(23) Under the act of 6 January 1639, all able-bodied men were made liable for militia service and were to provide themselves with arms and ammunition "or be fined at the pleasure of the Governor and Council." Slaves were specifically exempted from the obligation, for the act contained the language, "all persons except negroes."(24) In 1633 the colony had recognized the importance of musicians and appointed drummers, paying them 1000 pounds of tobacco and six barrels of corn per year.(25)
In the earliest days Virginia struggled to provide enough food to ward off starvation. After the first few years the colony could afford to sustain a militia and, with basic food, clothing and shelter provided for, mandate attendance at militia musters without disrupting the colony.
Sundays became the regular militia training days, combining religious, military and social functions. Forty years later the militia trained only three times a year. In that time span much of the threat from the native aborigine had been contained. But another development, the trained specialist, had emerged, usually in the guise of frontiersmen, who knew how to fight the Indians on their own terms. These specialists served at times for pay and at other times as volunteer militia. There were also frontier forts to be garrisoned, and this required a considerable number of men. Demands were so great that Virginia had to resort to paying some men, and since many volunteers had to be paid, there was a constant drain on the treasury.
In the early eighteenth century the frontier was generally too impoverished to defend itself adequately, so the colony had to rely primarily on retaliation. The frontier, with its large plantations and farms, was but sparsely settled, and the loss of a few men from a particular area usually meant disaster. The families of the settlers could not defend themselves and would often abandon the land and return to the east. There were few fortified places or garrison houses on the frontier wherein settlers or their families could take refuge except for the scattered forts. Neither were there sufficient resources on the frontier to sustain the militia when units were deployed there.
In 1642 the Lords of Trade sent a new set of orders to Governor Berkeley, including instructions concerning the militia.
11. To the end the country may be the better served against all Hostil Invasions it is requisite that all persons from the age of 16 to 60 be armed with arms, both offensive and defensive. And if any person be defective in this kind, wee strictly charge you to command them to provide themselves of sufficient arms within one year or sooner if possible it may be done, and if any shall fail to be armed at the end of the Term limited we will that you punish them severely.
12. And for that Arms without the Knowledge of the use of them are of no effect wee ordain that there be one Muster Master Generall, appointed by us for the Colony, who shall 4 times in the year and oftener (if cause be) not only view the arms, ammunition and furniture of every Person in the Colony, but also train and exercise the people, touching the use and order of arms and shall also certify the defects if any be either of appearance or otherwise to you the Governor and Councill. . . . And for his competent maintenance we will that you, the Governor and Councill, so order the business at a General Assembly that every Plantation be rated equally according to the number of persons, wherein you are to follow the course practised in the Realm of England.
13. That you cause likewise 10 Guarders to be maintained for the Port at Point Comfort. And that you take course that ye Capt of ye said Port have a competent allowance for his services there. Also that the said fort be well kept in Reparation and provided with ammunition.
14. That new Comers be exempted the 1st yeare from going in p'son or contributing to the wars Save only in defence of the place where they shall inhabit and that only when the enemies shall assail them, but all others in the Colony shall go or be rated to the maintenance of the war proportionately to their abilitys, neither shall any man be priviledged for going to the warr that is above 16 years old and under 60, respect being had to the quality of the person, that officers be not forced to go as private soldiers or in places inferior to their Degrees, unless in case of supreme necessity.(26)
Virginia pursued a conscious plan of confrontation with the Amerindians between 1622 and 1644, a policy aimed at extermination or at least complete pacification. Initially Virginia's political authorities considered all Amerindians to be enemies and hostiles, to be eliminated, adopting for perhaps the first time the maxim that the only good Indian was a dead one. There was almost constant warfare, although the number of real battles was few. In such a war of attrition, the demands on the militia were great, and men groaned under the constant militia musters. An essential ingredient of the policy was constant and unremitting harassment of the enemy. The legislature again in 1643 ordered that no quarter be given to warring Amerindian tribes. This law essentially allowed the militia to attack villages at will. The militiamen's home county courts paid the expenses of these roving bands of terrorists.(27)
On 17 April 1643, the Northampton County Court ordered that "no person or persons whatsoever within the County of Northampton except those of the Commission, shall from henceforth travel from house to house within said county without a sufficient fixed gun with powder and shot." Penalty for non-compliance was 100 pounds of tobacco, with the possibility of imprisonment for repeated failures to carry a gun.(28) Following the enactment of this local legislation, the Virginia legislature enacted a similar law. That law required that "every family shall bring with them to Church on Sundays, one fixed and serviceable gun . . . under penalty of ten pounds of tobacco." White male servants who were required otherwise to bear arms were to receive guns from their masters. If they failed to carry their guns to church they were subject to the penalty of "twenty lashes, well laid on."(29)
In 1644 the Powhatan Indians again attacked the outlying and isolated farms along the James River. The governor ordered the militia, some 300 strong, into the field, where, for six weeks, they pursued the Indians, burned their crops and sacked their towns. This marked the end of the threat from the exhausted and depleted Powhatan tribes. Following the Third Powhatan War, the governor divided Virginia into two basic military districts, one north, and one south, of the James River. Each district made its own military policies and created its own strategy.(30) In 1651 the militia was again reorganized along county lines, following the model created by Massachusetts in 1643.(31)
In February 1645 the legislature had authorized the association of its three principal counties to create the first regimental structure in the colony. The law also designated the militia as the official source for soldiers. For each fifteen militiamen the counties were to furnish one soldier.(32) The system of drafting one man among each 15 taxables proved to be quite unpopular, especially when the pool of 15 could not agree upon which man should serve.
There was widespread resistance to the drafting of militia, forcing the legislature to pass an explanatory act in 1648. The colony augmented the draft with some vague promises of scalp bounties, plunder, profits from sales of prisoners and land grants for service, inducements indispensable to sustaining support and morale. These laws were repealed only after peace had been established.
In 1645 the legislature pursued a war against the Mansimum and their allies by "cutting up their corn and doing or performing any act of hostility against them" to such a degree that their towns were destroyed and the Amerindians reduced to hiding in the woods and ambushing whatever whites they might fall upon.(33) Although a populous and prosperous colony, Virginia could not sustain the costs of the constant Indian wars that this policy promoted. The colony attempted to support those wounded in Indian wars, or their widows and offspring, or at least to remit taxes upon those injured or widowed.(34)
By 1646 the colony adopted a kinder, gentler policy toward the Amerindians. The colony made peace with the Mansimum in October 1646 on its own terms. The Indians ceded all land between the falls of the James River and the York River, acknowledged the sovereignty of the English king, surrendered all firearms, and returned all runaway slaves, escaped prisoners and indentured servants. Indians who returned to their former homes could be killed on sight.(35)
The legislature considered several interesting ideas about "civilizing" and pacifying their former enemies. First, they would offer the chiefs a cow for every eight wolves' heads turned in. When the men came to collect, the churchmen would attempt to convert them to Christianity. Amerindian children could be brought into settlers' homes provided they be instructed in Christianity. Indian traders would be controlled and licensed and would guide clergymen to the villages.
As we have seen, Virginia had long attempted to contain the Amerindians by limiting their access to firearms and gunpowder, and this ban was to be continued.(36) On 25 November 1652, the colonial legislature passed a new law which provided,
Whereas divers of the Inhabitants of this [Northumberland] County doe employ Indians with guns & powder and shott, very frequently and usually to the great danger of a Massacre, the Court doth think fitt to declare and publish unto the whole county that if any person or persons who so ever shall with[in] 10 days after the date hereof deliver either gun powder or shott to any Indian under what pretence so ever shall be proceeded with all according to the Act of assembly in that case provided and that all manner of persons that have any guns out amongst the Indians after publication hereof shall get them in with all convenient speed and that no persons what so ever imploy any Indian at all nor supply them with powder and shott.(37)
In 1658 the Virginia House of Burgesses created a rudimentary militia act which required
that a provident supplie be made of gunn powder and shott to our owne people, and this strictly to bee lookt to by the officers of the militia, vizt., That every man able to beare armes have in his house a fitt gunn, two pounds of powder and eight pound of shott at least which are to be provided by every man for his family before the last of March next, and whosoever shall faile of makeing such provision to be fined fiftie pounds of tobacco to bee laied out by the county courts for a common stock of amunition for the county.(38)
In the same year, the legislature attempted to guarantee the natives' title to their land in the Shenandoah Valley and beyond, but still allowed, even authorized and financed, exploration of the area. The law still permitted the killing of an aborigine if he was suspected of "mischief." The legislature also permitted them to own guns, although there was no clear avenue for their sale or barter, or for supplying gunpowder, lead and flints.(39) It was simply a matter of time until additional land, especially in the fertile and beautiful Shenandoah Valley, was traded for trinkets, guns and supplies. Where title was obtained, the organization of a militia among the settlers was inevitable.
In 1660 John Powell carried a complaint to the legislature in which he alleged that Amerindians had encroached on his land, committing unspecified damages. The legislature authorized him to capture as many Amerindians as would satisfy his claim and sell them as slaves abroad. The local militia was authorized to assist him in rounding up the captives. In other cases over the next few years, the legislature attempted to protect the Amerindians' land, even to the point of ordering the militia to burn houses built on illegally obtained land. The legislature voided some questionable land conveyances, protecting the natives as if "the same had bin done to an Englishman."(40)
With the increasing encroachment of settlers into the western areas of Virginia, tribes on the frontier came under increasing pressure. Additionally, the Iroquois made occasional raids as far south as Maryland and Virginia. A few tribes found support from some of their traditional enemies. It appears that the Amerindians were beginning to understand that the tribes must either stand united or be decimated piecemeal.
As early as 1662 Virginia warned the western Amerindians that they must not encroach on settlements, raid villages or homesteads, or molest tributary Indians. The whites, fearing that an alliance was in the making, demanded that a number of children be surrendered from the Potomaks and allied northern tribes. If a white man was killed, the Amerindians in the closest village were to be held responsible.(41)
In 1666 Thomas Ludwell wrote a travelogue of Virginia. He observed the militia and reported to the Lords of Trade,
Every county within ye said Province hath a regiment of ffoot under ye command of a colonel and other inferior officers and many of them a troop of horse under ye command of a captain . . . . Great care is taken that ye respective officers doe train them and see their Armes [are] well fixed and truly, my Lords, I believe all to be in so good order as an Enemy would gain little advantage by attempting anything upon them.(42)
By 1675 Virginia was fighting the last of its great colonial Indian wars. The natives were in submission and most were nominally allied with the colony, which, in reality, meant that they were dependent upon Virginia for daily support and protection. The Senecas of the Five Nations stirred up the Susquehannocks and Piscataways along the Potomac, and a large combined force attacked the settlers in Maryland and northern Virginia. Six chiefs attempted to reestablish the peace, but were treacherously murdered. This outrage roused the Amerindians, who slew a hundred colonists in revenge. A second time the confederated tribes offered peace, and a second time their offer was rejected. The colonists were bent on revenge for the merciless slayings and wanted to exterminate the Indians.
Initially, Governor William Berkeley had sought to adopt a largely defensive posture which required a minimum number of troops. But the legislature supported the people who clamored for war and authorized the counties to call out their militias. It declared war and passed a number of laws designed to bring the militia units up to full strength. Taxes were increased to pay for the equipping and salaries of the militiamen.(43)
Meanwhile, the colony was torn with contentions incident to the Restoration, and these troubles culminated in what is known as Bacon's Rebellion. The depredations and outrages perpetrated by the Amerindians, and the stiffening Amerindian resistance, afforded the rebels their excuse for arming. In 1676 Nathaniel Bacon, Jr., led a group of settlers who applied great pressure on Berkeley to double the number of militia called for duty in order to launch a great all-out offensive against the Amerindians, designed to end the menace forever. Sir William Berkeley, Governor of the colony, called for a standing army of 500 levies drafted out of the militia units and paid for by an increase in public taxation. The planters who dominated the legislature objected, saying that the colony could not sustain the additional taxes.(44) Bacon, an articulate planter, made a counter-proposal, calling for a force of 1000 volunteers, funded by the spoils of war. The assembly was dominated by Bacon's followers, and it authorized the creation of the full force of one thousand militiamen by assigning quotas to each of the eighteen counties.
Berkeley correctly surmised that Bacon's mercenaries would plunder the wealthiest tribes, which were peaceful, and ignore the poorer ones that were warlike.(45) The uncivilized and more warlike aborigine had few desirable goods, whereas the more peaceful "civilized" Indians had considerable goods. Still, since Bacon was able to dominate the legislature, which became known as the Bacon Assembly, his legislation passed. His militia act attempted to distinguish between friendly and hostile Amerindians. The act declared that any Amerindian found outside his village was to be considered an enemy. All Amerindians had to surrender their arms, even guns that had been heretofore legally owned. They must agree not to hide, shelter, conceal, or even trade with, any warriors from other tribes, and had to deliver up any strangers who came among them. If the visitors were too strong for the hosts to capture, they must assist the militia in taking them. Each town must provide an accounting of its warriors, women and children. All Amerindians taken in battle were to be enslaved, with proceeds of their sale to be accounted as booty of war.(46)
Following the massacre of the relatively unarmed and peaceful Occaneechee in May 1676, and just before the anticipated slaughter of the similarly peaceful Pamunkeys, Berkeley ordered Bacon to disband and relinquish his command. Bacon marched against Berkeley, burned Jamestown, and assumed political control of the colony.(47) Commands were given by trumpet for the first time in the Virginia colony.(48)
The Bacon Assembly suspended all trade with the Amerindians, but this caused too great a loss to the traders, so trade was permitted with those adjudged to be friendly. Natives wishing to trade had to come unarmed. Two forty-day trading sessions were established north and south of the James River, with the governor and council receiving a percentage of the profits. At any point, whites might demand that any approaching native lay down his arms.(49)
Upon hearing that a British army was on its way from the Chesapeake area to restore order, Bacon was unshaken. He would merely adopt tactics learned in fighting the Amerindians. "Are we not acquainted with the country, so that we can lay ambuscades?" Bacon asked. "Can we not hide behind trees to render their discipline of no avail? Are we not as good or better shots than they?"(50) Bacon's position became gospel to the colonists and is something that might have been uttered by any of a large number of militiamen during the American Revolution.
In October 1676 Bacon died and Berkeley reestablished his authority. One thousand regular troops arrived, sent by the Stuarts from England, and a commission investigated Berkeley's alleged despotism. In May 1677 Berkeley returned to England, but died there on 9 July 1677, before the matter was settled. With Stuart troops firmly in charge, the remainder of Bacon's militia disbanded and melted back into the frontier. Since both of the principals were now dead, nothing more was done at court and, having received a pledge of renewed loyalty from the colonists, the troops were withdrawn.(51)
William Sherwood's account was hardly flattering to Bacon and his men, viewing them as seditious rebels.
"Ye Rabble giveing out they will have their owne Lawes, demanding ye Militia to be settled in them with such like rebellious practices."(52) Some settlers complained that they were forced to leave their farms and neglect their occupations and stand seacoast watch and garrison duty in outlying frontier forts, but received no compensation for serving their militia duty and that impressment into the militia was a cause of the rebellion. Others complained of the burden the law imposed by requiring them to buy arms to stand the hated militia duty. Having armed themselves, they found themselves disarmed by the same government which imposed the purchase upon them. "Wee have been compelled to buy ourselves Guns, Pistols and other Armes . . . [and] have now had them taken away from us, the which wee desire to be restored to us again."(53) The destruction of the Amerindians was essentially complete. The poor remnants that remained were of no great consequence, with most reduced to tue most wretched poverty. Tribal distinctions all but disappeared as the survivors struggled merely to exist. Berkeley in 1680 claimed that "the Indians our neighbours are absolutely subjected, so that there is no fear of them." Amerindian country was clear for western expansion.(54) Soon after Bacon's Rebellion, the North Carolina's government was threatened by a second popular uprising, known as Culpeper's Rebellion. As a protest against the arbitrary rule of Governor John Jenkins, Thomas Miller, unpopular leader of the proprietary faction, combined the functions of governor with the lucrative post of customs collector. On 3 December 1677 the anti-proprietary faction arrested and imprisoned Miller. Miller escaped and fled to England and put his case before the Privy Council. The governor considered calling out the militia to restore order and the home government considered dispatching troops from England. John Culpeper of Virginia defended the leaders of the anti-proprietary party. Meanwhile, the Earl of Shaftsbury, having decided that Miller had exceeded his authority, mediated the dispute, and the uneasy peace was made permanent. In 1679 the Assembly decided to construct four garrison-houses on the headwaters of the four great rivers, Potomac, Rappahannock, Mattapony and James, "and that every 40 tithables within this colony be assessed and be obliged to fitt out and sett forth one able and sufficient man and horse with furniture well and completely armed with a case of good pistols, carbine or short Gunn, and a sword." The settlers on the Rappahannock were to have "in readiness upon all occasions, at beate of drum, fifty able men well armed." Additionally, two hundred men were to be counted as reserves, to be called when needed. Major Lawrence Smith was to organize the militia and for this service was to receive 14,000 acres of land. William Bird was to have the same amount of land for organizing the militia near the fall on the James River.(55) In 1680 the assembly in Jamestown, Virginia, ordered that all persons of color be disarmed. Blacks were prohibited from carrying swords, clubs, guns or any other weapons for either offensive or defensive use. 
The assembly was likewise afraid of black assemblies because "the frequent meetings of considerable numbers of negroe slaves, under pretence of feasts and burialls is judged [to be] of dangerous consequence."(56) In 1705 the law was mitigated by substituting the word slave for negro, and by providing that "all and every such person or persons be exempted from serving either in horse or foot."(57)
At this point Virginia reconsidered its militia policy. Few poor men could realistically afford to buy their firearms and other militia supplies, so the colony undertook to finance many expenses for individual militiamen. The government could not afford both to maintain the militia and to provide static fortifications. By recruiting only among gentlemen the colony was freed from having to make contributions to the support of the militiamen. No formal law or edict disarmed the poor. They were merely relegated to a position as inactive militia. Disarmament occurred by attrition. No one inspected arms or mustered the great militia, and the poor neglected to maintain and update their arms.
In April 1684 Charles II approved a major change in the colony's militia law. The law is significant in several ways. It decreed the right, as well as the obligation, of colonists to own their weapons; and it protected the arms owned by the subjects from government confiscation.
For the encouragement of the inhabitants of this his majesties collony and dominion of Virginia, to provide themselves with arms and ammunition, for the defence of this his majesties country, and that they may appear well and compleatly furnished when commanded to musters and other the king's service which many persons have hitherto delayed to do; for that their arms have been imprest and taken from them. Be it enacted by the governour, council and burgesses of this present general assembly, and the authority thereof, and it is hereby enacted, that all such swords, musketts, pistolls, carbines, guns, and other armes and furniture, as the inhabitants of this country are already provided, or shall provide and furnish themselves with, for their necessary use and service, shall from henceforth be free and exempted from being imprest or taken from him or them, that already are provided or shall soe provide or furnish himselfe, neither shall the same be lyable to be taken by any distresse, seizure, attachment or execution. Any law, usage or custom to the contrary thereof notwithstanding.
And be it further enacted, That between this and the five and twentieth day of March, which shall be in the yeare of our Lord one thousand six hundred eighty six, every trooper of the respective counties of this country, shall furnish and supply himself with a good able horse, saddle, and all arms and furniture, fitt and compleat for a trooper, and that every foot soldier, shall furnish and supply himselfe, with a sword, musquet and other furniture fitt for a soldier, and that each trooper and foot souldier, be provided with two pounds of powder, and eight pounds of shott, and shall continually keep their armes well fixt, cleane and fitt for the king's service.
And be it further enacted, That every trooper, failing to supply himselfe within the time aforesaid, with such arms and furniture, and not afterwards keeping the same well fixt, shall forfeite four hundred pounds of tobacco, to his majesty, for the use of the county in which the delinquent shall live, towards the provideing of colours, drums and trumpetts therein, and every foot souldier soe failing to provide himselfe, within the time aforesaid, and not keeping the same well fixt, shall forfeit two hundred pounds of tobacco to his majesty, for the use aforesaid, and that all the militia officers of this country, take care to see the execution and due observation of this act, in their several and respective regiments, troops and companies.
And be it further enacted by the authority aforesaid, That every collonell of a regiment within this country, shall once every yeare, upon the first Thursday in October, yearly, cause a generall muster, and exercise of the regiment under his command, or oftener if occasion shall require. And that every captain or commander of any troop of horse or foot company, within this country, shall once at the least in every three months, muster, traine and exercise, the troop or company under his command, to the end, they may be the better fitted and enabled, for his majesties and the countryes service when they shall be commanded thereunto.(58)
Some thought that there were problems with the practice of the militia law, if not defects in the law itself. The governor was frequently remiss in appointing officers to take control over the colony's militia. On 4 July 1687 Lieutenant-colonel William Fitzhugh complained that in Stafford County, "I know not there being one Militia Officer in Commission in the whole County & consequently people best spared cannot be commanded into Service & appointed to guard the remotest, most suspected and dangerous places." He submitted a full list of men eligible for militia duty, but pointed out that a select militia would make more sense. At least on the frontier, where few musters could be readily scheduled, intensively training the few made more sense than half training the many. "A full number with a soldier like appearance," Fitzhugh wrote, "is far more suitable and commendable than a far greater number presenting themselves in the field with clubs and staves rather like a rabble rout than a well disciplined militia."(59)
In that year the legislature appropriated tax money for the purchase of colors, drums and trumpets for the militia. It also agreed to purchase all musicians' instruments at public expense.(60) Those exempted from militia service in the 1690s included physicians, surgeons, readers, clerks, ferrymen and persons of color.(61) By effectively disarming the poorer classes the authorities had less cause to worry about a popular uprising.(62)
In 1691 the legislature repealed all former prohibitions of, and restrictions on, the Indian trade. This act also had the effect of protecting all Amerindians from being newly enslaved after that date. Neither could they be enlisted in the militia against their will.(63)
Ranging companies were commonplace in the middle colonies by the time of the American Revolution, but were uncommon in the seventeenth century.
Virginia had formed companies of rangers by 1690, for there is a notation in the British Public Records Office dated 23 April 1692 which refers to gunpowder and other supplies having been sent to the rangers of King & Queen County, Virginia.(64) By 1701 the militia of those two counties alone numbered 132 officers and non-commissioned officers; 152 horsemen; 222 dragoons; and 415 foot soldiers. Among their arms were 575 swords, 141 pistols and 543 muskets.(65)
A new military-Indian policy proved to be more reasonable. Virginia would ally with and materially support friendly, civilized tribes who would guarantee the provincial borders. The colony built a string of forts along the frontier and recruited mounted rangers to maintain order and peace. These militia-cavalry were the equivalent of the much-vaunted New England minutemen. The system generally worked well.
On 9 December 1698 the king appointed a new executive, Lieutenant-governor Nicholson. He proved to be highly unpopular by exercising powers heretofore reserved to, or traditionally exercised by, council or legislature. Two usurpations of power were related to the militia. First, Nicholson assumed appointment of superior militia officers. Second, he was charged with "advancing men of inferior stations to the chief commands of the militia" while "all colonels, lieutenant-colonels, majors and captains . . . are put in and turned out" arbitrarily.(66) So great were the protestations that the king removed Nicholson on 15 August 1705.
One of the unique functions of the militia in the late seventeenth, and early eighteenth, centuries was the enforcement of religious participation. The militia was charged with forcing all persons, whether religious or not, to attend services at the Church of England.
By the end of the seventeenth century Virginia's needs for militia were changing. The population of the colony increased, making training and recruitment easier and expediting the creation and maintenance of militia enrollment lists. Still, increasingly poor immigrants swelled the ranks while failing, because of poverty, to arm themselves adequately. Her concern for Amerindian attacks was minimal since by 1700 the colony had subdued the stronger tribes. The Carolinas served as a successful barrier to the south, and the Appalachian mountains, with a few frontier forts, guarded her western boundary. The French did not threaten Virginia's interests for another half century. What remained of the decimated Amerindian tribes received support from the colonial government. They frequently sold their services as scouts and even warriors. The colony had to provide only money, command and a few supplemental frontiersmen to serve as scouts. In the Tuscarora War of 1712 Virginia was able to rely on the Carolina militias and Governor Spotswood's diplomacy.(67) When Colonel John Barnwell took his troops into battle in the Tuscarora War he maneuvered his mounted troops with trumpets and his foot soldiers with drums.(68)
In 1710 the Assembly authorized the lieutenant-governor, as military commander of the colony, to form several bands of rangers. Each county lieutenant "shall choose out and list eleven able-bodied men, with horses and accouterments, arms and ammunition, resideing as near as conveniently may be to that frontier station." The county lieutenant served simultaneously as county militia commander and commandant of the rangers.(69)
With the coming of the war known as Queen Anne's War, 1702-1713, authorities thought that Virginia needed an adequate militia law.
The militia law of 1705 was the first truly comprehensive enactment on the subject promulgated in the colony. The law created a general obligation to keep and bear arms in defense of country. There was a long list of exemptions to the requirement that men muster, including: millers with active mills; members of the House of Burgesses and the King's Council; slaves and imported servants; officers and men on active duty with the king's forces; the attorney general; justices of the peace; the clerks of parishes, council, counties and the general court; constables and sheriffs; ministers; schoolmasters; and overseers charged with the supervision of four or more slaves. Those exempted still had to supply their own arms and could be fined for failure to do so. Those exempted were charged with an obligation to "provide and keep at their respective places of abode a troopers horse, furniture, arms and ammunition, according to the directions of this act hereafter mentioned." They could be mustered in case of invasion or insurrection. "And in case of any rebellion or invasion [they] shall also be obliged to appear when thereunto required, and serve in such stations as are suitable for gentlemen, under the direction of the colonel or chief officer of the county where he or they shall reside, under the same penaltys as any other person or persons, who by this act are injoyned to be listed in the militia. . . ." Militiamen who failed to appear with the required arms, ammunition and accoutrements were fined 100 pounds of tobacco. The commander of each troop was required to appoint a clerk who was to record courts-martial and receive the company fines. The other major provisions of the law read as follows.
For the settling, arming and training a militia for her majestie's service, to be ready on all occasions for the defence and preservation of this her colony and dominion, be it enacted, by the governor, council, and burgesses, of this present general assembly . . . to list all male persons whatsoever, from sixteen to sixty years of age within his respective county, to serve in horse or foot, as in his discretion he shall see cause and think reasonable . . . . The colonell or chief officer of the militia of every county be required, and every of them is hereby required, as soon as conveniently may be, after the publication of this act, to make or cause to be made, a new list of all the male persons in his respective county capable by this act to serve in the militia, and to order and dispose them into troops or companys . . . . each trouper or ffoot soldier may be thereby guided to provide and furnish himself with such arms and ammunition and within such time as this act hereafter directs. . . .
That every ffoot soldier be provided with a firelock, muskett or fusee well fixed, a good sword and cartouch box, and six charges of powder, and appear constantly with the same at time and place appointed for muster and exercise, and that besides those each foot soldier have at his place of abode two pounds of powder and eight pounds of shott, and bring the same into the field with him when thereunto specially required, and that every soldier belonging to the horse be provided with a good serviceable horse, a good saddle, holsters, brest plate and crouper, a case of good pistolls well fixed, sword and double cartouch box, and twelve charges of powder, and constantly appear with the same when and where appointed to muster and exercise, and that besides those each soldier belonging to the horse have at his usuall place of abode a well fixed carabine, with belt and swivel, two pounds of powder and eight pounds of shott, and bring the same into the ffield with him, when thereunto specially required. . . . eighteen months time [shall] be given and allowed to each trouper and ffoot soldier . . . to furnish and provide himself with arms and ammunition . . . . for the encouragement of every soldier in horse or ffoot to provide and furnish himself according to this act and his security to keep his horse, arms and ammunition, when provided, . . . the musket or ffuzee, the sword, cartouch box and ammunition of every ffoot soldier, and the horse, saddle and furniture, the carbine, pistolls, sword, cartouch box and ammunition of every trooper provided and kept in pursuance of this act to appear and exercise withall be free and exempted at all times from being impressed upon any account whatsoever, and likewise from being seized or taken by any manner of distress, attachment, or writt of execution, and that every distress, seizure, attachment or execution made or served upon any of the premises, be unlawful and void . . . . the colonel or chief officer of the militia of every county once every year at least, [is to] cause a general muster and exercise of all the horse and ffoot in his county . . . [and] every captain both of horse and foot once in every three months, muster, train and exercise his troop or company, or oftener if occasion require. Provided, That no soldier in horse or foot, be fined above five times in one year for neglect in appearing. . . . all soldiers in horse and ffoot during the time they are in arms, shall observe and obediently perform the commands of their officer relating to their exercising according to the best of their skill, and that the chief officers upon the place shall and may imprison mutineers and such soldiers as do not their dutys as soldiers at the day of their musters and training, and shall and may inflict for punishment for every such offence, any mulct not exceeding fifty pounds of tobacco, or the penalty of imprisonment without bail or mainprise, not exceeding ten days.(70)
The militia act did not yield the desired results. At the end of Queen Anne's War, Governor Alexander Spotswood thought "the Virginians to be capable of being made as good a militia as any in the World, yet I do take them to be at this time the worst in the King's Dominions."(71) In 1713 Governor Spotswood called out the militia against a weak Amerindian enemy, but it failed to respond. He then attempted to recruit an army of frontiersmen, first by a call for volunteers, and then by offering substantial pay incentives.
He found that those living inland shared little concern for the lives of the frontiersmen, and that in time of Amerindian threat the frontiersmen did not want to leave their homes, farms, crops and families. In a long letter to the Board of Trade he argued that the rich had gotten off too easily in the past and that the poor had unfairly borne the burden. After a year filled with great frustration, Spotswood declared that "no Man of an Estate is under any Obligation to Muster . . . [while] even the Servants or Overseers of the Rich are likewise exempted," and thus "the whole burthen lyes upon the poorest sort of people." He thought to scrap the whole militia system. Disgusted, he proposed that the House of Burgesses rewrite the law, changing the general militia into a select one.
What Spotswood proposed would constitute a radical change. A select force of skilled, trained and disciplined militiamen would be recruited, consisting of approximately one-third of the adult, free, male population. The remaining two-thirds would be taxed to support the select militia. The citizen-soldiers would be exempted from paying the militia tax. The militia would exercise ten times a year. He proposed extending the frontier mounted-ranger principle to encompass the entire militia system. A select militia system would wholly replace the general militia and "Persons of Estates . . . would not come off so easily as they do now."(72) Disillusioned by the defeat of his plan in the legislature, Spotswood made peace with the Tuscarora, who soon moved on to become the sixth tribe associated with the League of the Iroquois.
As in New England, militia training days, especially the annual regimental muster, had become important social events in Virginia. In 1737 the militia put on a public demonstration of its skills at a county fair, passing in review before those assembled and practicing the manual of arms and other drill exercises. The militia musicians played music for the entertainment of the spectators "and gave as great Satisfaction, in general, as could be possibly expected." Refreshments, games and general socializing followed the militia's performance. The most accomplished regimental trumpeter often displayed his skills in support of a horse race. Few events were more popular among the spectators than the culminating parade in which all militia units passed in formal review before the highest ranking militia officers and various political authorities.(73)
The Lords of Trade inquired of Governor Spotswood as to the number of inhabitants and the state of the militia in 1712. Spotswood responded on 26 July 1712: "The number of freemen fit to bear arms . . . [is] 12,051 and I believe there cannot be less than an equal number of Negroes and other Servants, if it were fit to arm them upon any occasion."(74) On 16 February 1716 Governor Spotswood reported to the Lords of Trade on the numbers enrolled in the Virginia militia: "Ye number of Militia of this Colony . . . consists of about 14,000 horse and foot . . . The list of tythables . . . last year amounted to 31,658 . . . all male persons, white and black, above ye age of 16." He also reported that there were 300 firelocks in the public stores.(75)
On 7 February 1716 Spotswood proposed to the Commission of Trade and Plantations that Virginia form a "standing militia" of select membership. Membership would rotate on an annual basis, but those serving during a certain year would be in a "permanent condition of muster."
He called for 3000 foot and 1500 horsemen "at a yearly cost of 600,000 pounds of tobacco."(76) He argued his case in a letter to the Board of Trade:

What my Designs were, by the Scheme I laid before the Assembly regulating their Militia, will best appear from the Project it self, which, because it is not inserted in the Journals of the Assembly . . . I think it becomes me to employ my Thoughts in search of what may better conduce to the Welfare of the People committed to my charge, and do apprehend that I have the same Liberty of Recommending my notions to the Assembly, to be brought, (if they consent,) into a Bill, as they have of Proposing Their's to me to be pass'd, (if I assent,) into a Law; yet I offer'd no Scheme upon this Head 'till, after the House of Burgesses had Addressed,(77) expressing their Inclinations to have the Militia of this Colony under a better Regulation, and, at the same time, desiring me to propose a Method by which it might be rendered more usefull . . . my Project for the better Regulation of the Militia was no more than what is agreeable to the Constitution of Great Britain, I hope your Lordships will rather approve the same, and not judge that I have endeavoured to destroy a profitable People by desiring them to imitate the Justice and Policy of their Mother Country, where no such unequal Burthen is laid upon the poor as that of defending the Estates of the Rich, while those contribute nothing themselves; For, according to the present constitution of the Militia here, no Man of an Estate is under any Obligation to Muster, and even ye Servants or Overseers of the Rich are likewise exempted; the whole Burthen lyes upon the poorest sort of people, who are to subsist by their Labour; these are Finable if they don't provide themselves with Arms, Ammunition and Accoutrements, and appear at Muster five times in a Year; but an officer may appear without Arms, who may absent himself from Duty as often as he pleases without being liable to any Fine at all; nay, and if it be his interest to ingratiate himself with the Men, he will not command them out, and then the Soldier, not being summoned to march, is not liable to be fined any more than the Officer. Besides, when the Poorer Inhabitants are diverted from their Labour to attend at Muster, it is to no manner of purpose, their being not one Officer in the Militia of this Government that has served in any Station in the Army, nor knows how to exercise his Men when he calls them together. This is the State of the Militia under the present Law, and therefore I could not imagine that my endeavouring a Reformation thereof would be imputed to me as a Crime; That 3,000 Foot and 1,500 Horse should be more a Standing Army or a greater means for me to govern Arbitrarily than 11,000 Foot and 4,000 Horse, of which the Militia now consists, is surprizing to every Body's understanding but the Querist's own. That these 15,000 men, mustering each five times in a year, should be less burthensome than 4,500 Men, mustering ten times in a year, is no less strange, unless the Querist has found out a new kind of Arithmetick, or that he looks upon the Labour of those People who are now obliged to Muster to be of no value.
On the contrary, it is demonstrable by my Scheme that above two-thirds of the Inhabitants now listed in the Militia would have been eased from the trouble of Mustering, and consequently that the Man which stayed at home would not be charged with so much as half the pay of him that attended in the Field, which Exemption, costing less than Seven pounds of Tobacco per Muster, there is scarce one man serving in the Militia now who would not be content to pay more than Thrice as much for being to follow his own business instead of travelling 20 or 30 Miles to a Muster. And if, by one Man thus paying his poorer Neighbour for four or five days' Service in a Year, above 600,000 pounds of Tobacco, (as the Querist computes,) should be spent throughout the whole Colony; yet, far from granting that such a Charge must be to the entire Ruin of the Country, I apprehend that it must be rather a benefit to the Publick by the Circulation of Money and Credit that would be increased thereby, and this circulation would be more just and beneficial, seeing ye Payments would generally happen to be made by the Richer to the Poorer sort. It is true, that by my Scheme Persons of Estates would not come off so easily as they do now, They must have contributed to the Arming as well as Paying the Men who were to be train'd up for the defence of their Estates; And I cannot but pitty the simplicity of the Vulgar here, who, at every offer of a Governor to make their Militia usefull, (tho' the Regulation be never so much in their favour,) are set on to cry out against him as if he was to introduce a Standing Army, Arbitrary Power, burthensome Taxes, &c. And as for their Abettors, who chose rather to risk their whole Country than to be brought to Club for its defence, I wish they or their Posterity may not have cause to Repent of their present Folly When an Enemy shall happen to be at their Doors. For, tho' I will allow the Virginians to be capable of being made as good a Militia as any in the World, Yet I do take them to be at this time the worst in the King's Dominions, and I do think it's not in the power of a Governor to make them Serviceable under the present constitution of the Law. It is, indeed, a Strange Inference the Querist draws, upon the Proposal of Adjutants, that they were to huff and Bully the People; This, I am sure, was never intended as any part of their Office in my Scheme, nor am I apt to believe the House of Burgesses, to whom it was referred, would readily have given 'em such an authority. These Adjutants were proposed to be of the Inhabitants of the Country who were first to be exercised and instructed by me in Military Discipline, and afterwards to go into their respective Countys to teach the Officers and Soldiers. However, if, in the above mentioned Scheme there appeared any thing disagreeable to the Inclinations or Interest of the People, I was far from pressing them to it, Seeing it is evident from my Message to the House of Burgesses that I left it to them to adapt it to the Circumstances of the Country.(78)

The Tuscarora War of 1711-12 in North Carolina, in which at least two hundred settlers were massacred, had been won only with the assistance of the militias of South Carolina and Virginia. As the remnants of the once mighty Tuscarora began to migrate northward, Virginia thought it wise policy to exclude these savage warriors from its lands.
When two Germans, Lawson and de Graffenried, seeking land for a colony of their countrymen in western Virginia, were taken by the Tuscarora in September 1711, the governor mustered frontier ranging militia and dispatched them to the area of the New River. Alexander Spotswood attempted to forge a treaty with the Tuscarora, secured by Amerindian hostages, to guarantee the peace of his colony, but failed. Spotswood next tried to make a show of force by mustering six hundred of the best militia to be located, but the Tuscarora had seen militia in the Carolinas and were unimpressed. For his part, the governor genuinely sought an honest, just and equitable settlement and peace.

But the legislature entered the picture, thinking Governor Spotswood's response to be quite inadequate. The legislators feared the Tuscarora, who were thought still to number as many as two thousand warriors, while the province's 12,051 militiamen were scattered all over its vast territory. So they created a special regiment of rangers, empowering it to kill hostiles on sight. The legislative definition of hostiles included any Amerindian fleeing from a white man or refusing to respond to an order to halt. Fleeing braves could be killed without any fear of prosecution. Indians who were found in the forest and who could not "give a good account of themselves" might be killed, enslaved or imprisoned. Enemy Indians who were captured were enslaved and sold to the benefit of the militiaman. The law excluded these rangers from accountability and punishment for killing any presumed hostile Amerindian. When a company commander certified that a militiaman had killed an Amerindian who had previously attacked or killed any white man or woman, the man received a bonus of £20. In addition, those who served as rangers would be exempted for one year thereafter from serving in the militia or being subject to parish or county levies. The legislature denied the Tuscarora the right to live, gather firewood, hunt, or be servants within the provincial boundaries. It budgeted £20,000 to fund the militia. The act was given an effective period of only one year, but was extended at least twice.(79)

Spotswood thought the measures to be far too harsh. In reporting the overreaction of the legislature to the Board of Trade, he wrote,

So violent an humour amongst them [the Assembly] for extirpating all the Indians without distinction of Friends or Enemys that even a project I laid before them for assisting the College to support the charge of those Hostages has been thrown aside without allowing it a debate in their House tho' it was proposed on such a foot as would not have cost the country one farthing.(80)

The Tuscarora initially capitulated and accepted the legislature's conditions after learning of the extent of Virginia's response. They surrendered the hostages, children of their principal leaders, who were then to be converted to Christianity and educated. They released de Graffenried.

The legislative enactment permitted only men of the Eastern Shore, Pamunkey and Chickahominy tribes to hunt in any area east of the Shenandoah Valley. These tribes became known as the Tributary Indians, and the law afforded them certain protections and a few privileges. They alone could harvest seafish and shellfish, although they had to wait until the whites had taken all they wanted first. They were required to act as spies and report on any movements of foreign warriors on the frontier.
They were expected to join the militia in wars against the hostile tribes to the west. In 1712 the legislature expanded the list of tributary Indians to include the Nansemond, Nottoway, Maherin, Sapon, Stukanocks, Occoneechee, and Tottero tribes. These Amerindians could trade for arms, ammunition and lead.(81)

However, the scope of the conflict widened. Southern tribes who were traditional enemies of the Tuscarora entered the conflict by offering their services to North Carolina. The Cherokee, Creek, Catawba and Yammassee tribes joined with South Carolina to eliminate the Tuscarora menace. The Iroquois Confederation, or at least the Seneca tribe, threatened to join with the Tuscarora, drawing all the northern colonies into the conflict. Spotswood, if not his legislature, thought Virginia to be too divided to wage war effectively, and he wished merely to preserve the peace. But the South Carolina militia, much emboldened by the Amerindian support, fell upon the villages that were supposedly protected by treaty. The Tuscarora and their allies retaliated by massacring both settlers and the tributary Indians. The Nottoways bore the brunt of the attacks. The large combined force of Carolina militia, Virginia militia, and southern Indians engaged the Tuscarora at the Neuse River and soundly defeated them. Many captives were sold in the West Indies as slaves. The hostile remnant of the Tuscarora migrated far to the north, eventually allying with the Iroquois as the sixth confederated tribe.(82)

With the Tuscarora War finally over, Virginia again turned its eyes westward. The next arena of military action would be the rich trading area west of the Allegheny mountain range. The Virginia merchants competed with the French for control of the great Mississippi Valley. During the fifteen-hundred-mile trips, the traders were at great risk from the warring, often intoxicated, Indians who were allied with the French.(83) By a treaty signed at Albany, New York, the Iroquois were not to make war, travel, or trade south of the Potomac River or east of the Allegheny mountain range without a passport from the New York governor. Virginia's tributary Indians were to remain east of the Alleghenies and south of the Potomac River. By these means the colonists sought to establish peace, enlarge their domain, and increase their settlements.(84)

Spotswood thought the frontier inhabitants to be composed of people "of the lowest sort." Most had been transported to the colony either as indentured or convict servants "and being out of their time they settle themselves where land is to be taken up and that will produce the necessarys of Life with little Labour. It is pretty well known what morals such people bring with them. . . ." They quickly learned that an enormous profit could be earned by selling liquor to the natives "and make no scruple of first making them drunk and then cheating them of their skins, and even beating them in the bargain." Spotswood thought them incapable of dealing honestly, serving in the militia faithfully, or supporting the government fully. Their misbehavior and cheating ways prompted Indian wars.(85)

On 9 May 1723, the militia law was revised, requiring service of men between the ages of 21 and 60.
Regarding persons of color, the law was changed back to its original language, denying to any "free negro, mulatto or indian whatsoever," the right to "keep or carry any gun, powder or shot, or any club, or other weapon whatsoever, offensive or defensive" under penalty of "whipping, not to exceed 39 lashes." However, "every free Negro, Mulatto or Indian . . . listed in the Militia may be permitted to keep one gun, powder and shot." Those not enlisted were given a few months in which to dispose of any arms they possessed. Slaves and free blacks could be required to serve as musicians. In time of invasion, rebellion or insurrection, persons of color "shall be obliged to attend and march with the militia, as to do the duty of pioneers, or such other servile labor as they shall be directed to perform."(86) In case of emergency, free or enslaved blacks might be required to join the militia to do "the duty of pioneers, or other such servile labor as they shall be directed to perform."(87)

Before 1713, Virginia demanded and received two hostages from each tributary Indian village. Governor Spotswood thought that this was the best way to keep these Amerindians peaceful, while giving some of the most talented of their number an English-style education. As early as 1713 there were seventeen of these students being educated by the College of William and Mary. Shortly thereafter, a special Indian school was erected at Christanna and some additional tributary Indians were brought from reservations to be educated there. A mathematics professor, Hugh Jones, left a memoir of his experience with them.

The young Indians, procured from the tributary Indians . . . with much difficulty were formerly boarded and lodged in town, where the abundance of them used to die, either through sickness, change of provision, and way of life, . . . often for want of proper necessaries and due care of them. Those of them that have escaped well, and have been taught to read and write, have, for the most part, returned to their home. . . . A few of them have lived as servants with the English. . . . But it is a pity more care is not taken of them after they are dismissed from school. They have admirable capacities when their humors and tempers are perfectly understood.(88)

Virginia, like most colonies, used the militia as a reservoir from which troops could be recruited into select ranging forces and such regular military units as were populated by Americans. These units were not under the standard militia limitation of being confined to deployment within the colony. Virginia sought to fill such levies by advertising for recruits.

An Act for raising Levies and Recruits to serve in the present expedition against the French on the Ohio. Whereas his Majesty has been pleased to send Instructions to his Lieutenant-Governor of this Colony, to raise and levy Soldiers for carrying on the present Expedition against the French on the Ohio; and this present General Assembly being desirous, upon all Occasions, to testify their Loyalty and Duty; and taking into their Consideration that there [are] in every County and Corporation within this Colony, able-bodied Persons, fit to serve his Majesty . . . . The Justices of the Peace of every County and Corporation within this Colony . . . are appointed or impowered to solicit Men, to raise and levy such able bodied men . . . to serve his Majesty as Soldiers on the present Expedition . . . . Nothing in this Act contained shall extend to the taking or levying any Person to serve as a Soldier . . .
who is, or shall be, an indented or bought Servant, or any person under the Age of 21 years or above the Age of 65 years.(89)

Between 1727 and 1749 Governor William Gooch reported that the Virginia militia consisted of 8800 foot soldiers divided into 176 companies, and 5000 horsemen in 100 troops. The unenrolled militia consisted of all able-bodied freemen between ages 21 and 60. The enrolled militia, Gooch ordered, "will be constantly kept under regular discipline and the common men [i.e., unenrolled militia] will be improved in their manner, which want not a little pushing."(90) In 1726 King and Queen County reported its militia to include 221 horsemen and 607 foot-men.(91)

In 1728 William Byrd wrote on the recurrent problems with the Amerindians. He noted that nearly all Amerindian tribes with which Virginians came into contact were now armed with firearms, having completely abandoned their traditional weapons. Byrd wondered why they had given up their bows, for a warrior could fire most of a quiver of arrows in the time it took to reload a gun. The Amerindians could make bows and arrows themselves, and thus would not have been dependent on whites for supplies; with firearms, however, they depended upon traders and others to supply them with gunpowder, flints and lead balls. Time was on the side of the colonists because the Indians failed to maintain their arms and could not themselves repair firearms or manufacture gunpowder. Control of the Indian trade was far more important than several companies of militia.(92)

By act of 1738, the legislature mandated that the county militia officers "shall list all free male persons, above the age of one and twenty years" and train them as they saw fit. The men were to provide suitable arms at their own expense for service either as foot soldiers or cavalrymen. The law, reaffirmed by acts of 1755 and 1758, required free blacks, Indians and mulattoes to report at militia musters. Failure to appear incurred a fine of 100 pounds of tobacco. Blacks, whether enslaved or free, and Indians living within white settlements were still forbidden to own or carry firearms. They could serve as pioneers, sappers and miners, trumpeters and drummers.(93) Many blacks served as musicians in Virginia and other colonial militia units.

England declared war on Spain on 19 October 1739 in what is commonly known as the War of Jenkins' Ear. Britain assigned a quota of men to be recruited in the thirteen colonies for service in the West Indies. Lord Cathcart commanded the British troops, and the troops of the thirteen colonies came under the command of Governor Alexander Spotswood of Virginia, who was to hold the rank of major-general, quartermaster-general, chief of colonial staff and second in command of the expedition. Colonel William Blakeney was to assist Spotswood in recruiting, drafting if necessary, troops from the colonial militias. Blakeney carried with him signed blank commissions for colonial officers, along with arms and supplies. Included in Blakeney's instructions was a provision that, if Spotswood could not command the colonial troops, Virginia's Lieutenant-governor William Gooch was to serve in his stead. Spotswood died of a chill on 7 June 1740, before Blakeney arrived with his commission. Thus, responsibility for filling both the Virginia and the entire colonial assessment of troops devolved on Gooch. The American recruits became popularly known as Gooch's American Foot.(94)

The entire expedition soon devolved into a complicated mess.
Gooch's commission was unspecific as to rank, so he served as a junior colonel and was not included in the Council of War once the troops arrived in Jamaica. When the men and officers left on 25 September 1741, money was not available for transportation of all the troops, so the cost was borne through private subscription and the generosity of private ship owners. When the colonial troops arrived they found that no one had made provision for their rations or pay, so officers pooled their funds and purchased rations at exorbitant prices from British merchants. Likewise, the colonial troops were not included in orders given to the medical staff, and few, if any, physicians and surgeons had been recruited in the colonies. It was common practice for each regiment to guard its own medical facilities jealously and to refuse to treat the men of other regiments unless ordered to do so. Most colonials were impressed into sea service and were given the most degrading physical duties, such as manning bilge pumps. British naval officers moved colonial enlisted men around among the ships as they chose, often in open defiance of their officers, although this practice had long been prohibited to British soldiers. Two men were reportedly killed or maimed after being beaten or flogged according to British naval custom. Had Spotswood lived he would have been a member of the Council of War and, as a major-general, would have been privy to the most intimate circles of command. As it was, Gooch was treated as a colonel of inferior standing, ignored and excluded from command decisions. He and other colonial officers wrote memorials to the senior British officers, but these had little effect. No records are available to account for colonial casualties, but all evidence points to their having been large. Disease took a heavy toll of lives. The American regiment was disbanded on 24 October 1742, on which date there were still 7 officers and 133 enlisted men hospitalized.

The experiences of militiamen in the War of Jenkins' Ear were, to say the least, bitter. Doubtless, many colonial volunteers were of the lower class, freebooters, adventurers and just plain scoundrels, but many others were unemployed laborers and frontiersmen seeking cash to support their families or to buy a piece of property. They came back with stinging tales of army brutality and of the open disdain in which both British officers and soldiers held them. They were much disgusted with the lack of planning for their arrival, their mis-deployment once they arrived and the failure of the Council of War to integrate them into the army once their presence was made known. The British soldiers and officers, for their part, were unimpressed with the Americans, whom they saw serving only in duties for which they were ill suited and for which they had not been recruited.(95) Yet another step had been taken down the road to independence.

By 1742 the frontiersmen had pushed west of the mountains, into what is now the state of West Virginia. The first recorded clash between the Virginia provincial militia and Indians west of the Blue Ridge Mountains occurred on December 18-19, 1742. Colonel James Patton, commander of the Augusta County regiment, reported the engagement, which occurred near Balcony Falls in present-day Rockbridge County, Virginia, to Governor Gooch. Colonel Patton's first report was dated December 18. The second, dated December 23, contained a longer account but differed from the first in the number of men slain.
A parcel of Indians appear'd in an hostile manner amongst us Killing and carrying off Horses &c. Capt. John Buchanan and Capt. John McDowel came up with them this day, and sent a Man with a Signal of Peace to them, which Man they kill'd on the Spot and fir'd on our Men, which was return'd with Bravery, in about 45 Minutes the Indians fled, leaving eight or ten of their Men dead, and eleven of ours are dead, among whom is Capt. McDowel, we have also sundry wounded. Last night I had an Account of ye Behaviour of the Indians, and immediately travel'd towards them with a Party of Men, and came up within two or three hours after the Battle was over. I have summon'd all the Men in our County together in order to prevent their doing any further Damage, and to repel them force by force. We hear of many Indians on our Frontiers: the particulars of the Battle and Motions of the Enemy I have not time now to write. I am, Yr. Honor's most obedient Servt., James Patton

P.S. There are some white men (whom we believe to be French) among the Indians. Our People are uneasy but full of Spirits, and hope yr Behaviour will shew it for the future, they not being any way daunted at what has happen'd.

Augusta County Xber 22 1742

Honrd Sr.: Thirty six Indians appear'd in our County ye 5th Instant well equipp'd for War, pretending a Visit to the Catabaws, they had a Letter dated the 10th of Ober from James Silver near Harris's ferry in pensilvania directed to one Wm. Hogg a Justice o' Peace desiring him to give them a Pass to travel through Virginia to their Enemies, wch Letter they shew'd here, and it serv'd as a pass where Silver's hand was well known. Instead of going directly along the Road they visited most of our Plantations, killing our Stock, and taking Provisions by force. The 14th Instant they got into Burden's Land about 20 miles from my house, the 15th Capt. McDowel by an Express inform'd me of their insolent Behaviour as also of the uneasiness of the Neighbours, and desird my Directions, on wch I wrote to him and Capt. John Buchanan that the Law of Nature and Nations obliged us to repel an Enemy force by force, but that they were to supply those Indians wth Provisions wch they shd be paid for at the Governments Charge, at the same time to attend yr Motions until they got fairly out of our County. The 16th 17th and 18th Instant they kill'd several valuable Horses, besides carrying off many for their Luggage, which so exasperated our Men that they upbraided our two Captains with Cowardice. Never the less our Captains to prevent mischief sent two men with a White Flag the 18th Instant, desiring Peace and Friendship, to which they answer'd, "O Friends are you there, have we found you," and on that fir'd on our Flag, kill'd Capt. McDowel and six more of our Men, on which Capt. Buchanan gave the word of Command and bravely return'd ye Compliment, and stood his Ground with a very few hands (for our Men were not all come up); in 45 Minutes the Indians fled, leaving 8 of yr Men dead on the spot, amongst whom were two of their Captains. Our Capt. pursued them with only 8 Men several hundred yards, the Enemy getting into a Thicket, he return'd to the Field which he cou'd not by any means prevail on his Men to keep, and stand by him. The Night before the Engagemt I heard of the Indians Behaviour, and march'd up with 23 Men, and met our Capt. returning 14 Miles distance from where they had ingaged, to which place I went next Day and brought off our Dead being 8 in Number, Capt.
Buchanan having taken off ye Wounded the Day before. I have order'd out Patrawlers on all our Frontiers well equipped, and drafted out a certain Number of Young Men out of each Company to be in readiness to reinforce any Party or Place that first needs help, have ordered the Captains to guard their own precincts, have appointed places of Rendez-vous where each Neighbourhood may draw to on Occasion, and have call'd in the stragling Families that lived at a Distance.(96)

Under an act passed in October 1748, slaves living on plantations located on the frontier, and threatened by Amerindian attack, could obtain licensed firearms. An owner had to sign an application allowing his slave to own a gun, and he was made responsible for the slave's use of it. While this act did not formally admit slaves to membership in the militias, it did have the effect of allowing them to act as a levée en masse in defense of their own lives and the property and safety of their owners.

On 25 October 1743 France signed a treaty known as the Second Family Compact with Spain, and on 15 March 1744 joined Spain's war against England. The French made an unsuccessful assault on Annapolis Royal [Port Royal], Nova Scotia, in 1744. On 16 June 1745 Sir Peter Warren captured Fort Louisbourg. The press in New England was highly critical of Virginia for failing to support the expedition. Virginia had contributed no money and only 150 volunteer militiamen, although it was the most populous and the richest province.(97)

In the early 1750s there were many reports that the French were stirring up the Amerindians on the western frontier of the Carolinas and Virginia. Reportedly the French were building forts as bases of supply for the coming war. The Ohio Company assisted the province of Virginia in recruiting and equipping volunteers who would serve in the militia.(98) The newspapers continued to report the alleged movements and actions of the French throughout 1753 and 1754 with great anxiety. The French were alleged to have issued orders to kill or take prisoner all whites, especially traders, caught within the territory they claimed, including Ohio.(99) The press paid no attention to provincial boundaries in reporting "trouble on the frontier," and one article might contain unsubstantiated reports of Indian attacks from the Carolinas to New England.(100)

Governor Robert Dinwiddie, a man with essentially no experience in military affairs, was so anxious to enter the war and chase the French from the Ohio territory that he moved without authorization from his superiors or the legislature. He was unable to convince the House of Burgesses that Virginia had any interest in the war. He attempted to use his executive powers to order out a draft of the militia, which was essentially a paper organization.(101) The end result was unsatisfactory to everyone.

On 27 February 1752 the legislature passed a new militia act. Each county lieutenant was to enlist all able-bodied men between the ages of 18 and 60, excepting indentured servants and slaves, Amerindians and free persons of color. Within two months of the passage of the law, the militia commanders were to muster and enumerate the men and report their names to the governor. Amerindians and free and enslaved black men could still be admitted to service as musicians, or be used in servile capacities as required in emergencies.
Strangely, there was no mention of any further militia obligation for indentured servants.(102)

The French and Indian War opened with an engagement between the Virginia militia commanded by George Washington and the French in what is now western Pennsylvania, territory then claimed by Virginia. In the absence of any militia force from Pennsylvania, Virginia Governor Dinwiddie ordered his colonial militia to build fortifications at the Forks of the Ohio [present-day Pittsburgh]. The French had already erected Fort Duquesne, and Washington's militia, which had constructed Fort Necessity, clashed with a force led by Coulon de Villiers at Great Meadows on 28 May 1754. Washington had about 150 militiamen and other recruits, which brought his force to about 300. The French had about 900 men. In July 1754 Washington was forced to capitulate after losing 30 killed and 70 wounded. He optimistically reported that he had inflicted 300 casualties on the French force.(103)

The news of the beginning of hostilities was widely reported. On 14 February 1754 the Assembly appropriated £10,000 "for the encouragement and protection of western settlers." Five days later Governor Dinwiddie issued a proclamation granting land bounties, in addition to regular pay, to all militiamen who would volunteer "to expel the French and Indians and help to erect a fort at the Forks of the Monongahela." As it turned out, only about 90 men shared in grants that totaled 200,000 acres, most of it between the Kanawha and Great Sandy rivers.(104)

George Washington was ordered to go from Williamsburg to Fort Cumberland a few days later. He assumed command of some Virginia men and a company each from South Carolina and New York, and on 20 March was promoted to lieutenant-colonel. Colonel Joshua Fry recruited the first Virginia volunteer regiment at Alexandria, consisting of 75 men, of whom Fry had personally enlisted 50. The volunteers now numbered about 300. As his men marched westward, Fry died at Patterson's Creek, probably on 31 May. As we have seen, Colonel George Washington had assumed command of the Virginia volunteers upon the death of Colonel Fry. Washington's command was forced to seek terms from the French on 3 July 1754. He then returned to Mount Vernon and in October resigned his commission. Colonels William Byrd and Adam Stephen joined the Virginia volunteers as officers. Colonel James Innes assumed command at Fort Cumberland, Maryland.

In October 1754 the Assembly again authorized recruitment of volunteers, and the drafting of the unemployed, to serve against the French in the West. Justices of the peace, county lieutenants, and other officers were "to raise and levy such able-bodied men as do not follow or exercises any lawful calling or employment, or have not some other lawful and sufficient support and maintenance, to serve his Majesty as soldiers." Any soldier maimed would be supported afterward at the public expense, and the families of those killed would also receive public support.(105)

By the first of September, Dinwiddie had received numerous petitions from the southwestern frontier reporting Amerindian incursions and massacres of isolated homesteads. He proposed building several forts on Holstein's and Green Brier's rivers. On 6 October 1754 Colonel Lewis led forty or fifty men on a punitive expedition into the Indian country. Lewis remained in West Augusta until February 1755.

In 1755 Dinwiddie reported to the Lords of Trade the number of militia and inhabitants in Virginia.
There were 43,329 white heads of households and an estimated total white population of 173,316. He estimated that there were 60,078 black males of military age and a total population of 120,156 blacks in the province. That provided an estimated total population of 293,472 persons in Virginia. He numbered the militia at 36,000, with another 6000 potential militiamen exempted by various provisions of the militia law. Worse, Dinwiddie reported, "the Militia are not above one-half armed, and their Small Arms are of different Bores."(106)

On 19 February 1755 General Edward Braddock arrived at Hampton, Virginia. The next day Braddock assumed command of all the king's troops in North America. Washington accepted reappointment to his old rank and joined Braddock. Braddock formed two companies of artificers, principally skilled carpenters, to accompany his expedition to cut a road and build fortifications. He next selected a company of light horsemen and four companies of rangers to join his two Irish regiments.(107) Dinwiddie called a council of governors, which met on 14 April at Alexandria, to discuss manning, equipping, supplying, and funding Braddock's expedition. Meanwhile, Braddock's army marched to Winchester and on to Cumberland, arriving there on 10 May.

Like all southern colonies, Virginia constantly feared a slave revolt and took legislative action designed to minimize the possibility of an armed insurrection among a most numerous population. Virginia Governor Robert Dinwiddie, upon hearing of slave problems near Fort Cumberland, remarked that "The villainy of the Negroes on an emergency of government is what I always feared."(108) However, General Edward Braddock notified Dinwiddie that he intended to utilize a number of free blacks and mulattoes, although he would not necessarily arm them.(109)

On 27 June, Braddock's force was joined by Cherokee and Catawba warriors. On 9 July Braddock was surprised near Fort DuQuesne and his army was decimated. Before his defeat, Braddock had predicted that, if his army were to be destroyed, the savages would fall upon the frontier settlements with a vengeance. He also predicted that, as his army neared Fort DuQuesne, the Amerindians would circle around and attack along the frontier to the south. Dinwiddie agreed, and ordered his militia to increase the number serving watch duty. At least one-tenth of the militia was to be stationed at armed readiness at all times. Fast runners were to be stationed at all vital spots to carry messages to the various ranging stations, the militia, and the governor. Despite the many precautions, massacres occurred along the Holstein River. Dinwiddie summoned Colonel Lewis, asking him to increase the number of rangers and lookouts on the frontier.(110)

Dinwiddie's first recorded correspondence acknowledging Braddock's defeat was dated 16 July. Dinwiddie wrote to Colonel Patton in the Greenbrier area, asking that he strengthen the militia under his command and ordering him to do as much damage as possible to the marauding Amerindian forces. "I have ordered the whole militia of this dominion to be in arms," Dinwiddie wrote, "and your neighboring counties are directed to send men to your assistance." He dispatched Colonel Stewart and about fifty rangers to assist. In the New River area, between October 1754 and August 1755, 21 persons were killed, 7 wounded, and 9 taken prisoner. Among those killed were Colonel Patton and his deputy, Lieutenant Wright.
The latter was killed just three days after Braddock's defeat by Amerindians whose courage had been bolstered by news of that event. At about the same time the first reports of the terrible massacre were received along the New River. Reverend Hugh McAden, who kept a journal of his times, reported that settlers by the hundred were fleeing the frontier. Many came first to Bedford, and then moved to North Carolina. John Madison, clerk of Augusta County, reported families fleeing from the Roanoke area.(111)

On 25 July Dinwiddie wrote to Washington, informing him that he had ordered three companies of rangers to patrol the frontier. To Colonel John Buchanan he wrote a letter urging him to stand firm and reporting that his ranging company would soon be augmented by the addition of fifty rangers from Lunenburg County under Captain Nathaniel Terry and companies of forty or more rangers led by Captains Lewis, Patton, and Smith. These, Dinwiddie thought, "will be sufficient for the Protection of the Frontiers, without calling out the militia, which is not to be done till a great Extremity." Dinwiddie requested Samuel Overton to raise a company of volunteers in Hanover County and Captain John Phelps to do the same in Bedford County. All ranging companies were to "proceed with all expedition to annoy and destroy the enemy." As an incentive to enlist men and to have them fight, Dinwiddie placed a bounty of £5 on Amerindian scalps. The governor thought the incursions would end by Christmas and that peace would come to the frontier by spring.(112)

Governor Dinwiddie expressed his hope that Colonel Dunbar would not take the remnants of Braddock's army into winter camp, leaving the frontier undefended. Dinwiddie decided to pursue a multi-faceted self-help plan for defense of the colony. He would equip and support the ranging companies, improve the militia, build a select militia, continue the bounty payments for Amerindian scalps, obtain adequate firearms for his troops, and enlist the aid of friendly natives. Most parts of the policy, with the notable exception of the creation of the select militia, had proven effective in years past.

In 1755, in the wake of Braddock's defeat and the subsequent Amerindian attacks all along the frontier, Virginia's legislature passed an act placing a bounty on the scalps of the hostiles, in effect confirming the governor's earlier executive order.
Whereas, divers cruel and barbarous murders have been lately committed in the upper parts of this colony, by Indians supposed to be in the interest of the French, without any provocation from us, and contrary to the laws of nature and nations, and they still continue in skulking parties to perpetrate their barbarous and savage cruelties, in the most base and treacherous manner, surprising, torturing, killing and scalping, not only our men, who live dispersedly in the frontiers, but also their helpless wives and children, sparing neither age nor sex; for prevention of which shocking inhumanities, and for repelling such malicious and detestable enemies, be it enacted by the lieutenant-governor, council and burgesses of this present General Assembly, and it is hereby enacted by the authority of the same, that the sum of ten pounds shall be paid by the treasurer of this colony, out of the public money in his hands, to any person or persons, party or parties, either in the pay of this colony, or other the inhabitants thereof, for every male Indian enemy, above the age of twelve years, by him or them taken prisoner, killed or destroyed, within the limits of this colony, at any time within the space of two years after the end of this session of Assembly. [The act further provided that] the scalp of every Indian, so to be killed or destroyed, as aforesaid, shall be produced to the governor or commander-in-chief.(113)

On 14 July 1755 Dinwiddie commissioned William Preston captain of a ranging company, to serve until 24 June 1756. Preston had to recruit his own men and was nominally under the command of Colonel Patton. By the middle of August he had recruited only thirty men, few of whom were from Virginia.

On 14 August Dinwiddie promoted Washington to colonel of the Virginia Regiment and made him supreme commander of all provincial forces raised in defense of the frontier. Dinwiddie promised him sixteen regiments of his countrymen, with a command post to be established at Winchester and field offices at Alexandria and Fredericksburg. The office in Alexandria would be used primarily for recruitment. Meanwhile, Dinwiddie would obtain arms, ammunition, clothing, and other supplies.

Upon his arrival at Winchester, Washington found the recruits to be in "terrible bad order." No man followed orders unless the officers threatened physical punishment. When he ordered them drilled, it became immediately obvious that they had not been exercised in recent times. The distressed refugees from the frontier cowered in fear of the drunken behavior of most of the recruits. Recruiting officers had obviously given thought only to the collection of bounties and not to the creation of a formidable fighting force. Provincial officers assigned to recruiting duty showed no interest in carrying out their assignment and returned after several weeks' work without signing a single man. Many recruits were persistent idlers, some criminals, others escaped bondsmen, and still others physically unsuited for service. Many men who had been drafted from militia units chafed at the thought of discipline and complained of their bad luck in having been selected. Few showed any aptitude for, or interest in, military life. Drill sergeants complained about the "insolence" of almost all recruits. Recruits ignored frontiersmen who attempted to explain some of the critical points of Indian fighting. Officers leading men on forced marches often encountered settlers fleeing from the frontier.
These poor creatures detained the officers, telling them their tales of woe and beseeching them to return and liberate their homesteads.(114)

George Washington had a prejudice of long standing against the militia. That bias showed throughout the Revolution, but its origins were in the Seven Years War. Writing to his friend and rival Adam Stephen, later a general in the Revolutionary Army, on 18 November 1755, Washington observed that the "life of Military Discipline" required that "we enforce obedience and obedience will be expected of us." He wished that militiamen "be subject to death as in Military Law." He urged that bounties be placed on those who deserted from the militia, as was already the case for deserters from the army. But, he observed, "the Assembly will make no Alteration in the Militia Law."(115)

In reality, Washington made no greater progress with the governor than he had with the legislature. Writing from Fort Cumberland on 13 July 1756, he complained to Captain Thomas Waggener that the "Governor has ordered the Militia to be discharged as soon as harvest."(116) On 4 August 1756 he expressed his disdain for the militia to Governor Dinwiddie. Reporting on his experience in western Virginia, he pointed out that when he was ambushed "near Fort Ashby" he received little militia support. He wrote of the "dastardly behavior of the Militia who ran off without half of them having discharged their pieces."(117) He characterized the militia to Dinwiddie as "obstinate, self-willed, perverse, of little or no service to the people and very burthensome to the Country."(118)

Washington was much concerned about the sad condition of the Virginia militia well before Braddock's defeat. He first wrote to Dinwiddie on 21 August 1754, urging greater training of the colonial militia.(119) Following Braddock's defeat, George Washington, on 2 August 1755, asked help from Colin Campbell to put the militia "in proper order" to meet the expected onslaught on the frontier.(120) He began correspondence in earnest with Governor Dinwiddie asking his assistance on the same subject. On 8 October 1755, writing from Fredericksburg, Washington told the governor that "I must again take the liberty of mentioning to your honor the necessity of putting the militia under better regulation than they are at present." He urged that Virginia revise its militia law.(121) That letter was followed in rapid succession by another, dated 11 October, in which he threatened to resign his commission "unless the Assembly will enact a law to enforce military law in all its Parts."(122) He suggested to Dinwiddie that the militia law be so revised that apprehended deserters would be "immediately draughted as Soldiers into the Virginia Regiment."(123)

Washington's views were shared by others, including Governor Dinwiddie, who thought that the militia lacked both organization and proper discipline. So great was the governor's distrust of the county militias that only under the most dire circumstances would he order them out, depending instead on ranging units. Dinwiddie asked the legislature to take the necessary and proper steps to place them in readiness. In the governor's mind, it was a simple problem requiring only an equally simple remedy. The militia "had not been properly disciplined, or under proper command," and those who neglected their duty were rarely, if ever, punished.
A new militia law, requiring service under a more severe set of penalties and mandating periodic training sessions, would do much to remedy the problem. Had the settlers responded immediately by banding together, they would never have had to leave their homes and crops and would have repelled the invasions. A great body of trained militia could have spared them great losses and misfortune.(124)

The legislature responded by passing a new militia law, mandating service of all able-bodied men between the ages of 16 and 60. Exemptions to this act included most political officials, millers, farm and slave overseers, and those engaged in mining and refining lead, brass and iron. Men were required to provide at their own expense a "well fixed firelock" with a bayonet, cutting sword, and cartridge box. Those who could afford to provide the appropriate equipment could join the companies of horse. However, this service was necessarily restricted to the wealthy and their sons because of the rather considerable equipment required: a horse, good saddle, breast-plate, crupper, curb-bridle, carbine with boot, brace of pistols with holsters, double cartridge box, and a sword. The law restricted use of the militia to the province and to no more than five miles beyond habitation on the frontier.(125)

On 23 February 1756 Dinwiddie reported to the Lords of Trade on his progress with militia training. "On my arrival at my government [post] I found the militia in bad order." Although an enrollment of more than 36,000 men was reported, far fewer men were armed, and most were undisciplined and untrained in militia tactics. "The militia are not above half-armed, and their small arms [are] of different bores, making them very inconvenient in time of action." The exemptions to the Militia Act were many. There were far too many classes which "are exempted by Act of Assembly from appearing under arms." Those exempted included judges, justices of the peace, plantation overseers, millers and most politicians and public officers. Additionally, many tradesmen were exempted by virtue of their trades. Altogether, those exempted by law amounted, according to Dinwiddie's estimates, to an additional 6000 men who might have been serving in the militia. Dinwiddie then asked the legislature "to vote a general tax to purchase arms of one bore for the militia," but lamented that "I have not yet prevailed with them."(126) However, Dinwiddie, in an address to the legislature, referred most favorably to the militia: "Our militia, under God, is our chief dependence for the protection of our lives and fortunes."(127)

The select militia were specially trained citizen-soldiers who had little frontier experience and whose service was to be primarily in urban areas. On 17 September 1755 Dinwiddie issued orders for the dress of the select militia. The officers of the regular militia were to be dressed in a "suit of regimentals of good blue cloath coat to be faced and cuffed with scarlet and trimmed with silver; a scarlet waist-coat, with silver lace; blue breeches with silver-laced hat." The officers sent into the woods were also to have one set "of common soldiers' dress."(128)

Governor Dinwiddie valued George Washington's advice, and the militia colonel convinced his superior that the enlistment of friendly Amerindians was crucial to the defense of the frontier. Washington knew that the governor could exploit the ancient tribal antagonisms. There were many advantages to be gained at little cost or inconvenience.
Obviously, those natives who assisted the colonists would not be at war with them. Their contacts with other tribes would render many vital scouting and intelligence services. They were experienced trackers and woodsmen. Considerable numbers could be enlisted for trinkets worth only a few hundred pounds. Their presence might act as a shield against other, more hostile, tribes.

Virginia Governor Dinwiddie joined the growing effort to take the offensive against the French. Responding in large measure to Washington's several letters,(129) he asked the House of Burgesses to appropriate money to support the British effort against the French at Crown Point, and to supply and arm the militia in the spring of 1756.(130) North Carolina Governor Arthur Dobbs offered aid and militia supplies to Virginia.(131) The press throughout the American colonies reported Governor Dinwiddie's several calls for increased military preparedness.(132) In Williamsburg the House of Burgesses appropriated money for defense and ordered the militia to be trained and equipped.(133) New militia districts were drawn and training was to be improved.(134)

Dinwiddie decided to take the offensive in February 1756. Major Lewis was to assume command, assisted by two "old woodsmen," Captains Woodson and Smith. A supply of 150 small arms, along with gunpowder and lead, was accompanied by a much-needed surgeon, Lieutenant William Fleming. The Cherokees promised aid, and Dinwiddie enthusiastically reported to Washington that he hoped to have about 350 men in Lewis' command. The individual companies marched through the Roanoke Valley and assembled at Dunkard's Bottom on the New River at a post optimistically called Fort Frederick. A local minister named Brown appeared to bless the troops, preach a military sermon, and invoke God's protection. Almost immediately word arrived that a Shawnee raiding party had caused mischief about a day's march to the west.

Lewis had ordered a man "switched" for swearing, and the sight of such physical punishment disgusted the Cherokees, who deserted. Major Lewis and Captain Pearis followed them and persuaded them to return, but valuable time had been lost. Scouts picked up signs of the Shawnee war party along with their prisoners, but the trail was difficult and food soon ran short. Lewis ordered the men to go on half-rations. The New River at many points ran through steep mountain passes with no level land to be found on either shore. The party had to cross the river almost every mile. The men obtained canoes, but most capsized, damaging and destroying supplies. Eight of Smith's men deserted, and a part of Preston's company was compelled to continue on their mission only under the threat of being shot. Unable to contain the spreading mutiny, Major Lewis delivered an impassioned speech urging the men to perform their duty. Only about thirty men and the officers agreed to continue, while the volunteers from the companies led by Smith, Dunlap, Preston and Montgomery deserted. The remaining party pursued the natives without being able to engage them. Casualties were caused either by natural disaster or by the ambush of deserters. Disgusted and frustrated, Lewis returned and delivered his report. On 24 April, Dinwiddie sent him to Cherokee country to construct a fort, which was completed at a cost of £2000. Captain Dunlap constructed another fort at the mouth of Craig's Creek.
Captain Preston continued to march his men through the woods along the Catawba and Buffalo creeks, after which he commanded a portion of the Augusta County militia that had been mustered to defend the frontier. Frontiersmen circulated a petition, asking that a chain of new forts be constructed along the entire frontier. Meanwhile, the House of Burgesses conducted an inquiry into the conduct of the officers assigned to the Shawnee expedition, finding them all innocent.(135)

Dinwiddie proposed to the Lords of Trade that they authorize the construction of a string of forts along the Allegheny mountains, with emphasis on the mountain passes. The legislature took up the call, demanding that forts be erected from Great Capon in Hampshire County in the north to the south fork of the Mayo River in Halifax County. Many frontiersmen, upon hearing of this proposal, supported it by sending memorials and petitions to both the chief executive and the House of Burgesses.(136)

Washington entered the debate. His logic was impeccable. To have the desired effect, each fort would have to have a garrison of approximately eighty to one hundred men. At any time about forty to fifty men would have to be assigned to patrols. The chain of forts would have to be built at intervals not greater than one day's march. The colony could not afford to maintain an adequate garrison at so many places. If fewer forts were built, the Amerindians would soon learn how to circumvent them. If fewer men were assigned, the natives would isolate and destroy the smaller garrisons. If the men remained in the forts, they would serve no good purpose. Dinwiddie appointed Washington to chair a conference on this matter, to be held on 10 July 1756 at Fort Cumberland. The conferees expended most of their energy arguing over the best locations for forts.

In April 1756 the Virginia militia skirmished with a party of Amerindians led by French officers. Papers taken from a dead French officer revealed that his party, and possibly others, were to harass Virginia settlements and isolated farms along a broad line. They were to penetrate to within 50 miles of major towns and cities.(137) By May 1756 the Amerindian incursions on the frontier had cut communications among many of the frontier towns. Dinwiddie received reports that "the French and indians to the amount of some thousands have invaded our Back Settlements, committed the greatest Cruelties by murdering many of our Subjects without the least regard to age or sex and burnt a great many Houses." He found it difficult to draft men because few were willing to abandon their families to the savages. He requested cannon and small arms from the home government.(138)

Dinwiddie sent Richard Pearis to the Cherokee nation on 21 April 1756, with gifts and a letter asking them to come to the aid of the province. An Indian trader claimed that the nation owed him 2586 pounds of deer hides for trade goods delivered and that they must hunt until the debt was paid. Pearis, on his own initiative, assigned the debt to Virginia and burned the books. He was then able to recruit 82 warriors to accompany him. The House of Burgesses awarded Pearis £100 to pay his expenses and to discharge the debt.(139)

In late June, Major Lewis gathered several units of rangers, added the 82 Cherokees, and set out on another expedition against the Shawnees. The greatest difficulty Lewis encountered was finding a sufficient number of arms to equip his men.
The Shawnee spotted the movement of Lewis' troops and on 25 June fell upon the inhabitants of the Roanoke area, massacring many and destroying the only fort in the area. A survivor, John Smith, sent a memorial to the House of Burgesses in which he described the massacre and claimed that a party of eight hundred men could "easily" destroy the Shawnees and burn their principal towns.(140) On 5 May 1756 Dinwiddie issued instructions to the county lieutenants. They were to make two drafts among the militia, one being for the little army that was needed to fill the void left after the English had fled to the safety of the eastern seaboard. This group would serve garrison duty at the various forts and comprise an army to seek and destroy the enemy. The second draft was for a group of minutemen who would be available to respond to Amerindian incursions on the frontier.(141) On 24 May Dinwiddie wrote to Maryland Governor Sharpe that he was saddened by the failure of the Pennsylvania legislature to adopt a proper militia law and to offer sufficient support in arms, food and other materials of war. He was heartened by the emergence of a strong militia among the propertied class. "We have a volunteer Association of Gentlemen of this Province," he wrote, "to the number of 200." Dinwiddie was optimistic that "it will be of service in animating the lower Class of our people."(142) But there was little good news elsewhere. Washington had reported on the "dastardly behavior" of the militia serving with him. Dinwiddie accepted Washington's report on 27 May and apologized for the inability of the militia officers to control their men or instill in them the least sense of discipline. He ordered some militia home and suggested that measures be taken to create an orderly martial atmosphere.(143) Dinwiddie received a letter from Washington reporting that William Shirley had ordered him to send what remained of his meager frontier supplies, beginning with the gunpowder stored at his most important frontier post, Fort Cumberland, to New York, to be used in campaigns Shirley planned in the northeast. Dinwiddie wrote Major-General James Abercrombie, "I hope the order will be countermanded, as there are many forts on the frontiers depending on supplies."(144) On 22 July 1756 Dinwiddie expressed his disappointment in the provincial militia to ironmaster William Byrd, III. He lamented that "if the militia would only, [even] in small numbers, appear with proper spirit, the banditti of Indians would not face them."(145) In preparation for a new campaign, in July 1756 the General Assembly passed a new militia act, which differed but little from earlier laws. It required that all able-bodied white males, except indentured servants, between ages 18 and 60 be enrolled in the county militia wherein they resided. Residents of Hampshire County were also exempted from the provisions of the act, perhaps because theirs was the county closest to the scene of the action. Doubtless, these people were expected to act as a levée en masse in defense of their homes. Free blacks, Amerindians and slaves could serve as musicians and manual laborers, but could not bear arms. Since indentured servants were not mentioned in any additional provisions of the law, it may be assumed that no service of any kind was expected of them.(146) This law was reenacted through July 1773. Dinwiddie decided to build three forts in Halifax County and one in Bedford County.
He assigned various county militia units to guard duty, but there were problems almost immediately when the Augusta County militia proved to be ineffective and quite uncooperative. A settler named Stalnaker reported that the Shawnee were gathering a force to attack as far east as Winchester. Dinwiddie gave him £100 to build a fort at Draper's Meadows and told him to raise a company of volunteer militia to defend it. In August Dinwiddie again met with Washington, who advised him to build three additional forts in the frontier counties of Augusta, Bedford, and Hampshire. Manning these new forts, along with the existing ones, would severely tax the militia. Washington raised another issue: what was to be done about the ranging companies, most of whose men had deserted? Washington was still unhappy about the high desertion rate during the first Shawnee expedition. The forts, Washington reminded Dinwiddie, were useless without militia to garrison them. The forts had to gather information on the enemy Indians and send out periodic patrols. Rangers were supposedly the most skilled and highly trained troops available for frontier patrol duty and gathering intelligence. Dinwiddie suggested that his militia commander-in-chief make an inspection tour of the frontier. After attending briefly to some personal business, Washington set off on his grand tour. Most militiamen had no idea how to build a fort, and the officers had no plans for fortifications and rarely issued comprehensible orders during the construction phase. Washington was appalled that, following an Amerindian attack on the headwaters of Catawba Creek, the fort's commander, Colonel Nash, could not recruit a ranging company to track and pursue the Shawnees. A second call for militia yielded only a few officers and eight men from Bedford County. Washington moved on to another fort in Augusta County, then being built by Captain Hogg. Only eighteen men had shown up to assist, although supposedly another thirty from Lunenburg County were on their way. Still, Colonel John Buchanan assured Washington that, in an emergency, he could turn out 2000 militia on short notice. Washington concluded that about one man out of thirteen had performed his duty. He reported to Dinwiddie, "The militia are under such bad order and discipline, that they will go and come when and where they please, without regarding time, their officers, or the safety of the inhabitants."(147) The tour showed him clearly the terrible state of discipline among the militia, the poor condition of the forts, and the dispirited defense of the garrison troops. On 20 July 1756 the home government attempted to assess the true situation by requesting that the colonial governors respond to certain questions. One of the principal concerns in London was what measures the colonies were taking to provide for their own defense. The result was the Blair Report on the military preparedness of the colonies. Dinwiddie submitted his report to the king, but it largely repeated findings of which the king was already aware and which we have already discussed.(148) In Virginia the militia consisted of about 36,000 men, but was only half-armed. The guns varied enormously in quality and usefulness, and they certainly did not all fire the same ammunition, "which is inconvenient in time of action." Almost any citizen could escape Virginia militia service by paying £10.(149) About 1760 there were theoretically 43,329 citizens liable for militia service, but there were over 8,000 exemptions.
Blacks, free and enslaved, numbering 60,078, were entirely disarmed and thus were useless for militia duty.(150) At the end of the summer things were looking up. Dinwiddie managed to gather a reasonably effective militia force by early autumn 1756. He reported to Loudoun on 28 October 1756 that he now had 400 effective rangers guarding the frontier. Washington had reenforced Fort Cumberland so that it appeared sufficiently strong to "protect it from falling into the Enemy's hands." He was making some headway in recruiting men for the Royal Americans. Still, landholders resisted long-term enlistment, especially for service outside their home areas. For service in the Royal Americans Dinwiddie "applied for one-twentieth Part of our Militia, but to no effect. As they are mostly free-holders, they insist on their Privileges and can't be persuaded voluntarily to join in Arms for the Protection of their Lives, Liberties and Properties."(151) In the late autumn Major Lewis returned from Cherokee Fort, having completed his mission. On 15 November, Governor Dinwiddie called Lewis and Colonel Buchanan into conference because he was greatly concerned about the rising costs of maintaining the Augusta County militia. The six companies from Augusta cost more than all the other militia units in the field. The military advisers suggested reducing the active force to three companies of sixty men each and sending the rest home. On 23 November Lewis issued orders to Captain Preston to draft sixty men from the militia to relieve the Augusta militia at Miller's Fort and other frontier posts. Those militiamen who had been drafted complained bitterly about their misfortune, but remained on duty through January 1757. In mid-winter, Dinwiddie proposed launching a second expedition against the Shawnee. Captain Vause and Morris Griffith, who had been captured in the Roanoke Valley and escaped, proposed enlisting 250 to 300 volunteers, to be supplied with arms, ammunition and clothing, and to be given only ordinary militia pay, plunder and a £10 bounty for scalps. Captain Stalnaker would act as guide. The three companies from Augusta, Dinwiddie thought, would be sufficient to guard the frontier. Vause and Griffith thought that they would have no trouble enlisting men, if only because so many men were upset by, and many personally touched by, the earlier Amerindian massacres. Because the frontiersmen had initiated the expedition, its supporters became known as the Associators. Meanwhile, Dinwiddie attended a strategy meeting in Philadelphia, where it was decided to enlarge the punitive force to six hundred men. Upon his return he discovered a number of letters and petitions from frontiersmen advising against the expedition. The principal complaints revolved around the election and appointment of officers, the state of equipment, and the availability of commissary. Colonel Clement Read, writing from Lunenburg County, offered his opinion: "I am sorry the Expedition so well intended against the Shawnee is likely to be defeated, and all our schemes for carrying it on rendered abortive by an ill-timed jealousy and malicious insinuations."(152) News reached Dinwiddie in April that atrocities and massacres had occurred in Halifax County and the inhabitants were blaming the presumably allied Cherokees. In May Captain Stalnaker reported the passage through Halifax of at least four hundred Amerindians, including Catawbas, Tuscaroras and Cherokees.
Dinwiddie proposed the adoption of a three-part plan. First, he called for the creation of three new ranging companies under Colonel John Buchanan and Captain Hogg. Second, he ordered the drafting of one thousand militiamen into the First Virginia Regiment, with Washington as the commander-in-chief. The number of men in the pay of the colony was now two thousand, exclusive of rangers, constituting a considerable financial burden. Third, he ordered the creation of a series of block-houses and forts along the southern frontier.(153) The governor thought that conditions on the frontier had been pacified, but decided to maintain a presence. He sent a new draft of sixty militiamen to Miller's Fort to relieve Preston's first company. Under Preston's leadership, this band built new fortifications at Bull's Pasture, Fort George, and Fort Prince George. None of these outposts reported significant Amerindian activity and no new tales of massacres were heard. In June 1757 Dinwiddie received another bit of encouraging news. An Amerindian friend of the Virginians, known as Old Hop, dispatched 30 warriors to assist in repelling incursions of the French Indians near Winchester, and promised to send at least three more similar bands. It was a mixed blessing, because Dinwiddie was asked to provide each warrior with a shirt, leggings, pants, a small arm, powder, lead and blankets; the warriors also demanded match coats, which Dinwiddie could not supply. To keep their Amerindian allies loyal to the British side, the legislature appropriated £5000 to reestablish the Indian trade.(154) A few days later Dinwiddie reported to William Pitt that 220 Catawbas, Nottoways and Tuscaroras had joined his militia at Winchester and had just brought in the first scalps and a few prisoners. Another party of 70 warriors, largely Cherokees, was working with the militia toward Fort Duquesne. He was optimistic that he would soon have as many as 1500 Amerindians fighting on the British side.(155) In 1757 the Virginia legislature again revised the colony's basic militia law because "the Militia of this Colony should be well regulated and disciplined." The act required that, henceforth, all officers, superior or inferior, be residents of the county in which they commanded. It covered all able-bodied, free, white male inhabitants, ages 16 to 60, except newly imported servants and members of the council, the House of Burgesses and most colonial, county and local officials; professors and students of the College of William and Mary; overseers of four or more slaves or servants; millers and founders; persons employed in copper, tin or lead mines; and priests and ministers of the Gospel. The county and local officials and a few others exempted "shall provide Arms for the Use of the County, City or Borough, wherein they shall respectively reside." Councilors were to provide "four complete sets of Arms." The day following a general muster the county officers were to meet at the court house "and to inquire of the Age and Abilities of all Persons enlisted, and to exempt such as they shall adjudge incapable of Service." Free blacks, persons of mixed racial heritage and Amerindians who chose to enlist were to be "employed as Drummers, Trumpeters or Pioneers, or in other servile Labour." Within twelve months of receiving their appointments, county lieutenants, colonels, lieutenant-colonels and majors had to provide themselves with suitable swords.
Captains and lieutenants had to have firelocks and swords; and corporals and sergeants, swords and halberds. Every militiaman had to provide himself with a well fixed firelock, bayonet and double cartridge box, and keep in his home a pound of gunpowder and four pounds of musket balls fitted to his gun. Parents were required to provide arms for their sons, and masters for their servants. Those too poor to afford a musket were to certify the same to the county officers, and the county would then provide a musket branded with the county markings. On the death or removal of a poor militiaman, or his attainment of age 60, the musket was to be surrendered to the county lieutenant. An officer could "order all Soldiers . . . to go armed to their respective Parish Churches." "For the better training and exercising the Militia," each county commander was to "muster, train and exercise his Company . . . in the Months of March and April or September or October, yearly." Failure to appear at muster subjected a militiaman to discipline, usually a fine. The officers were to "cause such Offender to be tied Neck and Heels for any Time not exceeding five Minutes, or inflict such corporal Punishment as he shall think fit, not exceeding 20 Lashes." The law was quite specific as to the use of militia fines. The officers were to "dispose of such Fines for buying Drums and Trophies for the Use of the Colony and for supplying the Militia of said County with Arms." Officers were required to take the following oath: "I --- do swear that I will do equal Right and Justice to all Men, according to the Act of Assembly for the better regulating and disciplining the Militia." Under the militia act, county lieutenants were required to appoint one inferior officer and as many men as they thought necessary to serve as slave patrols. The law charged these patrols with visiting all "Negro Quarters and other places suspected of entertaining unlawful Assemblies of Slaves or other disorderly Persons." Slaves absent from their own masters' plantations were "to receive any Number of Lashes, not exceeding 20 on his or her bare back, well laid on." Militiamen serving on slave patrol received ten pounds of tobacco for each day's or night's service.(156) Militiamen in several cities were covered by separate acts. Citizens of Williamsburg and Norfolk were mustered and trained according to laws passed in 1736 and 1739. Exempted by these acts were sailors and masters of ships. These militias had the additional responsibility of standing seacoast watch. Cities had nightly slave patrols, which were assigned duty within the city limits and one-half mile beyond in all directions. The legislature also passed an act for "making Provision against Invasions and Insurrections," which gave the governor full authority over the militia in times of emergency. He "shall have full Power and Authority to levy, raise, arm and muster, such a Number of Forces out of the Militia of this Colony as shall be thought needful for repelling the Invasion or suppressing the Insurrection, or other Danger." Penalties for failure to muster were substantially increased, up to death or dismemberment.(157) The colonial regiment might be sent to the aid of royal forces or incorporated as part of such troops, whereas the ranging units had been developed for the protection of the frontier and were not subject to royal draft. The legislature appropriated £1500 for the support of the troops, provided the rangers remained always in the service of the colony.
The royal authorities had no choice but to accept the legislature's terms, for the crown needed men to join General Forbes' expedition against Fort Duquesne. Major Lewis joined Washington at Winchester, bringing a significant portion of the volunteer regiment with him. This left Colonels John Buchanan and William Byrd and Captains Preston, Dickinson, and Young to guard the frontier. These men built a new fort on the James River named after Francis Fauquier, who, in January 1758, succeeded Dinwiddie as the colony's governor. With the best military men serving with the First Virginia Regiment, poor leadership plagued the militia. In one major blunder, Captain Robert Wade led a party of militiamen up the New River where they encountered a band of friendly natives, fell upon them, and massacred many warriors. Colonel Byrd made a similar mistake in the late autumn.(158) Washington was still far from pleased with the progress the province was making in disciplining and training the militia. He complained to Governor Francis Fauquier of the sad state of the militia in early 1758. On 25 June 1758 Fauquier replied to Washington's letter: "I am extremely sensible of all you say in your letter of the nineteenth, instant, relative to the bad condition of the militia and wish I knew how to redress it."(159) Fauquier decided to appoint William Byrd to serve as colonel of the second Virginia regiment, although he was placed nominally under Washington's orders. Byrd sent an Indian trader named George Turner to the Cherokee camp to carry gifts to atone for Wade's and Byrd's earlier slaughter of their braves, and to recruit them into Virginia service and the assistance of Forbes' expedition against Fort Duquesne. The crown ordered, and the legislature concurred, that the volunteers in both Virginia regiments should remain in royal service until January. The volunteers complained that this extended their service several months beyond their contractual time, but appeals to patriotism, revenge, and additional pay carried the day. The Forbes expedition was a resounding success, highlighted by the capture of Fort Duquesne on 26 November 1758. Forbes suffered few casualties beyond the needless loss of about four hundred men under Majors Grant and Lewis. Washington resigned his commission and was succeeded by William Byrd as provincial commander-in-chief. The French were now gone from the Ohio territory, so Virginia turned its attention to the former French allies, the Shawnee and associated tribes, and against the troublesome part of the Cherokee nation. Based largely on captured French records, spies, and officers, Forbes estimated the following numbers of hostile Amerindians: the Delawares between the Ohio River and Lake Erie, 500 warriors; the Shawnee on the Scioto and Muskingum rivers, 500 braves; the Mingoes on the Scioto River, 60 warriors; and the Wyandots on the Miami River, 300 men at arms. Additionally, the Cherokees in western North Carolina and eastern Tennessee had 1500 to 2000 warriors.(160) In January 1759 Governor Fauquier convened a military council, including his council and Colonel Byrd, to plan the Cherokee expedition. He ordered Byrd to position his second regiment to its best advantage in anticipation of a move south. Forbes demanded that a portion of the regiment be stationed at Pittsburgh to guard against a return of the French. Since this fit well with the provincial desire to hold the western Amerindian tribes at bay, council agreed.
Fauquier ordered the militia of the counties of Frederick, Hampshire and Augusta, and the rangers in Bedford and Halifax counties, to be placed in readiness to assist in maintaining the peace on the frontier. Two hundred artisans were to be recruited with an enlistment bounty of £5 and then deployed in strengthening the fortifications. The men of the proposed expedition remained in camp, adopting a defensive, rather than offensive, posture. Three hundred of the militia and frontiersmen were enlisted "to secure and preserve the several forts and places . . . and protect the frontiers from the threatened invasion of the Cherokees and other Indians." By March 1760, additional rangers and militia were placed in readiness on the southern frontier. In May, an additional seven hundred men were recruited with a bounty of £10 and sent to the southwestern frontier for the relief of Fort Loudoun. Major Lewis assumed command of the new recruits. In the summer of 1761 Captain William Preston stationed rangers in several fortifications on the New River to protect the inhabitants from the Amerindians. He thought the situation sufficiently dangerous to muster the militia, but Governor Fauquier refused permission, telling him to solve the problems by peaceful means. Provincial expenses were high enough without having to pay more militiamen. Fauquier wrote Preston, urging him to persuade the frontiersmen to remain on their plantations. Preston was able to settle his problems with the Cherokee by having a local surveyor, Thomas Lewis, draw a boundary between their land and that of the colony. Andrew Lewis, brother of the surveyor, met the Cherokees and made a treaty that obliged them to guard the southwestern frontier. So successful was the peace treaty that surveying continued along the Roanoke River in 1762 and 1763.(161) The Cherokee expedition was finally ready to move on the enemy in April 1761. Colonel Byrd ordered the various component companies to assemble at Captain James Campbell's plantation in Roanoke. By act of the legislature of 31 March 1761, Byrd was authorized to proceed with one thousand men. The money was not forthcoming, and Byrd was unable to offer cash for the bounties or purchase supplies for his commissary under Thomas Walker. Byrd decided to recall all available men from Pittsburgh and to proceed with the five hundred men he could pay and supply. By 1 August the supplies had not arrived, nor had more men been recruited. Byrd ordered the old Cherokee Fort to be refitted, strengthened and garrisoned by sixty militia recruits. The Cherokees retreated from their northern towns, and Colonel Grant, commander of the advance forces, failed to engage them. The volunteers' enlistments were expiring, and the legislature authorized the extension of service through May 1762. Adam Stephen then assumed command with orders from council to proceed against the Cherokees. He moved his three hundred men to Great Island, built Fort Robinson thereon, and set up camp there for the winter under Captain John McNeil. The Amerindians were now some three hundred miles away from the inhabitants of the southern Virginia frontier. Declaring the frontier to be safe, and the Cherokees driven south, council disbanded the second regiment in February 1762, commending the men on their service.(162) In 1761 all British subjects "living on the western waters" were ordered to vacate their homesteads, since these lands were to be reserved to the Amerindians.
A few cabins were burned, but English authority was never firmly established on the frontier, and the area was far too vast to police effectively. The normally docile Shawnee especially resented the incursion on their lands and in 1761 effectively isolated the settlers in the Greenbrier area. In July 1763 massacres again occurred along the southwestern frontier. On 27 July, Colonel Preston reported, "Our situation at present is very different from what it was. . . . All the valleys of Roanoke River and along the waters of the Mississippi are depopulated." He sent the Bedford County militia out in pursuit of a Shawnee raiding party. His report continued: I have built a little fort in which are 87 persons, 20 of whom bear arms. We are in a pretty good posture of defence, and with the aid of God are determined to make a stand. In 5 or 6 other places in this part of the country they have fallen into the same method and with the same resolution. How long we may keep them is uncertain. No enemy have appeared here as yet. Their guns are frequently heard and their footing observed, which makes us believe they will pay us a visit. . . . We bear our misfortunes so far with fortitude and are in hopes of being relieved.(163) Governor Fauquier sent Preston a letter in which he promised to move militia from other counties to assist in the relief of Roanoke. He promoted Andrew Lewis to the rank of major, to serve under Preston, who was the county lieutenant. In October 1763 Captain William Christian led a party of Amherst County militia to the New River where they engaged a band of about twenty Amerindians. After an exchange of gunfire, and the massacre of a settler held captive, the savages fled. Otherwise, the expedition was essentially unremarkable. Lieutenant David Robinson, an officer in the Bedford County contingent of Captain Preston's rangers, led his men in yet another fruitless tour of the New River area in February 1764. William Thompson and a Captain Sayers followed Robinson and they, too, had no luck in engaging the natives. Still, Shawnee raids decimated isolated settlements and slaughtered their inhabitants. One unfortunate incident followed the killing of members of a party of Shawnee who had murdered the Cloyd family. The militiamen recovered the family's "fortune" of £137/18/0, mostly in gold and silver coins, but fought over its distribution despite the fact that they were the Cloyds' neighbors. The dispute ended only when the county court decided to grant each man thirty shillings.(164) By 1764 the Shawnee had pushed into the Shenandoah Valley as far as Staunton. The militia was ineffective in responding to these incursions. In April, Dr. William Fleming, then living in Staunton, wrote Governor Fauquier, telling him that the local militia was unequal to the task of defending the town. Fauquier dispatched 450 militia under Colonel Andrew Lewis to defend the town, but they did not encounter any hostiles and, after three months of inactivity, were discharged. Lewis retained 130 rangers in service in the area until September.(165) General Bouquet decided he must carry the war against the Shawnee into the Ohio territory. Accompanying his army were two hundred Augusta County militiamen under Captain John McClenachan. On 9 November 1764 Bouquet concluded a peace treaty with the Shawnee at Muskingum. One part of the agreement required that prisoners held in Shawnee camps be returned.
Throughout the winter and into the following spring, prisoners were delivered to Fort Pitt and other posts and placed under the care of the Virginia militia. Bouquet's peace lasted until 1774. Still, sporadic raids occurred against isolated settlements on the southwestern frontier. In May 1765 a party of Shawnee camping at John Anderson's house in the Greenbrier Valley was attacked by Augusta County militiamen in retribution for various earlier raids. Colonel Lewis and Dr. William Fleming intervened on behalf of the Indians, saving at least some of their lives. Leaders of the "Augusta boys" offered a reward of £1000 for Lewis' scalp and £500 for Fleming's. Cooler heads prevailed, the community came to its collective senses, and Lewis and Fleming emerged as heroes.(166) During Pontiac's uprising Virginia had kept over 1000 militiamen on duty on the frontier and reduced casualties significantly. Still, the natives could strike anywhere at almost any time, and no system of defense was foolproof. However, Virginia's losses were negligible compared to those of Pennsylvania. George Croghan, a well known Indian trader and diplomat to the Pennsylvania and New York tribes, estimated that Pennsylvania lost over 2000 inhabitants during that short war, and Virginia nearly as many.(167) Governor Dinwiddie thought the militia should have repelled the Amerindian incursion. General Jeffery Amherst called on Virginia to furnish volunteers and militia to garrison Fort Pitt and to carry out the reduction of the Shawnee towns in Ohio. If Virginia would supply the frontier fighters, Amherst would try to spare some regulars "to join the Virginians in offensive operations against the Shawanese Towns on the Banks of the Ohio."(168) In 1766 the Virginia legislature again revised the fundamental militia act. The act renewed the list of those exempted from militia duty, adding physicians and surgeons, Quakers and other religious dissenters, tobacco inspectors at public warehouses, and acting judges and justices of the peace. The provisions for the purchase and maintenance of militia arms were reenacted, with the penalty increased to £5. The act brought Williamsburg and Norfolk under the obligation to muster and train in March or April, and to attend a regimental muster once a year, although other provisions of the particular acts of 1736 and 1739 for these boroughs remained in force. The authorities of James City and York were clearly and legally separated from Norfolk and Williamsburg.(169) At this time the mounted militia substituted trumpets for the traditional drums used by foot soldiers.(170) In January 1774 John Murray, Earl of Dunmore (1732-1809), royal governor of Virginia,(171) seized western Pennsylvania and set up a new government in and near Pittsburgh under James Connolly. Simultaneously, he encouraged more hunters, traders and settlers to enter that region of Virginia known as Kentucky. Certain disaffected persons, at home and in England, used the colonial independence movement as an opportunity to foment trouble with the native Americans, as much to embarrass the Whigs as to advance their interests in western lands. Some believe that British Indian agents urged the Shawnee, peaceful since the treaty ten years earlier, to resist colonial encroachment by warring against the traders in their lands. Massacres of some traders precipitated a response by Virginia. Shawnee and Ottawa war leaders decided to end this encroachment upon their lands, leading to what is known as Dunmore's War.
Two columns of Virginia militia and volunteers responded. Dunmore led an expedition down the Ohio River while Colonel Andrew Lewis led a second militia column down the Great Kanawha River. Dunmore's militiamen rode their horses into battle as mounted infantry, but, having overloaded the poor animals and chosen old and otherwise useless horses to avoid having good mounts killed or wounded, the men were forced to rest the animals frequently. The columns traveled at different speeds and during rest periods lost contact with one another. The Amerindians used that opportunity to divide and conquer, and so launched their attack during a rest period. What should have been a resounding colonial victory turned into the indecisive Battle of Point Pleasant on 10 October 1774. The standard newspaper account exaggerated the size of the enemy force and underestimated the size of the militia force by claiming that 600 Virginia militia and volunteers had fought against 900 Amerindians at the mouth of the Kanawha River and won a "resounding victory."(172) Most objective accounts conclude that neither side gained an advantage. In reality, the colonial militia outnumbered the Amerindians 1000 men to about 300. The war ended with the Amerindians yielding hunting rights in Kentucky and guaranteeing free passage on the Ohio River in the Treaty of Camp Charlotte.(173) Still, like Dinwiddie's war in 1754, Dunmore's war was a failure. Like Dinwiddie, Dunmore had antagonized the House of Burgesses. The time was already past in 1754, let alone in 1774, when a governor could order a major deployment of the militia without first receiving legislative acquiescence. The legislature and some local officials, not the governor, controlled the militia. Both wars were very unpopular, and the general population was displeased with both the cost and the result. On 24 December 1774 Governor Dunmore wrote to Lord Dartmouth, "every county is now arming a company of men whom they call an independent company."(174) Most counties had already formed, or were in the process of forming, such independent companies. By the end of the year at least six companies were fully formed, armed and prepared for action.(175) Patrick Henry(176) assumed political leadership, realizing that a number of independent volunteer companies, formed in and by various counties, could not provide the force necessary for a sustained war. He saw these companies as barriers against the Amerindians and as a reservoir of trained or semi-trained manpower from which a regular force might draw. He had not yet considered the possibility of enlistment in a national regular army, but was bound to the concept of a statewide militia under state command. Henry's position at the end of 1774 may be summed up by the following resolution, which he offered at the First Virginia Convention. Resolved, That a well regulated militia, Composed of gentlemen and yeomen, is the natural strength and only security of a free government; that such a militia in this colony would for ever render it unnecessary for the mother country to keep among us, for the purpose of our defence, any standing army of mercenary soldiers always subversive of the quiet, and dangerous to the liberties of the people, and would obviate the pretext of taxing us for their support.
That the establishment of such a militia is, at this time, peculiarly necessary, by the state of our laws for the protection and defence of the country, some of which have already expired, and others will shortly be so; and that the known remissness of the government in calling us together in legislative capacity, renders it too insecure, in this time of danger and distress, to rely that opportunity will be given of renewing them, in general assembly, or making any provision to secure our inestimable rights and liberties, from those further violations with which they are threatened. Resolved, therefore, That this colony be immediately put into a state of defence, and that there be a committee to prepare a plan for embodying, arming, and disciplining such a number of men, as may be sufficient for that purpose.(177) The Virginia militia filled a number of vital roles during the Revolution, supporting the patriot cause in both the north and south. In March 1775 the Virginia Convention met in Old St. John's Church on a hill above the falls of the James River in Richmond. The delegates were seeking privacy and distance from royalist Governor Dunmore. Patrick Henry moved that the "Colony be immediately put in a state of defense," meaning that the militia be formed, disciplined and armed. Opposed by even some of the patriots, Henry then delivered his famous "Give me liberty or give me death" speech, which was more than sufficient to carry the motion.(178) Henry's speech at the Convention was based on the assumption that a simple militia would be insufficient because a prolonged war was inevitable, and that a real, substantial force, based on, but separate from, the general militia, would be absolutely necessary for the defense of Virginia. Henry argued that a mere show of force in the form of a general and broad muster of the militia would accomplish nothing because the British authorities would not be intimidated. His purpose was to convince the assembly that they should abandon all hopes of a peaceful reconciliation and prepare for a prolonged war.(179) After considerable debate Henry introduced a second resolution which called for placing the colony in a full state of military preparedness. The colony was to call into service a body of men sufficient to defend it from both the English forces along the coast and the Amerindians whom the British might seduce into making raids along the frontier. The men were to be completely trained in military arts, fully armed and subjected to standard military discipline. This would become the select militia of yeomen and gentlemen of which Henry had spoken earlier in his first motion. Richard Henry Lee, who had spoken in favor of Henry's position, seconded the motion. Thomas Jefferson also rose in support of Henry's plan, as did the distinguished jurist St. George Tucker and John Taylor of Caroline County. Thomas Nelson, one of the wealthiest men in Virginia, declared that, should the British land troops in his county, he would summon his militia to resist whether he had authorization from the Convention or not. Other militia officers rose to second Nelson's position. Washington, perhaps recalling his distaste for militia, said nothing.(180) The Williamsburg "gunpowder affair" became for Virginia what the British attempt to confiscate the same commodity at Lexington and Concord was for the militia of Massachusetts.
Patrick Henry had demanded that Dunmore release the colony's supply of gunpowder at the Williamsburg Magazine for militia use. Dunmore cited the order of 19 October 1774 from Lord Dartmouth which forbade the export of gunpowder and arms to the American colonies. The royal governor interpreted the order as covering the distribution of arms and powder already in the colonies, stored in the royal armories and magazines. Henry argued that the arms and gunpowder in question had been sent for militia use and that the royal authorities had simply neglected to distribute them to the county militias. On the night of 20 April 1775 Dunmore removed 20 kegs of gunpowder from the public magazine and had them loaded aboard the schooner Magdalen. As word of this confiscation circulated, many Virginians talked of open rebellion. Council, on Henry's recommendation, addressed a communication to Dunmore, pointing out that the powder had been stored for the protection and security of the colony and that it must be restored to it. Dunmore claimed that the mere presence of the gunpowder among the militia constituted a call to arms and an open invitation to the more rebellious leaders of the militia to actually rebel. He would release the powder immediately upon hearing of any Amerindian incursion, but, for the time being, it would remain with Captain Collins aboard the Magdalen. Henry summoned the militia. A significant body of armed men gathered at Fredericksburg. Volunteers arrived from Hanover and New Castle. With the arrival of each new militia company, the commanders sent messages to Williamsburg, boasting of their gathering strength. By 26 April, the governor saw that his position was untenable. Dunmore acquiesced to Henry's demand by pledging his honor to return the powder, but he considered this the first act of rebellion in his colony.(181) Honoring his pledge to return the confiscated gunpowder proved to be Dunmore's last act as the generally recognized royal political authority in Virginia. On 29 April the Virginia Gazette carried news of the events at Lexington and Concord. Patrick Henry used this news as an occasion to spur the patriots on to greater action. To him, after the "robbery" of the gunpowder, "the next step will be to disarm them, and they will then be ready to arms to defend themselves."(182) Even after the return of the powder, Dunmore had planned to remain in his mansion. He took the precautionary step of ordering that it be fortified, even to the point of bringing in artillery, but he was soon intimidated by the gathering militia from the countryside. On the morning of 8 June 1775, Dunmore abandoned Williamsburg, escaped to Yorktown and boarded the man-of-war Fowey.(183) There he issued his final report on the gunpowder affair.
I have been informed, from undoubted authority, that a certain Patrick Henry, of the county of Hanover, and a number of his deluded followers, have taken up arms and styling themselves an Independent Company, have marched out of their County, encamped, and put themselves in a posture for war, and have written and dispatched letters to divers parts of the Country, exciting the people to join in these outrageous and rebellious practices, to the great terror of all His Majesty's faithful subjects, and in open defiance of law and government; and have committed other acts of violence, particularly in extorting from His Majesty's Receiver-General the sum of Three hundred and Thirty Pounds, under pretence of replacing the Powder I thought proper to order from the Magazine; whence it undeniably appears that there is no longer the least security for the life or property of any man: I have thought proper, with the advice of His Majesty's Council, and in His Majesty's name, to issue this my Proclamation . . . . Reaction in Virginia to reports of the events of Lexington and Concord was much the same as among the people of the other states. One American living on the Rappahannock River wrote to a London newspaper that "It would really surprise you to see the preparations [we are] making for our defence, all persons arming themselves, and independent companies, from 100 to 150 men in every county of Virginia, well equipped and daily endeavouring to instruct themselves in the art of war." He claimed that "in a few days an army of at least 7 or 8 thousand well disciplined men" who were "well armed" would "be together for the protection of this country."(184) Patrick Henry addressed the militia at New Castle, claiming that the British Ministry had created a plan "to reduce the colonies to subjugation, by robbing them of the means of defending their rights."(185) Another correspondent from Virginia reported to a London newspaper, We shall therefore in a few weeks have about 8000 volunteers (about 1500 of which are horse) all completely equipped at their own expence, and you may depend are as ready to face death in defence of their civil and religious liberty as any men under heaven. These volunteers are but a small part of our militia; we have in the whole about 100,000 men. The New England provinces have at this day 50,000 of as well trained soldiers as any in Europe, ready to take the field at a day's warning; it is as much as the more prudent and moderate among them can do, to prevent the more violent from crushing General Gage's little army. But I still hope there is justice and humanity, wisdom and sound policy, sufficient in the British nation to prevent the fatal consequences that must inevitably follow the attempting to force by violence the tyrannical acts of which we complain. It must involve you in utter ruin, and us in great calamities, which I pray heaven to avert, and that we may once more shake hands in cordial affection as we have hitherto done, and as brethren ought ever to do. . . . Messrs. Hancock and Adams passed through this city a few days ago . . . about 1000 of our inhabitants went out to meet them, under arms . . . . By last accounts from Boston, there were before the town 15,000 or 20,000 brave fellows to defend their country, in high spirits . . . . Should the King's troops attack, the inhabitants will be joined with 70,000 or 80,000 men at very short notice. . . .(186)
In June 1775 Lord Dunmore abandoned his capital, taking refuge aboard a British man-of-war, and went through the pretense of asserting royal authority. The colonists thereafter were to charge that he conducted warfare by plundering isolated plantations, abusing women, abducting children, stealing slaves, and burning wharves. In October he was repulsed at Hampton and in December defeated at Norfolk. The royal government was dissolved. On New Year's Day 1776, Dunmore made his last raid and then sailed away to England.(187) A convention met at Richmond with the charge to reconstitute government. The interim government ordered the formation of two regiments of the Northern Continental Line under the command of George Washington and two bodies of militia: the regular militia and a body of special minutemen to be organized along the lines of the minutemen of New England. By November 1775 Accomack County reported that "almost to a man" the whole body of freemen of that and surrounding counties were "ready to embody themselves as a militia."(188) The new Virginia militia act, passed in July 1775, came as a legal response to the spontaneous popular reaction to the massacre of the patriots in Massachusetts. The act created two classes of militia, the regular companies and special companies of minute-men. The militia law provided that all free males between the ages of sixteen and fifty, with certain exceptions, should be enrolled. These militia were organized into companies of from thirty-two to sixty-eight men, and the companies were organized into regiments. The Governor appointed the regimental officers. All the militia in a county were under an officer called the County Lieutenant, who held the rank of colonel and who, on taking the field, outranked all colonels commanding regiments.(189) In the winter of 1775-76 Virginia organized Minute Men. The State was divided into districts, each furnishing a battalion. Selected officers were appointed who secured their men from the State militia. They were required to have extra drills and were better clothed and armed than the militia. They were subject to call at any time.(190) The militia act of July 1775 created a specially trained select militia, the Minutemen. Regarding the minutemen, the Convention resolved, That the minute-men in each respective district, so soon as they are enlisted and approved . . . shall be embodied and formed into separate battalions, and shall be kept in training under their adjutant for 20 successive days, at such convenient place as shall be appointed by the committee of deputies in each district; and after performing such battalion duty, the several companies of each battalion shall, in their respective counties, be mustered, and continue to exercise four successive days in each month, except December, January and February . . . care being taken that such appointments do not interfere with battalion duty. . . . and be it further ordained, that, in order to render them the more skillful and expert in military exercise and discipline, the several companies of minute-men shall twice in every year, after the exercise of 20 days, be again embodied and formed into distinct battalions within their districts, and shall at each meeting, continue in regular service and training for 12 successive days . . . .
And as well for the ease of the minute-men, as that they may be returned in regular rotation to the bodies of their respective militias, be it further ordained, that after serving 12 months, 16 minute-men shall be discharged from each company . . . and the like number at the end of every year, beginning with those who stand first on the roll, and who first enlisted; and if those who stand first should choose to continue in the service, taking the next in succession being desirous of being discharged, and so from time to time proceeding in regular progression. . . . The minute-men shall not be under the command of the militia officers . . . (191) The minutemen were a select militia assigned the defense of the state and especially the frontiers. The minutemen were separate in the chain of command from the great militia, and one set of officers had authority over the other organization only when the two were expressly mustered in joint action. The minute-men in each respective district, so soon as they are enlisted and approved, as before directed, shall be embodied, and formed into separate battalions, and shall be kept in training under their adjutant for 20 successive days, at such convenient place as shall be appointed . . . and after performing such battalion duty, the several companies of each battalion, shall in their respective counties be mustered, and continue to exercise four successive days in each month, except in December, January and February. . . . in order to render them more skillful and expert in military exercise and discipline, the several companies of minute-men shall twice in each year, after the exercise of 20 days, be again embodied and formed again into distinct battalions within their districts, and shall in each meeting continue in regular service and training. . . but the minute-men shall not be under the command of the militia officers, nor the militia under the command of minute officers, unless drawn out upon duty together.(192) The minutemen were to be rotated so that no individual was unduly burdened. As well for the ease of the minute-men, as that they may be returned in regular rotation to the bodies of their respective militias, be it further ordained, after serving 12 months, 16 minute-men shall be discharged from each company . . . and the like number at the end of every year, beginning with those who stand first on the roll, and who were first enlisted; and if those who stand first should choose to continue in service, taking the next in succession desirous of being discharged, and so from time to time proceeding in regular progression.(193) Robert Carter Nicholas, one of Virginia's delegates to the Continental Congress, warned the state legislature of the limitations of the militia: "Neither militia nor Minute-men will do except for sudden and expeditious service."(194) One of the first actions assigned to the minutemen was the capture of Lord Dunmore, last royal governor of Virginia. Dunmore had recruited a band of loyalists and escaped servants and slaves and had erected fortifications on Gwynn's Island, Matthews County. Scotch merchant James Parker, writing from Norfolk, Virginia, to a friend in Edinburgh, Scotland, on 12 June 1775, observed, You will see the Governor [Lord Dunmore] and his family again. I do not think his lady will return to Williamsburg. Tis said he will, provided the shirtmen are sent away. These shirtmen of Virginia uniform are dressed with an Oznaburg shirt over their clothes, a belt round them with a Tommyhawk or Scalping Knife.
They look like a band of assassins and it is my opinion, if they fight at all, it will be in that way.(195) Newly elected Governor Patrick Henry resolved to end this threat to the security of the state. Dunmore referred to the minutemen as "shirtmen" on account of their habit of wearing buckskin or homespun shirts instead of regular uniforms.(196) Dunmore was aware of the deadly accuracy of the rifle-equipped shirtmen, having seen them in action during Dunmore's War just two years earlier. Moreover, at the Battle of Great Bridge, on 9 December 1775, the shirtmen killed or mortally wounded 62 British troops with their rifle fire, while losing no men of their own. The British commander, Captain Fordyce, fell early in the engagement, his body pierced by 14 rifle shots.(197) After warning his command that the shirtmen would surely scalp all survivors, as well as all dead loyalists, Dunmore fled, boarding a small man-of-war in the James River and leaving the New World forever behind. The minutemen found Gwynn's Island deserted.(198) An American correspondent wrote to a London newspaper in early spring 1776, reporting that "nothing has happened in Virginia since the entire destruction of Norfolk." However, he optimistically reported that the state "by the month of April will have 30,000 or 40,000 men to take the field." Many were common militia, but "amongst these are a great number of riflemen."(199) One historian claimed that, at the outbreak of the war, approximately 45,000 men were eligible for service in Virginia and that, during the entire war, that number was never less than 40,000. However, only about one-quarter of that number was ever engaged in any significant service. When the war began, large numbers of militiamen were still in Dunmore's service on the frontier. Later, others served in the expedition against the Cherokee nation in the west, and still others had been sent to the aid of North Carolina in its Cherokee War.(200) On 14 August the Virginia Convention received news that Dunmore was planning an attack upon Williamsburg, with the intention of capturing as many of the rebel leaders as possible. The Convention requested the Committee of Safety to enlist volunteers to protect the city, and to call out the militia. The legislature acted quickly, calling out 8180 militiamen to be equipped as minutemen, and "the balance of the militia were ordered to be armed, equipped and trained, so as to be ready for service." The legislature also adopted a manual of arms and militia training. It established an arsenal at Fredericksburg to manufacture muskets and other small arms. To pay for the various expenses of defense, the legislature issued £350,000 in paper money, along with an annual tax to redeem the issue.(201) In December 1775 the Virginia Convention authorized the formation of six additional regiments of the Continental Line, with each regiment consisting of ten companies of 68 men each. Drafts from the militia rolls were instituted. Having excluded blacks, whether free or slave, and indentured servants from militia service, the Virginia Convention, in the summer of 1776, enlisted two hundred Amerindians in the state militia.(202) On 10 March 1776 Virginia dispatched two regiments of 650 men each to assist North Carolina, primarily against Tarleton's Loyalist forces. During the first three years of the war, England held no part of Virginia. The best the English could do was to attempt to wreak havoc and hope that they could lower provincial morale.
The militia served three purposes in the early years. First, the general militia was regarded as a reservoir from which the Continental Line could draw replacements. Second, along the seacoast the urban militia served to protect cities in case of an invasion. The tidewater militia was especially trained for this service. Third, the militia from the Blue Ridge Mountains and westward fought in major engagements with the natives. Many were enrolled in the frontier rangers. British agents and disgruntled adventurers had stirred up the natives, who were still resentful over their defeat at Point Pleasant, supplied them with guns, and urged them to war by granting them gifts, money, and liquor. As John Page advised Jefferson, "have the militia completely armed and well trained as the time they can spare will admit of, and [then] . . . make draughts of it when men are wanted."(203) All militiamen were required to take the following oath: I, ------, do swear that I will be faithful and true to the colony and dominion of Virginia; and that I will serve the same to the utmost of my power, in defence of the just rights of America, against all enemies whatsoever. The Third Virginia Convention passed a new militia act. Because of the "present danger, it is adjudged necessary" that all free, able-bodied males between the ages of 18 and 50 be enrolled in the general militia. Companies of not less than 32, nor more than 68, members were to be formed in all counties of the state. The militia law required that "every militia man should furnish himself with a good rifle, or common firelock, tomahawk, bayonet or scalping knife, pouch or cartouch box, and three charges of powder and ball." Drills were to be held semi-weekly, along with two general county musters, to be held in April and October, with the minutemen providing training. The act provided for the exemption of two groups of religious dissenters, the Society of Friends and the Mennonites. It also exempted bound apprentices, indentured servants and several classes of professionals. Clergy of the established church and of those churches in communion with it were exempted. Those engaged in various trades adjudged to be vital to the war effort were also exempted.(204) Shortages of manpower required that the legislature remove certain exemptions. On 15 June 1776 the legislature passed an ordinance "to raise and embody a sufficient force for the defense and protection of the Colony," and overseers of plantations and millers on the eastern shore lost their immunity from militia duty. On 5 July 1776 the revocation of the exemption of millers was extended to the whole state.(205) On 24 June the Convention voted to "let the present Militia officers be chosen annually . . . by joint vote of both houses of the assembly." The governor was empowered to fill vacancies with the advice of his privy council.(206) On 29 June the Convention voted to allow the governor to "embody the Militia with the advice of Privy Council and when embodied shall alone have the direction of the Militia."(207) With Patrick Henry elected commander of the select Virginia militia, men began to appear in increasingly large numbers. Two regiments, destined to become continental regulars, soon formed. Henry described them as appearing in various garb, from ancient militia uniforms to buckskins to recently sewn uniforms, although most were dressed in "green hunting shirts." Many had the words "Liberty or Death" inscribed somewhere on their clothing.
Hats or caps were trimmed with buck-tails, and nearly all carried scalping knives or tomahawks. Most carried their own fowling pieces, which fired the widest possible assortment of round balls. Some carried flags or banners with the coiled rattlesnake motif and the words, "Don't tread on me."(208) The militiamen were organized in units of 76 men with four officers with halberds, one fifer, one drummer and one color bearer. The public treasury provided the fifes, drums, halberds and flags. By the time six regiments had been raised, the legislature authorized the creation of a post of drum major.(209) Philip Fithian described a militia muster in late 1775 or early 1776.

The Drums beats & the Inhabitants of this Village muster each Morning at 5 o'clock . . . . Mars, the great God of Battle is now honoured in every part of this spacious Colony, but here every Presence is warlike, every sound is martial! Drums beating, Fifes & Bag-Pipes playing & only sonorous & heroic Tunes -- Every Man has a Hunting Shirt, which is the Uniform of each Company.(210)

The select militia was given special training and organization. The state was divided into sixteen military districts and each district was to recruit 500 minutemen, to be divided into ten companies of 50 men each. Only "expert riflemen" need apply for membership in these select units, and the members were ordered to muster and train for 20 days in the month following organization, and then four days each month thereafter. Additionally, they would superintend training of the general militia at annual spring and fall musters, each of which was to last 12 days.(211) The Baptists approached the Virginia legislature, asking that their clergy be given the privilege of preaching among the troops. Many of their adherents had already enlisted in the patriot cause. The Church of England was the established denomination, but the legislature thought that since the Baptists had pledged loyalty to the patriot cause, the privileged status of one church should not present an obstacle, and thus granted permission. The privilege was then granted to all Protestant sects willing to support the cause. The Baptist pulpit, in repayment, became politicized in support of the cause of liberty. Colonel William Woodford, a Virginia militia commander and a close friend of George Washington, had recently been commissioned and wrote to Washington for advice on selecting a manual for the discipline of his troops. On November 10, 1775, Washington, writing from Cambridge, offered his opinion on military discipline. He provided Woodford with a list of five military books for study. First was Sir Humphrey Bland's A Treatise of Military Discipline,(212) a book Washington noted as "standing foremost." Next he named An Essay on the Art of War, written by Count Turpin de Crisse and recommended to Washington by Forbes.(213) Third was Instructions for Officers.(214) The last two books were The Partisan(215) and Young's Essays on the Command of Small Detachments.(216) One cannot but be struck with the excellence of this selection. The books deal largely with infantry, as Washington was writing to an infantry colonel. Two of the books, Bland's and Turpin's, were respectively the best military books of the period published in England and France. The Partisan covered the use and deployment of light troops and partisans, today's guerrillas, and was thus especially useful to militia commanders.
Thomas Simes had published The Military Guide for Young Officers in Philadelphia in 1776, but this book was merely a reprint of an older English edition.(217) When Von Steuben arrived at Valley Forge he found that only two military books were in use, those of Bland and Simes.(218) These books constituted the substance of military knowledge upon which officers both of the regular army and the militia in all the states drew during the Revolution.(219) In the winter of 1775-76 Dunmore gathered a band of loyalists to supplement his army of two companies of the Fourteenth Regiment and moved through Norfolk and Princess Anne County. At the east branch of the Elizabeth River at Kemp's Landing, Dunmore defeated the Princess Anne militia under Colonel Hutchings. Colonel Woodford gathered a few of the fledgling Continentals and a number of militia and pursued Dunmore's force. At Great Bridge on the Elizabeth River, on 9 December 1775, Woodford met and defeated Dunmore's 200 regulars and 300 loyalists and escaped black slaves, inflicting considerable losses on Dunmore while suffering only one man wounded. Woodford reported that "the deadly rifles of Captain Green's Culpeper [militia] men, every one of them a marksman, contributed greatly to this victory, as they had at Hampton." Dunmore retreated to the safety of his ships at Norfolk, leaving the slaves to make their own way out.(220) Patrick Henry ordered several companies of minute-men to encamp around Williamsburg to protect the city and its officials. The Committee of Safety ordered out several more companies of minute-men to guard other points, such as Burwell's Ferry, Jamestown, Hampton and York-town, where Dunmore might land his mixed force. The Virginia Convention met on 1 December 1775 at Richmond and soon adjourned to Williamsburg, where it remained in session until 20 January 1776. In cooperation with the Committee of Safety, it created seven additional regiments of regulars and called out a company of 500 riflemen. The latter were deployed in the counties of Accomack and Northampton to protect them from Dunmore's force. Colonel Woodford, as ranking military officer in the state,(221) pressed the Convention to supply better arms of standard military caliber. As many of the men, both regulars and militia, were armed with fowling pieces of various calibers, each man had to mold his own bullets. Moreover, fowlers were wholly unsuited to the use of bayonets. Woodford also complained of the poor quality of the arms received from the former colonial stores. Had "better arms been furnished in time for this detachment, they might have prevented much trouble and great expense to this Colony. Most of those arms I received the other day from Williamsburg are rather to be considered as lumber, than fit to be put in men's hands. . . ."(222) The Convention considered a number of principles upon which the new state should be based. The thirteenth point concerned the militia. It declared that "a trained militia is the proper defense of a free state, that standing armies in times of peace are dangerous to liberty, and that the military must be in subordination to the civil power." The Convention made reference to the provisions in the English Bill of Rights that Protestants should be allowed to keep and bear arms and that there should be no standing army in peacetime without the consent of Parliament.
The delegates agreed that these two sections were the natural conclusions of historical experience and of a true democratic tradition.(223) Amerindian problems beset the newly independent state almost immediately. Urged on by royal emissaries and white renegades, the native aborigines carried out raids against isolated settlements along the Holston and Ohio rivers and in Kentucky. The Cherokees along the Holston were especially active, so a large militia force was created, made up largely of frontiersmen who were experienced in Indian fighting. The urban militia supplemented the backwoodsmen by occupying the few towns and forts in the path of the marauders. The militias from the counties of the Shenandoah Valley were able to withstand the Amerindian incursions from the north. The Virginia Bill of Rights of 1776 provided "that a well-regulated militia, composed of the body of the people, trained to arms, is the proper, natural and safe defence of a free State." It also rejected standing armies and ordered the subordination of the military to civil authority.(224) The Virginia Convention of 1776 put Thomas Jefferson to work on a draft of a new constitution for the newly independent state. His first draft of the fundamental document contained a provision for the militia and the right to bear arms based on classical political thought, which tied human freedom to the right to keep and bear arms. The following shows Jefferson's original draft; the passages he struck out are given here in brackets.

No freeman shall ever be debarred the use of arms. [No souldier shall be capable of continuing in] there shall be no standing army but in [the time of peace] actual war(225)

After the delegates considered and debated his initial draft, Jefferson made the following changes in his second draft. Deletions are again shown in brackets.

No freeman shall be debarred the use of arms [within his own lands or tenements] There shall be no standing army but in time of actual war(226)

His third draft of this provision read exactly as the second had read.(227) The Constitution of 1776 also provided that the governor direct and command the militia and recommend commissions to the legislature. Militia officers commissioned previously were to be continued in grade provided only that they take the oath of loyalty.(228) In the summer of 1776 the citizens of Kentucky met at Harrodsburg and on 6 June 1776 appointed deputies to represent them at Williamsburg. They wished to secure Virginia citizenship for themselves and to associate their frontier militia with the state militia. The Harrodsburg gathering appointed Gabriel Jones and George Rogers Clark (1752-1818) to represent their interests and sent them on the 500-mile journey to Williamsburg.(229) By the time they reached Botetourt County, they learned that the Convention had adjourned. Jones joined Colonel Christian's expedition against the Cherokees while Clark continued on his journey. He met with Patrick Henry at his home and received a cordial reception. Henry recommended both the incorporation of the Kentucky militia and material support, especially 500 pounds of gunpowder. On 23 August the Convention provided the gunpowder, sending it to Pittsburgh and then down the Ohio River.(230) This secured the loyalty of Kentucky to Virginia and drew its militia into the state's military organization. On 29 May 1776 the Virginia legislature decided to create three companies of Minute Men, to be stationed on the frontier.
The main problem attending the deployment of these ranging units was securing rifles wherewith to arm them.(231) Rangers were to be skilled marksmen and thus were to be armed with rifled arms instead of muskets. Unlike muskets, rifles had not been standardized, but the legislature deemed uniformity of caliber highly desirable. Rifles also required a greater effort and investment of time to manufacture. The law creating ranging units was strengthened to provide "the better defence of the frontiers of this Colony." Funds were appropriated for implementation of the Minute Men in June 1776.(232) On 20 June 1776 the legislature authorized the formation of a company of rangers in Fincastle, Botetourt and Augusta counties. Ranging companies were to be drawn from the frontier counties because the men there were accustomed to the Amerindian way of fighting. Urban militia were essentially useless in the wilderness, just as the frontiersmen's special talents were wasted in urban settlements. Rangers were ordered to assist the militia in other counties as needed; and in return they could ask for assistance from other counties.(233) General Washington, writing from New York, supported the formation of ranging companies on the frontiers, believing this to be an effective use of frontiersmen. "With respect to [the use of] militia in the management of Indian affairs, I am fully persuaded that the inhabitants of the frontier counties in your colony are, from inclination as well as ability, particularly adapted to that kind of warfare."(234) In mid-June the Fifth Virginia Convention considered the revisions of the militia law to make it better meet the needs of a wartime state.(235) The Convention and the Governor then turned their attention to arming the militia. By the time of the Revolution, arms were extremely scarce among the population. One of the primary problems confronting the militia was replenishment of the supply of the once legally mandated privately owned and supplied firearms. The state sent impressment gangs through the countryside to confiscate firearms (although these were eventually paid for) wherewith to arm both the Virginia Continental Line and the militia, although the former certainly had priority in the allocation of arms. Impressment of arms from private citizens was a primary source of supply, and was an extremely unpopular device. Moreover, the scrounging officers brought back a mixed bag of old, obsolete, obsolescent, worn out and damaged arms more frequently than they brought back current and useful arms. The guns were of many calibers and fired a variety of projectiles. On 2 October 1776 Captain Nicholas Cabell (1750-1803) delivered to Captain Samuel Higgenbotham the product of a week of impressment. These arms, which were to be consigned for militia use, included 22 rifles of 14 different calibers and 8 shotguns, a hunting weapon usually not considered suitable for military use.(236) Arms shortages continued to plague Virginia throughout the Revolution. So destitute was the militia of firearms that the Committee of Safety ordered officers to issue muskets when available, but if none were available, to issue "speers or cutlasses." Several companies were issued tomahawks.(237) On 14 June 1776 General Francis Johnson wrote from Long Island to General Anthony Wayne, "I shall not continue 6 months longer in the Service without Arms," warning him that, as things were, he would have to defend various fortifications "with our People armed with Spears, or be compelled to leave the Camp."
He also noted that "Howe and his Redcoats will pay us a Visit immediately . . . [and] we for our parts have nothing but damned Tomahawks."(238) Like other states, Virginia had a need for arms greater than it could fulfill through any source of supply. The state authorities were willing to accept whatever arms they could procure. On 13 September 1777 Edmund Pendleton wrote to William Woodford, "the length or form of Rifles or other guns I am inclined to think will make no great difference so long as the old sort of experienced hands use them."(239) To secure the arms from pilferage, the state ordered that "all arms delivered out of Publick Stores or purchased by Officers for use on the Continent, [are to be] branded without loss of time."(240) On 20 February 1781, John Bannister complained to Jefferson that Congress was remiss in supplying the state. "I cannot help observing how unjust it is in Congress not to assist us with arms when we have to contend singly with the greatest part of the British army."(241) In the late summer of 1776 Governor Henry sent Colonel William Christian with a substantial company of militia to the relief of the frontier. He made his way through the southern Ohio territory, down the Tennessee River, into the lands of the Cherokees and Creeks. However, the enemy proved to be elusive because "the men retreat faster than I could follow." He reported to Henry that, "I know, Sir, that I could kill and take Hundreds of them, and starve hundreds by destroying their Corn, but it would be mostly the women and children." Unlike General John Sullivan later on, Christian refused to make war on the able-bodied men by starving the very old, the women, and the children. "I shewed pity to the distressed and spared the supplicants, rather than that I should commit one act of Barbarity." Nonetheless, Christian captured 40 to 50 thousand bushels of corn and 10 to 15 thousand bushels of potatoes, along with assorted quantities of horses, fowl, cattle and hogs. The expedition also rescued a few white captives. Christian attempted to negotiate with the leaders, sachems and chiefs, but had little initial success. It is here that Christian first encountered a renegade chief he called Dragon Canoe, of whom more later. He warned the leaders with whom he did meet that he could easily command 2000 Virginia militia and that the Carolinas would supply another 400, all experienced Indian fighters. Eventually, some chiefs responded to his overtures of peace. Time also allowed for the gathering of intelligence, and he learned that one Cameron, a British agent, had successfully seduced Dragon Canoe and a few others, and that Cameron had promised to produce large quantities of war materials at Mobile, to be given to such tribes as would ally with the English against the colonists.(242) Christian warned Henry that he apprehended far greater danger from the English at Mobile than from those at Fort Detroit, and strongly recommended that an expedition be undertaken against the southern renegade Indians.(243) A second militia detachment under General Rutherford attacked several Indian towns and killed a number of warriors, captured several Frenchmen and took prisoner several escaped slaves. The militia also captured a quantity of gunpowder and lead and provisions valued at £2500. These supplies had been destined for Mobile, to be used to attract Cherokees to the British cause.
South Carolina militia under Colonel Williamson, after suffering considerable losses during an ambush, regrouped and routed the Cherokees, supposed to have been under British and tory leadership. Williamson joined Rutherford and "destroyed all the Towns, the Corn and everything that might be of service" to the Cherokees in several of their villages. Despite being opposed by a "considerable body" of hostiles, Rutherford lost only three men.(244) In 1776 Virginia had far fewer problems recruiting soldiers for the Continental Line than it had in supplying them with arms and accoutrements. Congress had ordered on 16 September 1776 that Virginia supply fifteen battalions of the Line. So successful was the state in filling its initial quota that John Wood, governor of Georgia, on 20 August 1776, asked for, and received, legislative permission to recruit in Virginia in order to fill his own state's quota. In a letter to Richard Henry Lee, Henry complained bitterly about this allowance. "I write to the General [Washington] that our enlistments go on badly. Indeed, they are almost stopped. The Georgia Service has hurt it much."(245) Discipline was harsh and, at times, even bizarre. In 1776 Captain John Pegg, a vestryman in his church and a militia captain, was fined, broken in rank and held up to public contempt for "drinking and making use of in his family the detestable East Indian tea." Pegg responded that the inquiry into his habits, practiced within the privacy of his own home, constituted "an impertinent interference in his family affairs" and that he would not be bound by such inquiries. The state responded by listing him as "an enemy to the cause" in the Virginia Gazette.(246) Washington on 4 October 1776 had observed that there was an enormous, material difference between voting to raise companies of soldiers and actually recruiting, equipping, arming and disciplining them. Responding to Washington's request for reasonable terms of service, on 16 November 1776 the legislature set enlistment terms at three years and made provision for recruiting, even drafting if necessary, men from the reservoir of trained militiamen.(247) In December 1776 the Virginia legislature authorized the formation of three additional battalions of regulars to serve under the command of the Congress, but in the pay of the state. It also authorized the creation of additional minute-men and volunteer companies in the exclusive service of the state. By December 1776, the legislature had to ask "justices, members of county committees, and the other good people of this Commonwealth" for assistance in recruiting men to serve at all levels, from regulars with three-year enlistment obligations to militia to minute-men to volunteer companies.(248) The question of the legality and legitimacy of the deployment of militia outside the state had never been resolved, dating from colonial days. Rather than resolving this problem, on 26 December 1776, Governor Henry issued a special call for volunteers "willing to engage in the defence of this State, or march to the assistance of any other, should the exigency of things demand it."(249) He described the volunteers to General Washington. "The volunteers will consist chiefly from the upper parts of the country, who would make the best of soldiers, could they continue so long in the service as to be regularly disciplined." He thought they would be "as respectable as such a corps can be expected, without training." They "will find their own arms, clothes, and . . .
be commanded by captains . . . of their own choosing." They would differ from militia in that "they will be subject to the Continental Articles of War."(250) By February 1777 it was apparent that Henry's call interfered with the enlistment of troops for long service in the Continental Line, so Henry suspended his call for volunteers until the enlistment of regulars was completed.(251) In March 1777, Governor Henry reported that "the recruiting business of late goes on so badly that there remains but little prospect of filling six new battalions from this State, voted by the Assembly." He was disappointed at the failure of the militia to serve, as hoped, as a reservoir of trained manpower for the army. "I believe you can receive no assistance by drafts from the militia."(252) Nonetheless, the legislature authorized a draft from the militia to complete enlistments in the Line.(253) In March Henry was forced to send militia to the Virginia frontier. He ordered militia from Botetourt and Montgomery counties to march to the relief of the settlers in Kentucky, primarily to escort the more distant settlers to convenient places of safety while the Indian menace loomed. Although he understood that there was a vast territory to scour for settlers, Henry was forced to inform the lieutenant of Montgomery County that his many commitments outweighed his resources. "The great variety of War in which this State is engaged," Henry wrote, "makes it impossible to spare such a number of men for this Expedition as I could wish."(254) Henry was much concerned for the defense of the western frontier. In March 1777 he asked Governor Thomas Johnson if Maryland was able to support Virginia with militia to defend Fort Pitt and to join in an expedition down the Ohio River to contain the hostile Cherokees.(255) More bad news concerning the Amerindians trickled in from the western frontier. Cornstalk had approached the Virginia garrison at Point Pleasant on the Ohio River to report that Colonel Henry Hamilton, the notorious "hair buyer," had achieved remarkable success among the northerly tribes. Cornstalk did not want to become involved in the "white man's dispute," but he might have to "move with the stream." The commandant detained him along with his two companions. Cornstalk's son, worried about his father's failure to return, then came to the fort. Meanwhile, two men hunting for fresh meat not far from the fort were attacked and one was killed by Cornstalk's men. A relative of the dead man, one Captain Hall, advanced on Cornstalk and murdered him, his son and at least two other Shawnee. A vital portion of Cornstalk's message was also lost: at the time of his murder, he was performing a valuable service for his friends, the Virginians, by drawing a map that showed the disposition and location of the various tribes between his own Shawnee villages and the Mississippi River.(256) The wanton murder of one of the most popular Amerindian leaders was the immediate cause of raids into the Greenbrier Valley. The militia and rangers contained the attacks, but the depredations continued throughout the war, tying up many militiamen who might have served the patriot cause better by deployment elsewhere. Garrison duty at the many forts maintained along the frontier during the entire war proved to be the most unpopular duty assigned to the militia. Many Virginians objected to the drafting of militia into the army.
The opposition was especially strong on the frontier, where the loss of the male head of household might prove disastrous to the farms. Samuel McDowell of Rockbridge County wrote to Governor Thomas Jefferson, complaining that the draft "must ruin a number of those whose lot is to march . . . their families and stocks must suffer, as they mostly have not any person behind them when they are gone from home to work their small farms." McDowell advised Jefferson that his friends and neighbors "would serve as militia but would not be drafted for 18 months as regulars." McDowell's neighbor George Moffet emphasized just how much they loathed the draft in his letter of 5 May 1781 to Jefferson. "Yet they would suffer death before they would be drafted 18 months from their families and made regular soldiers of."(257) Since Virginia was neither occupied nor greatly molested during the early years of the war, the state was able to function as a reservoir of troops for the Continental Line and as a base of supplies for the patriots. There is scant evidence of deployment of the militia in the north and only occasional use of it in the south during the first three years of the war. Thus, other than frontier duty, the militia was used almost exclusively as a source of semi-trained manpower for the army. In 1779 Clinton sent a fleet to harass the Virginia coast, ending the first phase of the revolution for the state. Urban militia were placed on coastal watch and a portion of them became minutemen, ready to act in defense of the seacoast. The basic militia law was re-enacted and slightly reconstituted by the General Assembly on 5 May 1777, as "An Act for Regulating and Discipling the Militia." All free white males between the ages of 16 and 50 were eligible for enlistment. Hired servants and apprentices, but not free blacks or slaves, were included. Excluded were the governor, members of the state council, members of Congress, judges, state officers, such as the attorney general and clerks, ministers, postmasters, jail keepers, hospital personnel, millers, iron and lead workers and persons engaged in firearms production for the state. Officers and enlisted men serving in the Continental Line and state navy were also exempted from registration for the militia. Companies of not less than 32 nor more than 68 men were formed, with battalions being made of not less than 500, nor more than 1000 men. Each company had a captain, two lieutenants and an ensign; battalions had additionally a colonel, a lieutenant-colonel and a major.(258) With the continued scarcity of arms, Virginia could ill afford to lose arms through pilferage. On 8 June 1777 the legislature ordered that "all arms delivered out of the Public Stores, or purchased by officers for use on this Continent, to be branded without loss of time." The standard brand employed was "VA" or "Va Regt --."(259) By late winter 1777 Governor Henry had deployed 300 militia at Fort Pitt, primarily to guard against tory and Amerindian activity.(260) To stem the Amerindian menace, Henry conceived, and the legislature approved, an action against Pluggy's Town, an Indian village beyond the Ohio River. Henry dispatched scouts and emissaries to the Delaware and Shawnee, to ascertain if they had objections to Virginia sending militia across their lands. Having determined that these neutral tribes would not be drawn into combat were Virginia militia to enter their lands, on 12 March 1777, Henry began to lay specific plans for this militia action.
On that date, Henry wrote to George Morgan, superintendent of Indian Affairs, and Colonel John Neville, commandant at Pittsburgh, laying out his scheme. Both men responded on 1 April, cautioning strongly against the action. They expressed the most grave concerns that a punitive action would be inconclusive and that it would most likely provoke a general, long, barbarous and expensive Indian war.(261) Despite the acute shortage of arms, there was often considerable friction between artificers and military contractors, on the one hand, and other military authorities, on the other. Despite the obvious and acute need for the arms, accoutrements, horseshoes and canteens to be made and repaired, local governmental authorities, facing increased quotas for replacements in the Continental Line, threatened to enlist the artificers in the militia. At Peytonsville, Spotsylvania County, William McCraw, commander of a small band of artificers, wrote the governor, reminding him that he had promised (he assumed on the authority, and with the consent, of the governor) that his men would be exempt from other duties while performing their jobs at the forges. "Unless this be stopped, I can not furnish the canteens so much wanted by the Southern Army; the armourers will not be able to repair the damaged guns, nor can I have horseshoes made, now so much needed." The General Assembly therefore passed legislation which specifically exempted from the draft and from militia or other military service any artificer assigned to military posts or privately employed by independent arms or military supply contractors.(262) As the war progressed, many Virginians expressed confidence in their state militia. Edmund Pendleton on 30 August 1777 wrote to Richard Henry Lee, "I think it no unimportant part of our late success that [the] Militia had a principal hand in it, for if they will stand six hours hard fighting with their officers and men falling by their sides, we can never be subdued, our resources in that way are infinite."(263) In August 1777, while Governor Henry was in Hanover preparing for his impending marriage, word was received that General Howe's army had appeared with the British Navy off the Virginia coast. Henry authorized General Thomas Nelson to muster and command 64 companies of militia for the defense of Williamsburg. Among those responding was a militia company of students at the College of William and Mary. Henry ordered Colonel Charles Harrison's regiment of artillery to remain at York-town on the pretext that "militia must in this case be chiefly depended on, and their skill in managing Cannon promises nothing effectual." He also ordered the militia to detain persons suspected of disloyalty on the pretext that they might aid the British.(264) As it was, the British fleet did not land until it reached the Head of Elk, and its mission on this occasion was to provide troops for the assault on Philadelphia, not for an attack on Virginia. To support Washington against this assault, Henry ordered one-third of the militia of the counties of Prince William, Loudoun, Fairfax, Culpeper, Fauquier, Berkeley, Shenandoah, and Frederick, to march toward Philadelphia.(265) Washington thanked Henry for dispatching militia, but noted again his disdain for the Virginia militia, offering a sharp contrast with the New York and New England militias.

How different the case in the northern department!
There the states of New York and New England, resolving to crush Burgoyne, continued pouring in their militia, till the surrender of that army, at which time not less than 14,000 militia . . . were actually in General Gates's camp, and those composed, for the most part, of the best yeomanry in the country, well armed, and, in many instances, supplied with provisions of their own carrying. Had the same spirit pervaded the people of this and the neighbouring States, we might, before this time, have had General Howe nearly in the situation of General Burgoyne. . . .(266)

In May 1778, the legislature passed a series of acts designed to draft or recruit 2000 men to assist General Washington. Those enlisted, whether as volunteers or drafts from the militia, were to serve until 1 January 1779, or less than two years. Additionally, minute-men were to be recruited for the defense of the eastern shore from British raiders and on the west from Amerindian attacks.(267) By mid-summer 1778, enlistments of many Virginia Continentals were expiring. Their numbers had been diminished by desertion, casualties in battle and death and incapacity from smallpox, dysentery and other diseases. Word of plagues of smallpox and other contagion diminished whatever enthusiasm yet remained for the patriot cause. While the legislature authorized the payment of bounties and another draft from militia rolls, Henry found it nearly impossible to recruit even half of the assigned quota. The state currency had become so depreciated that neither bounty nor pay was meaningful. Looking forward, Henry could see that the enlistments of the first nine regiments of the Virginia Line were due to expire early in 1778. He wrote to Congress, expressing his deep concern, but without being able to offer any solution.(268) In May 1778 Governor Henry received a distressing report regarding the Northampton County and Norfolk city militias. Captain John Wilson, the militia commander, wrote, "I beg to observe that the militia of late, fail much in appearing at musters, submitting to the trifling fine of five shillings, which, they argue, they can afford to pay by earning more at home."(269) Immediately after reading this, Henry conveyed a message to Benjamin Harrison, Speaker of the House of Delegates, concerning the military. In a positive vein, he reported success in the campaign against the Cherokees. Regarding the militia, he had a mixed report. "Although the militia of this commonwealth are in general well affected, and no doubt can be entertained of the general good disposition of the people," he wrote, "I am sorry to say that several instances of refractory and disobedient conduct have [occurred], which, for the sake of example, called loudly for punishment." But, probably with Wilson's letter in mind, he also reported that "offenses against the Militia law are become common."(270) Having established relations with the settlers in Kentucky, Virginia felt somewhat obligated to undertake their protection. Henry also had men engaged that year in other frontier areas of the West. The policy of appeasement and peace that Colonel Neville and George Morgan had recommended was evidently a failure.
After a series of Amerindian outrages, the Supreme Executive Council ordered Colonel John Todd to enlist 250 militiamen to provide some relief.(271) Congress also thought to act on behalf of the western settlements and in the spring of 1777 ordered General Hand to enroll a large body of militia to move against the Amerindians in Ohio from a base at Pittsburgh. Hand called into service the militias of the Virginia counties of Frederick, Yohogania, Ohio, Hampshire, Monongalia, Botetourt, Augusta and Shenandoah. Henry was still uncertain if he could deploy the militia beyond the state's boundaries, so he decided to call for volunteers. Colonel Skillern raised five volunteer companies in the counties of Greenbrier, Augusta and Botetourt and marched to Point Pleasant, where a fort had been created, to join Hand. Captain Arbuckle commanded Fort Randolph at Point Pleasant, and he had engaged several important Amerindian leaders in negotiations, among them Red Hawk and Cornstalk. The latter, desiring to honor the treaty he had made after Dunmore's War, had attempted to dissuade his tribesmen from entertaining the British representatives. Cornstalk was unsuccessful in his attempts to maintain neutrality, so he came to Fort Randolph to inform the Americans of the British entreaties. Arbuckle detained all the Amerindians who came to the fort to act as hostages to prevent a large-scale Indian war. After a militiaman from Rockbridge County was killed, allegedly by one of Cornstalk's men, the militiamen of Captain Hall's company murdered the hostages, including Cornstalk, his son and Red Hawk. Hand arrived two days after the murder, having failed to recruit any militia volunteers in Pennsylvania.(272) Neither did Hand bring provisions, and, there being none at the fort, the volunteers abandoned their mission and returned home. The murder of one of the great Shawnee leaders precipitated an Indian war as the whole Shawnee confederation sought to avenge Cornstalk's death. Concerned citizens of Greenbrier County sent an elaborate memorial to the state authorities, demanding help.(273) On 27 May 1778, Henry ordered a post to be set up at Kelly's in Greenbrier County, manned by militia from Botetourt County, to guarantee the communication and supply route between Williamsburg and Fort Randolph. He also dispatched militia from several counties to support Fort Randolph. And he offered a substantial reward for the capture and punishment of those responsible for the murder of Cornstalk and the others. Finally, he appointed Andrew Lewis and John Walker to serve as special ambassadors to the Delaware and Shawnee nations at a conference scheduled at Fort Pitt on 23 July 1778. The murderers, Captains Hall and Galbraith and others, were brought to trial in Rockbridge County, but were immediately acquitted, as no man was willing to execute a white man for an Indian's murder.(274) Disgruntled, more than 200 of the Shawnees laid siege to Fort Randolph in May 1778. Failing to capture the fort, the marauding band wreaked havoc throughout Greenbrier County until repulsed by Colonel Samuel Lewis and Captain John Stuart and the militias of several counties. Congress replaced Hand with an experienced Indian fighter from Georgia, General McIntosh, who was given command of a joint force of militia, volunteers and the Thirteenth Virginia Continental Line. McIntosh was to carry the war to Detroit, where Henry Hamilton, known as the "hair buyer" for his purchases of white scalps, was headquartered.
Congress ordered Governor Henry to provide 2000 men, whether militia or volunteers. Henry estimated the following items would be among the bare minimum supplies needed to carry out the orders of Congress: 30,000 pounds of lead; 1000 horse belts; 400 felling axes and 3000 hatchets; 100 kettles, tents, haversacks and suits of clothing; 500 horses; and a large supply of arms and gunpowder and money. Additionally, there would be the problems of "recruiting, arming, accoutring & discipling" such a large body of militia. In a long letter to Congress, dated 8 July 1778, Henry begged off. There was no way, he said, that Virginia could afford or supply all that Congress demanded. Congress, he wrote, seemed to have no idea of "the exhausted state of this Country," but seemed to think the state's resources were unlimited. He supported the scheme, and the elimination of Hamilton's scalp purchasing was certainly a worthy objective. Congress reluctantly accepted Henry's explanation and simply ordered McIntosh to do what he could with what he had and to operate from Pittsburgh.(275) The expedition proved to be fruitless. In 1778 McIntosh set up a garrison of 150 militia at Fort Laurens on the Tuscarawas River in the Ohio territory, but abandoned it the next year. Where Hand and McIntosh had failed, George Rogers Clark was destined to succeed. He had journeyed to Williamsburg in the autumn of 1777, carrying a petition from Kentucky which asked for relief from the Amerindian raids. Having failed to find other ways to relieve the pressures on the frontier, the legislature offered some token support and £1200, not a great sum in the depreciated Virginia currency.(276) It commissioned Clark a lieutenant-colonel and charged him with capturing Fort Detroit. It ordered "that the Governor be empowered . . . to order such part of the militia of this Commonwealth as may be most convenient . . . to act with any troops on an expedition that may be taken against any of our western enemies."(277) Clark had convinced Council that Kaskaskia "was at the present held by a very weak garrison" and could be taken without great effort or cost. Moreover, "there are many pieces of cannon & military stores to a considerable amount." This proved to be an irresistible bait, and Council ordered him "to procure the artillery and stores" to supply the army. Council suggested Clark raise "seven companies of 50 men each" who were "to receive the pay and allowance of the militia & to act under the laws and regulations of this State, now in force, as militia." Despite Hamilton's policy of buying scalps and the brutality of the attacks on the frontier, Council ordered him "to show humanity to such British Subjects, and other persons, as fall into your hands."(278) With the consent of Governor Henry, Clark offered 300 acres of land to any who would volunteer to serve on his mission. Henry had long harbored the dream of extending Virginia's boundaries west to the Mississippi River, and Clark's mission on behalf of the state, if successful, would go a long way toward establishing that boundary.(279) Moreover, by claiming the Mississippi as the boundary, Henry was on safe legal grounds in deploying state militia on that frontier. Captain Leonard Helm of Fauquier County, and Captain Joseph Bowman of Frederick County, each offered to raise a militia company to support Clark. They planned to meet at Redstone Old Fort [Brownsville, Pennsylvania].
Clark encountered great difficulties because many potential recruits in Western Pennsylvania regarded his expedition as a way to promote Virginia over Pennsylvania interests. Few were willing to support the defense of Kentucky. The county commissioners of Fauquier and Frederick questioned the legality of deploying their militiamen in the western territories. Eventually, in May 1778 Clark raised a small force and, with 175 volunteers and militia, moved down the Ohio almost to its juncture with the Mississippi River and then moved northwestward. On 4 July 1778 he captured Kaskaskia and, with the support of the French inhabitants, brought the surrounding area under control. On 17 December Hamilton, with a force of about 500, of which about one-half were Amerindians, took Vincennes, but on 6 February 1779 Clark set out to recapture it. By 25 February, after a super-human effort to cross flooded plains, he forced Hamilton's surrender and took the "hair buyer" prisoner. Patrick Henry, under whose orders Clark's militia had fought, reported with unconcealed delight to Richard Henry Lee.

Governor Hamilton of Detroit is a prisoner with the judge of that country, several captains, lieutenants, and all the British who accompanied Hamilton in his conquest of the Wabash. Our brave Colonel Clark, sent out from our militia, with 100 Virginians, besieged the Governor in a strong fort with several hundreds, and with small arms alone fairly took the whole corps prisoners and sent them into our interior country. This is a most gallant action and I trust will secure our frontiers in great measure. The goods taken by Clark are said to be of immense amount, and I hope will influence the Indians to espouse our interests. . . .(280)

By resolution of Congress of 25 July 1778, the planned combined national and state attack on Fort Detroit and other western British outposts was postponed. Instead, Congress adopted Governor Patrick Henry's suggested plan of attack on hostile Amerindian towns in the Ohio territory, especially several Shawnee towns along the Ohio River. Henry, after meeting with the Council of Safety, decided to deploy the frontier county militias of Washington, Montgomery, Botetourt, Augusta, Rockbridge, Rockingham, Greenbrier, Shenandoah, Frederick, Berkeley, Hampshire, Monongalia, Yohogania and Ohio. These counties, Henry argued, should supply all the men that General McIntosh could use, and all should be experienced Indian fighters.(281) Before McIntosh could move, Henry learned that, while the eastern seaboard action had come to a standstill, a mixed British force of regular troops, Amerindian allies and tories was moving against the small forts in Kentucky. British Colonel Henry Hamilton had decided that this move might preclude an American move against Fort Detroit. Governor Henry ordered the colonel of the Washington County militia, Arthur Campbell, to choose 150 select frontier rangers from the counties noted above to move to the relief of the settlers in Kentucky.(282) Perhaps because the George Rogers Clark expedition pressed the British, the expected attack on Kentucky never materialized. Meanwhile, the British forces in the south were pressing hard in South Carolina. Governor Henry expressed great admiration for "the brilliant John Rutledge [who] was Governor of the State.
Clothed with dictatorial powers, he called out the reserve militia and threw himself into [the defense of] the City."(283) Henry decided to respond to Rutledge's call for aid by dispatching 1000 Virginia militia to the relief of the Carolinas. His primary problems were with the commissary, which could not round up enough "tents, kettles, blankets & Waggons" to supply this force.(284) The British captured Savannah in December 1778, crushing the 1000-man militia force under General Robert Howe (1732-1786). By the spring of 1779, the British had crushed General Benjamin Lincoln's force at Stono Ferry, and the southern campaign seemed to be going well for the enemy. Patrick Henry thought Lincoln had done well, although he lost 300 men while inflicting only 130 casualties on the enemy. The British successes in the south created another danger. The enemy immediately sent emissaries to the Cherokee and other potentially war-like tribes, promising them weapons and other aid should they join the British cause. Henry moved to break up the alliance before it became a real, effective coalition that could over-run the frontier. He had learned that the most war-like of all the southern tribes had gathered in an area from the mouth of the Chickamauga River south some fifty miles down the Tennessee River. Led by a chief named Dragging Canoe (or Dragon Canoe), whom we have met before, these were the outcasts from many tribes and villages. They had welcomed into their towns various tories, bandits, escaped criminals, murderers, cut-throats, fugitives from justice and escaped slaves, bringing their total number of armed men to perhaps 1000. Dragging Canoe and his band of loosely associated allies had refused overtures of peace from Virginia sent via Colonel Christian. On 13 March 1779, Henry informed Washington that he had drawn on the select militias of the same counties he had called into a state of readiness, to be commanded by Colonel Evan Shelby. Shelby had served as quartermaster for the Virginia militia, so he was able to command all the supplies and arms that had eluded the militia force Henry had earlier planned to send to South Carolina. Henry reported to Washington that, "About 500 militia are ordered down the Tennessee River to chastise the settlements of the renegade Cherokees that infest our southwestern frontier and prevent our navigation on that river, from which we hope for great advantages." Soon after, North Carolina added 500 of its militia to Shelby's force. As it was, many of the North Carolina militia turned out to be displaced Virginians or men recruited into the North Carolina militia from Virginia.(285) Shelby's mission was an overwhelming success. His militia force, which actually consisted of 600 men, assembled near Rogersville, Tennessee, at the mouth of Big Creek. They enlisted the help of Colonel Montgomery's 150 men, who had been on their way to aid George Rogers Clark. On 10 April 1779, the force began its journey by canoe, reaching Dragging Canoe's town by 13 April. Having captured an Indian, they forced him to guide them to the enemy's campsite. Shelby took the camp by surprise, killed 40 warriors, burned their supplies and captured British war materials valued at £20,000 sterling. The British dream of uniting the southern tribes with Colonel Hamilton's forces came to an abrupt end. In a single stroke, the power of the Chickamauga tribes was broken, and the Cherokees, seeing the power of Shelby's militia, soon withdrew from further negotiations with the English.
Henry's two major deployments of militia on the far frontiers, under Shelby in Tennessee and under George Rogers Clark in the west, had saved the frontier and precluded the necessity of Washington's having to divert regular troops from the eastern seaboard to fight on the frontier. Meanwhile, Virginia militia on the northwestern frontier came under pressure from British, tory and Amerindian troops. Ebenezer Zane (1747-1812) recruited his neighbors and formed a militia. His volunteers resisted attacks on Fort Henry at Wheeling, [West] Virginia, in 1777 and 1782.(286) No colony ever had sufficient regular forces to guard its seacoast from invasion. One primary responsibility of the militia remained standing coastal watch. In May 1779, as Shelby's army was mopping up in Tennessee, British troops landed at Portsmouth, disembarking from a reported 35 ships, including Raisonable, Rainbow and Otter. This expedition, which had sailed from New York on 5 May 1779, consisted of 2500 men under Major-general Edward Matthew, conveyed on ships commanded by Commodore Sir George Collier, acting on the home government's explicit orders to Sir Henry Clinton. This force was to destroy American ships, especially privateers, disrupt the economy and prevent supplies reaching the southern states during the campaign being waged from Savannah, Georgia. The hundred regulars stationed in Portsmouth offered little resistance. These troops, like others assigned to similar coastal watch duty, might have been better deployed in the field. Having occupied Portsmouth so easily, the British army followed up quickly, marching on Suffolk. There they captured 1200 barrels of pork and looted and burned the town. They also destroyed ordnance and gunpowder, tobacco and various naval materials of war. Governor Henry called out the militia, which assembled too late to save Suffolk, but with 2000 to 3000 militiamen under arms marching on Suffolk, the British withdrew. The British, before withdrawing completely, also burned and looted Portsmouth and Norfolk.(287) On the east coast of Virginia, the French came into contact with the Virginia militia for the first time in 1778. About this same time Virginia sent militia to assist South Carolina in its struggle against the British invasion. Whether prejudiced by Washington's views or on their own account, the French held a dim view of the Virginia militia. Of their value in the New Jersey campaign, Jean Baptiste Antoine de Verger, attached to the staff of General Rochambeau, thought them cowardly in battle unless they had a clear advantage in numbers and position. They preferred having a clear avenue of retreat even when they had the upper hand. A competent commander could inspire them to perform brave deeds, but only for a short while. As de Verger wrote, "the persuasive eloquence of their commander aroused in them an enthusiastic ardor of which immediate advantage must be taken or lost."(288) Jefferson did not share this skepticism of the militia. He was quite proud of his state's militia, and especially its prowess with the rifle. He wrote to the Marquis de Lafayette,(289) "the militia of Washington, Montgomery, Botetourt, Rockbridge, Augusta and Rockingham are our best Rifle counties."(290) Nonetheless, Jefferson was to hear more criticism of the citizen-soldiers in the months to come.
Baron von Steuben wrote Jefferson on 2 January 1780, that "in case of the calling out a Body of Militia it will be highly necessary to adopt some measures to prevent numerous abuses and terrible destruction of the Country."(291) In 1780 the militia was mustered in large numbers both to assist its sister colonies to the south to repel Cornwallis' invasion and to contain the Amerindian incursions along the frontier. The Virginia militia's contribution to the Whig victory at King's Mountain on the border of North Carolina and South Carolina was significant. On 18 August 1780 the notorious tory Banastre Tarleton had defeated an American force at Fishing Creek, South Carolina, opening the way for the invasion of North Carolina. Sorely in need of a victory, Colonels Isaac Shelby (1750-1826) and William Campbell (1745-1781) recruited a force of backwoodsmen, mostly expert riflemen from the Carolinas, Kentucky and Virginia, and on 7 October trapped and decisively defeated Major Patrick Ferguson's force atop King's Mountain. Ferguson himself was killed and nearly his entire command was killed or captured. Ferguson had served as Cornwallis' screening force on his left flank, and this loss was a serious one, forcing the British commander to retreat and establish winter camp at Winnsborough. General Nathaniel Greene was more optimistic than General Washington about the effectiveness and use of the militia. Perhaps this was because he had little choice in the matter, since virtually no trained troops were available to him. Virginians had been sent to serve in the Continental Line both north and south. General Mathew's Virginia regiment had been mauled at Germantown, Pennsylvania, and most survivors were taken captive. General Buford's Virginians had been massacred by Tarleton's tories. More Virginia soldiers were held captive as a result of General Benjamin Lincoln's surrender at Charleston, South Carolina. Still, Greene held out considerable hope of success with the men he had at his disposal. Writing to Jefferson on 20 November 1780 from Richmond, soon after his appointment as commander of the Southern Army, Greene complimented the militia's devotion to duty, provided only that they be used properly.

It Affords me great Satisfaction to see the Enterprize and Spirit with which the Militia have turn'd out lately in all Quarters to Oppose the Enemy; and this Great Bulwark of Civil Liberty promises Security and Independence to this Country, if they are not depended upon as a principal but employed as an Auxiliary but if you depend upon them as a principal the very nature of the War must become so ruinous to the Country that tho numbers for a time may give security yet the difficulty of keeping this order of Men long in the field and the Accumulated expences attending it must soon put it out of your Power to make further Opposition and the Enemy will have only to delay their Operations for a few months to give Success to their measures. It must be the extreme of folly to hazard our liberties upon such a precarious tenure when we have it so much in our power to fix them upon a more solid basis.(292)

Writing from Hillsborough, North Carolina, on 30 August 1780, Edward Stevens complained to Governor Thomas Jefferson of the behavior of the Virginia militia who had come to his aid. First, they were poorly armed because political authorities had not permitted them to "Carry a single Muskett out of the State," so they had to be rearmed from Philadelphia.
That may not have been the fault of the men, but they had deserted in great numbers in the face of the enemy in the action against Lord Cornwallis' army near Camden on 16 August. Stevens thought that a large measure of the blame for the rout of General Gates' army was due to the cowardice of the militia.(293) Shortages of arms and other materials of war continued to plague the Virginia militia. On 21 October 1780 Thomas Nelson, writing to Governor Jefferson from Hall's Mills, lamented that "the Enemy will undoubtedly secure all the passes, there be no possibility of preventing it with the Militia . . . who are not armed at all."(294) On 22 October 1780 Jefferson was forced to inform General Horatio Gates that he had mustered the militia south of the James River and that the volunteers from these units were "in readiness" and would join him "as soon as Arms can be procured." Likewise, volunteers from other counties would follow within the next eight months if they could find arms wherewith to equip them.(295) As autumn approached, the governor received intelligence that tories from the Carolinas under Major Ferguson were planning to raid the Greenbrier Valley and wreak havoc in the southwestern parts of the state. The lead mines in Wythe County supplied a significant part of the patriots' needs for bullets and thus provided an attractive target for marauders. There were many tories in southwestern Virginia who might become politically and militarily active given some encouragement, but the death of Ferguson at King's Mountain ended the preparations. Meanwhile, Washington had dispatched General Muhlenberg from Pennsylvania to assist in the defense of Portsmouth against a major British landing party. With the help of local militia, the Continentals defeated British General Leslie and liberated the town. Benedict Arnold, now a British officer, appeared with a superior force of regulars and drove Muhlenberg's militia from Richmond. As the militias from additional counties swelled Muhlenberg's army, Arnold fell back to Portsmouth, burning and looting all the way. Muhlenberg's militia stood in his path, and Washington dispatched Lafayette with 1200 of the Continental Line to capture the traitor and defeat his army. The British landed Colonel Phillips and his regiment at Portsmouth. Phillips seized Petersburg, but died almost immediately of some fever, and his men joined Arnold's command. Steuben's and Lafayette's timely arrival prevented a second capture of Richmond, and Arnold beat a quick retreat to Portsmouth and the British fleet. George Washington, at his headquarters near Passaic, on 18 October 1780, prepared a Circular sent to Jefferson and the other state governors, the Continental Congress and others.

In obedience to the Orders of Congress, I have the honor to transmit Your Excellency the present state of the Troops of your line, by which you will perceive how few men you will have left after the first of January next. When I inform you also that the Troops of the other Lines will be in general as much reduced as Yours, you will be able to judge how exceedingly weak the Army will be at that period; and how essential it is the States should make vigorous exertions to replace the discharged men as early as possible. Congress's new plan for a military establishment will soon be sent to the states with requisitions for their respective quotas.
New levies should be for the war, as I am religiously persuaded that the duration of the war, and the greatest part of the Misfortunes, and perplexities we have hitherto experienced, are chiefly to be attributed to temporary inlistments. . . . A moderate, compact force, on a permanent establishment capable of acquiring the discipline essential to military operations, would have been able to make head against the Enemy, without comparison better than the throngs of Militia, which have been at certain periods not in the field, but on their way to, and from the field: for from that want of perseverance which characterises all Militia, and of that coercion which cannot be exercised upon them, it has always been found impracticable to detain the greatest part of them in service even for the term, for which they have been called out; and this has been commonly so short, that we have had a great proportion of the time, two sets of men to feed and pay, one coming to the Army, and the other going from it.

[Washington here cited instances of the disasters and near-disasters caused by the constant fluctuations in the number of troops in the field.]

Besides, It is impossible the people can endure the excessive burthen of bounties for annual Drafts and Substitutes, increasing at every new experiment: whatever it might cost them once for all to procure men for the War, would be a cheap bargain. Not without reason, the enemy themselves look forward to our eventually sinking under a system, which increases our expence beyond calculation, enfeebles all our measures, . . . and wearies and disgusts the people. This had doubtless had great influence in preventing their coming to terms. Through infatuation with an error which the experience of all mankind has exploded, and which our own experience has dearly taught us to reject . . . America has been almost amused out of her Liberties. Those who favor militia forces are those whose credulity swallows every vague story, in support of a vague hypothesis. I solemnly declare I never was witness to a single instance, that can countenance an opinion of Militia or raw Troops being fit for the real business of fighting, I have found them useful as light Parties to skirmish in the woods, but incapable of making or sustaining a serious attack . . . . The late battle of Camden is a melancholy comment upon this doctrine. The Militia fled at the first fire, and left the Continental Troops surrounded on every side, and over-powered by numbers to combat for safety instead of victory. The Enemy themselves have witnessed to their Valour. Let the states, then, in providing new levies abandon temporary expedients, and substitute something durable, systematic, and substantial. . . . The present crisis of our affairs appears to me so serious as to call upon me as a good Citizen, to offer my sentiments freely for the safety of the Republic. I hope the motive will excuse the liberty I have taken.

Washington added a postscript to Jefferson because the Virginia militia was, in his opinion as in Stevens', in large part responsible for the disaster at the Battle of Camden. "The foregoing is circular to the several States. The circumstances of Your Line put it out of my power to transmit a Return."(296)

In December 1780 Edmund Pendleton reported to James Madison that he was having great problems raising some of the militia units for duty.
The Caroline County militia, in particular, became war-weary very quickly after they had been mustered to resist a British invasion at Portsmouth in October. Pendleton told Madison that many "will rather die than stir again." The militia had been placed under the command of Major Charles McGill, aide-de-camp to General Horatio Gates and a brutal disciplinarian. The men had become "very sickly and many died below, on their way back" because McGill had marched them through avoidable water hazards, had not allowed them to dry out their clothes afterward, had failed to feed and rest them properly and had committed all sorts of other atrocities. Many had died of "laxes and Pleurisies."(297)

Sixteen hundred Virginia militia did march to General Greene's assistance. Daniel Morgan led these men to victory at Cowpens on 17 January 1781. There he was assisted by Colonel William Washington (1752-1810) and his mounted militia in one of the rare engagements involving these forces. Morgan's especially skillful disposition of his one thousand militiamen at that battle carried the day against the hated tory, Colonel Banastre Tarleton, inflicting 329 casualties on the enemy and capturing about 600 of his force. All available militiamen were marched to augment American forces at the Battle of Guilford Court House on 15 March. While this battle left Cornwallis in command of the field, his losses in men and materiel were so great as to seriously impede his future actions. Roving bands of militia under Francis Marion (1732-1795), Thomas Sumter (1734-1832) and Henry Lee (1756-1818) proved to be effective in delaying and diverting Cornwallis' planned march. Virginia militia assisted in these actions and in capturing a number of small, rural British outposts. So effective were these forces that Cornwallis did not arrive in Virginia until mid-June, by which time the small forces of Steuben and Lafayette had been reinforced by Anthony Wayne.

The British authorities had become convinced that the lower southern colonies could not be pacified as long as Virginia remained a training ground for patriot warriors. So Charles Cornwallis led his 1500 men into Virginia, starting out from Wilmington, North Carolina, on 25 April 1781. The North Carolina and Virginia frontier militias remained important factors by harassing the British supply and communication lines. By the time Lord Cornwallis reached Petersburg, Virginia, he had added 4000 men to his depleted command of 2000 men capable of performing their duty. General William Phillips and turncoat Benedict Arnold added their troops, bringing his total strength to about 7500 men. The British troops not only outnumbered the Continental Line under von Steuben and Lafayette, but were better trained, disciplined and equipped than their provincial brethren.

The British army pursued Lafayette's inferior force to the Rapidan River, which the Americans crossed at Ely's Ford. Cornwallis sent a raiding party under Colonel Simcoe to harass the Whigs, and it succeeded in destroying American gunpowder and other supplies at the mouth of the Rivanna River. Another raiding party under Tarleton proved to be such a formidable force that on 4 June it almost captured the state legislature and Governor Thomas Jefferson. It was repulsed by local militia, who turned out to control the mountain passes as it moved south toward Staunton.
Lafayette recrossed the Rapidan River at Raccoon Ford and secured a strong position behind Meechums River, where he was soon joined by General Wayne's army and other forces from the north. Cornwallis, under orders from Clinton, then turned toward the sea, leading to his eventual entrapment, defeat and, on 19 October, the capitulation of his army. The militia played little, if any, role in the final reduction of Cornwallis' army. With Cornwallis' surrender, British plans for reestablishing colonial rule over America ended.(298)

By war's end, Virginia had furnished more troops and militia to the patriot cause than any other colony, save Massachusetts. This was not surprising to General Washington, who, in a letter to Governor Henry written early in the Revolution, had commended the martial spirit of the men of his home state. "I am satisfied that the military spirit runs so high in your colony, and the number of applicants will be so considerable, that a very proper selection may be made."(299) In 1776, in response to the call from Congress, Virginia furnished 6181 men; in 1777, Congress assigned a quota of 10,200, of which 5744 enrolled in the continental line and the state retained 5269 militia. In 1778 Congress assigned a quota of 7830, which Virginia filled as follows: 5230 continentals; 600 guards for prisoners at Saratoga; and 2000 state militia. In 1779 the state had a quota of 5742, of which 3973 were continentals; 600 served as guards for enemy prisoners; and 4000 served in the militia.(300)

In order to secure the blessings of liberty which only a well-regulated militia could provide, the Virginia Constitution declared,

That a well-regulated militia, composed of the body of the people, trained to arms, is the proper, natural and safe defense of a Free State; that standing armies, in time of peace, should be avoided, as dangerous to liberty; and that in all cases the military should be under strict subordination to, and governed by, the civil power.(301)

Richard Henry Lee lauded the state for its passage of a new militia law in 1785. His comments are noteworthy for their statement of the proper place of the militia in a state.

I am told our Assembly have passed a new Militia Law, of a mor [torn] nature than former - I have not seen it, but am of Opinion that [torn] the meetings for exercise are made more frequent, it will pro [duce] mischief rather than good, as I never discovered other fruits from those meetings, than calling the Industrious from their Labour to their great disgust and the Injury of the community, and affording the idle an opportunity of dissipation. I rather think that in time of peace, to keep them enrolled and oblige them to meet once a year to shew their Arms and Ammunition - to provide Magazines of those, and in case of a War to throw the Militia into an Arrangement like our minute Plan, for defence until a regular Army can be raised, is the most Eligible System, leaving the people at liberty to pursue their labour in peace, and acquire wealth, of great service in War.(302)

The Virginia convention, called to consider the ratification of the proposed U. S. Constitution, considered the role of the militia in the new republic. Rather naturally, the debate quickly focused on the role the militia had performed, and how it had, or had not, fulfilled its obligations in the War for Independence. On 9 June 1788 Henry Lee rose to offer his opinion on the subject in response to comments made by Edmund Randolph.
Here, sir, I conceive that implication might operate against himself. He tells us that he is a staunch republican, and adores liberty. I believe him, and when I do I wonder that he should say that a kingly government is superior to that system which we admire. He tells you that it cherishes a standing army, and that militia alone ought to be depended upon for the defence of every free country. There is not a gentleman in this house -- there is no man without these walls -- not even the gentleman himself, who admires the militia more than I do. Without vanity I may say that I have had different experience of their service from that of the honorable gentleman. It was my fortune to be a soldier of my country. In the discharge of my duty I knew the worth of militia. I have seen them perform feats that would do honor to the first veterans, and submitting to what would daunt German soldiers. I saw what the honorable gentleman did not see: our men fighting with the troops of that king which he so much admires. I have seen proofs of the wisdom of that paper on your table. I have seen incontrovertible evidence that militia cannot always be relied on. I could enumerate many instances, but one will suffice. Let the gentleman recollect the action of Guilford. The American troops behaved there with gallant intrepidity. What did the militia do? The greatest numbers of them fled. The abandonment of the regulars occasioned the loss of the field. Had the line been supported that day, Cornwallis, instead of surrendering at York, would have laid down his arms at Guilford.(303)

In replying to the argument of Patrick Henry, that the states would be left without arms, Lee said he could not understand the implication that, because Congress may arm the militia, the States could not do it. The States are, by no part of the plan before you, precluded from arming and disciplining the militia should Congress neglect it. He rebuked Henry for his seemingly exclusive attachment to Virginia, and uttered the following sentiment:

In the course of Saturday, and in previous harangues, from the terms in which some of the Northern States were spoken of, one would have thought that the love of an American was in some degree criminal, as being incompatible with a proper degree of affection for a Virginian. The people of America, sir, are one people. I love the people of the North, not because they have adopted the Constitution, but because I fought with them as my countrymen, and because I consider them as such. Does it follow from hence that I have forgotten my attachment to my native State? In all local matters I shall be a Virginian. In those of a general nature I shall never forget that I am an American.(304)

The Reverend Mr. Clay, a priest in the established church, on 13 June led the objections to granting power to the national government to call out the state militias under the Militia Clause. James Madison responded, using much the same argument he had developed in his contributions to the Federalist Papers. Madison was followed by Mason, who denounced the clause as not sufficiently guarded in an able harangue, which called forth an elaborate reply from Madison. Clay was not satisfied with the explanations of Madison. "Our militia," he said, "might be dragged from their homes and marched to the Mississippi." He feared that the execution of the laws by other than the civil authority would lead ultimately to the establishment of a purely military system.
Madison rejoined, and was followed by Henry, who exhorted the opponents of the new scheme to make a firm stand. "We have parted," he said, "with the purse, and now we are required to part with the sword." Henry spoke for an hour, and was followed by Nicholas and Madison in long and impassioned, but reasoned, speeches. Henry replied, and was followed by Madison and Randolph. George Mason rejoined at length, and was followed by Lee, who threw with great oratorical skill several pointed remarks at Henry. Clay rose, evidently moved by great passion. He said that if, as Randolph had insinuated, he was not under the influence of common sense in making his objection to the clause in debate, his error might result from his deficiency in that respect; but that gentleman was as much deficient in common decency as he was in common sense. He proceeded to state the grounds of his objection and showed that in his estimation the remarks of the gentleman were far from satisfactory. Madison rejoined Clay, and passing to the arguments of Henry, spoke with great vigor, refuting them. Clay asked Madison to point out an instance in which opposition to the laws of the land did not come within the idea of an insurrection. Madison replied that a riot did not come within the legal definition of an insurrection. After a long and animated session the House adjourned.(305)

The debate then turned in other directions, and Virginia eventually ratified the new frame of government without demanding that changes be made to the militia system therein constructed.

The Virginia revolutionary militia had one more duty to perform. On 17 July 1794, President George Washington mustered the Virginia militia, calling it into federal service to suppress the Whiskey Rebels in western Virginia and Pennsylvania.

The President of the United States, having required a second detachment of Militia from this Commonwealth, amounting to 3000 infantry and 300 cavalry, inclusive of commissioned officers, to be prepared for immediate service, the commander in chief accordingly directs the same to be forthwith appointed.(306)

The Virginia militia was of great importance in the seventeenth century, so much so that one might well conclude that without it the Amerindians might easily have destroyed the colony at almost any early stage of its development. Because Virginia was either the most populous or second most populous colony, everything that happened there was of consequence to the other colonies. Since its nearest rival, Massachusetts, was severely circumscribed in territory for growth, Virginia would continue to be a colonial leader. It was a truly southern colony which nonetheless had some very cosmopolitan characteristics. It boasted no cities to compete with New York, Philadelphia or Boston, yet it established the first major gun manufactory in the nation and produced many of its outstanding military and political leaders and political philosophers.

As we have seen, the Virginia militia fell on hard times largely because it was heavily populated by poorer farmers and tradesmen spread out over vast areas. Once the frontier advanced to and beyond the Shenandoah Valley, the inhospitable terrain and sparse population made it very difficult for a militia to assemble or function. Still, it performed very well when pressed by the French and their Amerindian allies. It was the mainstay of Braddock's auxiliaries and may have saved what could be salvaged from his ill-fated expedition.
It also protected much of the frontier after the remnants of Braddock's army fled to the protection of the eastern seaboard. And it helped contain Pontiac's conspiracy. In the American War for Independence, it successfully kept the Amerindians and Tories at bay, and bore the main share of the defense of the colony. That the British army did not choose to operate much in the state may be credited in large part to the Virginia militia. And it performed well as a reservoir to fill the Continental Line.

In 1629 Sir Robert Heath was granted a patent to settle the area between 31 and 36 degrees north under the name of New Carolina. The following year Heath conveyed this land to Samuel Vassal and others who explored it and made an ineffectual attempt to settle the area. By 1632 Henry Lord Maltravers claimed the area as the Province of Carolana under an alleged grant from Heath and by the Harvey Patent issued by the governor of Virginia, John Harvey. The Harvey Patent established Maltravers' claim to the area south of the James River known as Norfolk County. No effective settlement was established. The Albemarle settlement, the first permanent caucasian habitation, was created about 1653 by Virginians moving through the Nansemond Valley and Dismal Swamp into the area of Albemarle Sound and the Chowan River. Shortly after, a group of London merchants and disaffected New Englanders established a settlement at Cape Fear, but it was abandoned about 1663.

In 1644, the Proprietors of the Carolinas ordered the Governor of North Carolina to "constitute Trayne bands and Companys with the Number of Soldiers [necessary] for the Safety, Strength and defence of the Counteys and Province." The Proprietors agreed to "fortifie and furnish . . . ordnance, powder, shott, armour, and all other weapons and Habillaments of war, both offensively and defensively."(307) Every newly arrived "freeman and freewoman . . . shall arrive in ye said countrie armed." The "master or Mistress of every able-bodied servant he or she hath brought or sent . . . each of them [is to be] armed with a good firelocke or Matchlocke."(308) To whom these orders applied is unclear, based upon the settlement timetable noted above. Perhaps the orders were only theoretical, promulgated in case an actual settlement should be established.

Sir John Colleton, a wealthy planter from Barbados, and William Berkeley, former governor of Virginia, in conjunction with a colonial promoter, Anthony Ashley Cooper,(309) approached Charles II about developing the Carolina colony. On 3 April 1665, the king granted a new charter to eight proprietors, including the promoters named above, the Earl of Clarendon, the Duke of Albemarle, William Berkeley's brother John Lord Berkeley, the Earl of Craven and Sir George Carteret. Maltravers' heirs, the Duke of Norfolk and Samuel Vassal all filed counter-claims on 10 June, and on 6 August the Cape Fear Company added its name to those contesting title. On 22 August 1665 the Privy Council confirmed Charles II's more recent grant and declared all previous grants null and void. In October 1664 William Berkeley, authorized to name the first governor of the Province of Albemarle (as North Carolina was then called), nominated William Drummond. Berkeley headed a board to draw up the Concessions and Agreements of 1665, granting basic rights, including liberty of conscience, and creating a representative assembly of freeholders.
The Charter of Carolina of 1663 required that the proprietors build whatever fortifications were necessary for the protection of the settlers and furnish them with "ordnance, powder, shot, armory and other weapons, ammunition [and] habilements of war, both offensive and defensive, as shall be thought fit and convenient for the safety and welfare of said province." The proprietors were to create a militia and appoint civil and military officers. Because the colony lay "in so remote a country and scituate among so many barbarous nations," to say nothing of pirates and Amerindians, the crown ordered the proprietors "to levy, muster and train all sorts of men . . . to make war and pursue the enemies."(310) The eight Lords Proprietors of Carolina in 1663 ordered that the governor "levy, muster and train all sorts of men" as a militia.(311) The second charter, issued just two years later, contained many of the same instructions.(312)

The proprietors ordered that a militia be formed and allowed it to march out from the colony to assist other colonies in times of crisis. They ordered that there be a constable's court, consisting of one proprietor and six others, which assumed command of the militia. This body was to provide arms, ammunition and supplies and to build and garrison forts. Twelve assistants were to be lieutenant-generals of the militia. In war the constable was to act as field commander.(313) In 1667 the governor ordered the officers of the counties to train the colonists in the art of war.(314)

The Fundamental Constitutions of North Carolina of 1669 required "all [male] inhabitants and freemen" between the ages of 17 and 60 to bear arms in service to the colony.(315) The governor was to "levy, muster and train up all sorts of men of what conditions soever." The language was much the same as that of the two earlier Carolina charters, including a provision that the militia might be deployed "without the limits of the said province." With a properly ordered militia the colony might "take and vanquish" its enemies, even "to put them to death by law of war, and to save them at their pleasure."(316)

Tradition has ascribed the Fundamental Constitutions to John Locke, the noted and influential political theorist, in collaboration with Anthony Ashley Cooper. It was an unusual blending of aristocratic conservatism with the liberalism of the Enlightenment. While permitting freedom of conscience and religion and creating a citizens' militia, it also established a hierarchical aristocratic rule with classes based on land ownership. For example, a lord of a manor must own no less than 3000 acres, whereas landgraves owned no less than 12,000 acres, and freeholders were recognized only if they owned a minimum of 50 acres. The eight proprietors constituted the Palatine Court, which had the power of disallowance of laws and appointment of governors. The provincial council seated all landed hereditary nobility and popularly elected members who must own at least 300 acres.

In 1675 the total population of North Carolina was less than 5000; it had increased to less than 6000 by 1700.(317) It was not only inconvenient and impractical to muster and train the militia in the first century, but even dangerous.(318) Thus, the militia could hardly have been a formidable force in the seventeenth century. By 1680 Moravian and other Calvinist religious dissenters had begun to move into the Carolinas.
They were as opposed to military service as their Quaker brethren in Pennsylvania, and in 1681 decided they had sufficient strength and support to oppose reenactment of the North Carolina militia law. As a period history of the colony said, they "chose members [of the legislature] to oppose whatsoever the Governor requested, insomuch as they would not settle the Militia Act" even though "their own security in a natural way depended upon it."(319) Another contemporary history confirmed that the dissenters were "now so strong among the common people that they chose members to oppose . . . whatsoever the Governor proposed [especially] the Militia Law."(320) Many non-dissenters opposed the militia law simply because they did not wish to serve in the militia, or because they were naturally opposed to the governor, the government and British rule. The result, of course, was to emasculate the militia and destroy most of the colony's ability to defend itself.

By 1693 the legislature had become bicameral. The larger baronies initially recognized were never established, and in fact no seignory of more than 12,000 acres was ever created. Eventually, the governor's council became the Grand Council. The proprietors revised the Fundamental Constitutions on 12 January 1682, but the revisions were voided the next year; the document was revised again in 1698 but never accepted by the assembly. Although religious toleration had included non-establishment, the Church of England became the established church in 1670. No changes were made in the fundamental militia system, probably because the proprietors had no interest in bearing (or raising by taxation) the cost of a standing army.

On 2 October 1701 Governor Nicholson of North Carolina reported to the Lords of Trade in London that the citizens under his charge "do not put themselves in a state of defence by having any regular Militia, arms or ammunition."(321) That neglect cost the colony dearly during the Tuscarora Indian War of 1711-12. The Tuscarora surprised the colonists in large part because they were able to create a war confederation with four neighboring tribes with whom they had never before cooperated, and they kept the negotiations completely secret from the whites. The unsuspecting colonists were unprepared, and after the first attack Governor Edward Hyde could find only 160 militiamen ready to muster. The best that he could do was to order these men to herd the surviving settlers into fortified positions and protect them while begging for help from South Carolina.(322)

Militia training paid occasional dividends. When installed as governor, Alexander Spotswood committed himself to a strong militia. In 1711, when the Tuscarora were menacing the frontier, Governor Spotswood decided to impress on them the power of the colonists using his best-trained militia from three counties. As he reported to Lord Dartmouth, "I brought into discipline the body of Militia . . . upwards of 1600 men. So great an appearance of armed Men in such good order very much surprised them" and aided in avoiding a great Indian war.(323) Had Spotswood not paraded the militia before the Tuscarora, the damage might have been more intense.

Indian trouble in the South was not ended. Irritated by dishonest and self-seeking traders and the establishment of a new colony of Swiss at New Bern, the Tuscaroras had risen against the northern Carolinians in September 1711, and killed about 130 inhabitants.
On 22 September an Amerindian force estimated at 1200 Tuscarora warriors, with some additional support from other tribes, massacred settlers along the Chowan and Roanoke Rivers. Only the timely arrival of militia forces from South Carolina saved the colony from annihilation. A wealthy Irish planter, Colonel John Barnwell, with thirty-three militia and five hundred Yamasees and Catawbas, struck back at the Tuscaroras and defeated them. In response, the Tuscarora in January 1712 fortified their village very effectively. The stockade had a trench, portholes, a rough abatis and four round bastions. Barnwell learned that a runaway slave had taught the Tuscarora the art of fortification.(324) They kept a sullen peace for two years and then were fighting again. Colonel James Moore, Jr., of South Carolina moved against them in March 1713, with about 100 militia and eight hundred Cherokees, Catawbas and Creeks. He killed 800 warriors, while suffering only 58 killed and 84 wounded. This action so overwhelmed the Tuscaroras that they began moving up to New York in waves, seeking protection among their ancient brethren. The Oneidas adopted and domiciled them, but the Iroquois never quite granted them equal status. As members of the Iroquois Confederation they were never to become significant actors in the drama on the New York-Canada frontier.

On 20 September 1712 Lord Carteret reported to the Lords Proprietors in London that "we obtained a law that every person between sixteen and sixty years of age able to carry armes" is to be enlisted in the militia.(325) With the assistance of the South Carolina militia, on 28 January 1712 the colonial forces defeated the Tuscarora and killed about 300 of the Amerindians along the banks of the Neuse River. So destitute of muskets was North Carolina that it was forced in 1712 to borrow some from the South Carolina militia.(326) The legislature reported to Lord Carteret in London that, as a result of that embarrassment, "we obtained a law that every person between 16 and 60 years of age able to carry arms" be armed at his own expense.(327)

Hyde demanded that the militia be upgraded and better organized, and the legislature considered his requests, debated the militia and discussed its importance. On 15 October 1712 Alexander Spotswood reported to the Lords of Trade that a militia was indispensable because it served three vitally important functions: first, it was the first line of defense against the Amerindians; second, acting as posse comitatus, it protected the colony against the ravages and outrages of pirates and smugglers; and third, it defended the colonists against slave insurrections.(328)(329)

By 1713 the war was over and the once-proud Tuscarora left their southern home forever, went north and joined the Iroquois Confederation, becoming the sixth nation in that political entity, thereafter known as the Six Nations. The lesson of the Tuscarora War was clear enough. A better armed and regulated militia was imperative to secure the peace. In 1715 the legislature enacted the militia law that remained in effect for the duration of the colonial period.
The governor was the principal officer of the militia, and he was authorized to appoint other officers to order, drill, discipline and inspect the militia. All freemen between 16 and 60 years of age were enlisted and enrolled. Any captain who failed to maintain his militia list was subject to a fine of £5. Each citizen-soldier had to supply at his own expense a "good gun, well fixed," a sword, powder, bullets and accoutrements.(330) The act provided exemptions for the physically disabled, Church of England clergy and a host of local and colonial public officials. However, all men had to provide arms and ammunition, and the exemption was voided in times of grave emergency.(331) Militiamen who were killed or wounded while doing militia duty were to be cared for at the public expense. A permanently disabled man, and the family of a dead militiaman, received a "Negro man-slave" as compensation to assist in various household and farming duties.(332)

Within fifteen years the militia law was forgotten. The colony was at peace and no one cared much about enforcing an unnecessary, burdensome and unpopular law.

[W]e learn from Experience that in a free Country it [the militia] is of little use. The people in the Plantations are so few in proportion to the lands they possess, that servants being scarce, and slaves so excessively dear, the men are generally under a necessity there to work hard themselves . . . so that they cannot spare a day's time without great loss to their interest. . . . [A] militia there would become . . . burthensome to the poor people . . . . Besides, it may be questioned how far it would consist with good Policy to accustom all the able Men in the Colonies to be well exercised in Arms.(333)

The situation had changed little over the next decade. Governor George Burrington (served 1731-1734) had little use or respect for the militia and did nothing to train, equip or muster it. However, when Gabriel Johnston was appointed governor in 1734, he reassessed the militia and in 1735 introduced legislation to "put the militia on better footing."(334) In 1740 a new and only slightly modified version of the 1715 act was passed.

A new piece of legislation, the Militia Act of 1746, placed servants as well as freemen on the militia rolls. Millers and ferrymen were added to the exemption list. There were to be two types of militia musters: the companies, to which militiamen reported locally, were to muster four times a year, while the regiment was to hold a general muster annually. The law allowed the militia to act in concert with the militias of Virginia and South Carolina, but with no other province, and it permitted the governor to call out the militia to march to the assistance of either colony, provided that those colonies bore the costs of such assistance. The new law also made provision for mounted troops. Many militiamen resisted the provision for service beyond the borders, arguing that it was, or at least should be, unlawful to deploy the militia outside the colony.(335)

In 1749 the militia law was again changed, since the 1746 act had been given only a three-year life. Company musters were reduced in number to two per year.
The death penalty was no longer permitted in court martial cases.(336) In 1754 the French and Indian War began and the colony returned to more frequent militia company musters.(337) On 17 July 1754 Governor Sharpe of Maryland approved a loan of some militia muskets to the province of North Carolina.(338) The legislature established greater control over the militia budget and demanded a greater role in the appointment of militia officers. Militia lists from the French and Indian War show that several blacks and mulattoes were members of militia companies. Since only race, and not status, was noted, it may be assumed that these men were freemen and that slaves were not armed.(339)

Meanwhile, the political situation had not improved. Unanswered raids by the Amerindians, an adjunct to the French and Indian War, proved that the colony's militia was unprepared. On 15 March 1756 Arthur Dobbs, then Governor of North Carolina, reported that the militia law had failed in the colony in his charge. He reported that "not half of the Militia are armed as no supply of Arms can be got although they would willingly purchase them . . . ."(340)

During the French and Indian War the North Carolina militia became a reservoir on which the British command drew for enlistments for the Canadian campaign. In November 1756 Loudoun reported to Cumberland that "I had great hopes of the North Carolina Regiments . . . [but] the Carolina Troops would not Submit to be turned over [to English command] without force; which I thought better avoided . . . [recently] I have got a good many of them enlisted in [the Royal] Americans."(341) In 1756 the British assigned a quota of 1000 men to be raised in North Carolina as part of a 30,000-man force the English hoped to raise in the colonies to join with the British troops in an invasion of Canada.(342)

In 1759 the war with the Cherokee Indian nation spilled over into the colony. The militia, sensing danger at home, refused to march outside the colony's borders, arguing that the North Carolina militia was suitable only for home defense. Governor Arthur Dobbs reported that 420 of 500 militiamen sent against the Cherokee had deserted. Many militiamen and officers interpreted the law as being permissive, but not compelling. They chose not to leave the province.(343) The Militia Act of 1759(344) increased fines for desertion and insubordination and allowed the Governor, with the consent of the legislature, to send the militia to the aid of South Carolina and Virginia to fight against the Cherokees. In 1760 the legislature passed an act which provided that it would pay bounties on Indian scalps in order to encourage enlistment and participation in the militia:
And for the greater encouragement of persons as shall enlist voluntarily to serve in the said companies, and other inhabitants of this province who shall undertake any expedition against the Cherokees and other Indians in alliance with the French, be it enacted by the authority aforesaid, that each of the said Indians who shall be taken a captive, during the present war, by any person as aforesaid, shall and is hereby declared to be a slave, and the absolute right and property of who shall be the captor of such Indian, and shall and may be possessed, pass, go and remain to such captor, his executors, administrators and assigns, as a chattel personal; and if any person or persons, inhabitant or inhabitants of this province, not in actual pay, shall kill an enemy Indian or Indians, he or they shall have and receive ten pounds for each and every Indian he or they shall so kill; and any person or persons who shall be in the actual pay of this province shall have and receive five pounds for every enemy Indian or Indians he or they shall so kill, to be paid out of the treasury, any law, usage or custom to the contrary notwithstanding. Provided, always, that any person claiming the said reward, before he be allowed or paid the same, shall produce to the Assembly the scalp of every Indian so killed, and make oath or otherwise prove that he was the person who killed, or was present at the killing the Indian whose scalp shall be so produced, and that he hath not before had or received any allowance from the public for the same; and as a further encouragement, shall also have and keep to his or their own use or uses all plunder taken out of the possession of any enemy Indian or Indians, or within twenty miles of any of the Cherokee towns, or any Indian town at war with any of his majesty's subjects.(345)

The experience of the province in the French and Indian War prompted yet another series of changes in the provincial militia law. This time most of the benefits went to the citizenry: the legislature sought to entice, rather than to force, compliance with the law. No militiaman could be arrested on his way to muster. Militiamen paid no tolls on bridges, highways or ferries while on their way to muster. The number of musters was reduced from five to four annually, and later to one annual muster. Officers in the various units had to come from the same county as the enlisted militiamen.(346) The legislature, seeking novel ways to assist the militia and increase its enthusiasm if not its efficiency, passed a law placing a bounty on Indian scalps and allowing for the enslavement of hostile Amerindians.(347)

By 1762 the exemption list had grown. Presbyterian and Anglican clergy were wholly exempted from any service, although they might choose to serve as chaplains. By 1774 the law covered all Protestant ministers. Overseers of slaves were indispensable to the maintenance of order on the plantations and indeed were fined if they did attend militia muster. Schoolmasters who had ten or more students were charged to remain in their classrooms except in dire emergencies. Pilots and road supervisors and overseers were also exempted. By the time of the Revolution probably half of the able-bodied freemen in North Carolina were exempted.(348)

On 3 November 1766 the provincial legislature passed a new militia act.
All freemen and servants between sixteen and sixty years of age were obligated to serve, with no exemptions noted, but no mention of any kind was made of slaves.(349) In 1768, with the French menace permanently removed, the policy of arming blacks was clarified, and slaves were specifically excluded. Overseers and/or owners of slaves were subject to fines if they allowed slaves to appear at militia musters.(350) North Carolina also passed legislation designed to prevent slaves from using guns even to hunt unless accompanied by a caucasian.(351)

North Carolina approached the American Revolution under this basic militia law. The only significant change was the creation of ranging units. These select units were authorized to "range and reconnoiter the frontiers of this Province as volunteers" at no cost to the public. The rangers provided an outlet for militiamen who lived too far distant from urban areas to be able to muster with standard militia units. Most rangers were experienced woodsmen and Indian fighters. They were delighted with their orders to kill any Amerindian they encountered, since most had experienced, or at least seen, some Indian atrocity committed against the settlers.(352) During the various Indian wars, ranging units frequently made a substantial profit from Indian scalps at rates as high as £30 per scalp. The rangers were authorized to take the scalps of any "enemy Indian," and it is obvious that a public official, in paying the bounty, could not determine which scalps were of hostiles and which were of friendly or allied Indians.(353)

At the same time the legislature moved to relieve some burdens from the poor. Initially a fine was imposed on any militiaman who failed to provide appropriate weapons and accoutrements. A company's officers could now certify that a man was too poor to provide his own equipment, and a gun would then be provided through the company's militia fines.(354)

In 1771 the militia was tested against insurrectionary forces. The Regulators resisted British authority. By 1768 the Regulators had formed a militia under the leadership of Herman Husbands (1724-1795). They protested the failure of the legislature to grant full representation to the Piedmont, charging the more eastern section with "extortion and oppression." The Johnston Bill ("Bloody Act"), passed on 15 January 1771, was specifically designed to repress the Regulators. On 16 May 1771 some 1200 militia under Governor William Tryon (1729-1788) defeated them at the Battle of Alamance near Hillsboro. Although there were about 2000 Regulators present, many had no arms. Husbands fled, James Few was executed on the spot on 17 May, 12 others were condemned to death and six men were actually executed. Tryon forced some 6500 inhabitants of the Piedmont to sign an oath of loyalty to the crown.(355)

Silas Deane, writing to James Hogg on 2 November 1775, observed, "Precarious must be the possession of the finest country in the world if the inhabitants have not the means and skill of defending it. A Militia regulation must, therefore, in all prudent policy, be one of the first" preparations made by the colonists in North Carolina.(356)

The North Carolina Constitution of 1776 provided "That the people have a right to bear arms for the defence of the State . . . ." It also denounced the practice of maintaining armies in time of peace and insisted that the military be kept in strict subordination to the civil authority.
The provisional government enacted a temporary militia law, which was followed by a permanent law enacted by the state legislature.(357) Until 1868 each North Carolina county was divided into one or more militia districts, with each unit commanded by a captain, who was usually a county official, such as a deputy sheriff or justice of the peace. The captains were required to enroll all able-bodied males between 18 and 60, with attendance at quarterly musters being mandatory. Free blacks were also required to attend militia musters, although they were rarely accorded the right to keep and bear arms.(358) The Committee of Safety ordered that the local authorities confiscate the arms belonging to the tories and issue these to militia or members of the army.(359) The militia officers who were willing to swear allegiance to the new nation were retained in rank.(360)

In April 1776 the North Carolina Provincial Congress set standards for muskets to be made for militia use. The Congress wished to purchase

good and sufficient Muskets and Bayonets of the following description, to wit: Each Firelock to be made of three-fourths of an inch bore, and of good substance at the breech, the barrel to be three feet, eight inches in length, a good lock, the bayonet to be eighteen inches in the blade, with a steel ramrod, the upper end of the upper loop to be trumpet mouthed; and for that purpose they collect from the different parts of their respective districts all Gunsmiths, and other mechanicks, who have been accustomed to make, or assist in making Muskets. . . .(361)

The Congress also resolved on 17 April that,

No Recruiting Officer shall be allowed to inlist into the service any Servant whatsoever; except Apprentices bound under the laws of this Colony; nor any such Apprentices, unless the Consent of his Master be first had in writing; neither any man unless he be five feet four inches high, healthy, strong made and well-limbed, not deaf or subject to fits, or ulcers on their legs.(362)

The legislature created an arms manufactory at Halifax known as the North Carolina Gun Works, under the superintendency of James Ransom. On 24 April 1776 the legislature ordered Ransom, Joseph John Williams and Christopher Dudley to bring all of the state's energies to bear in the manufacture of muskets in conformity with the direction of Congress and state law, that is, to be made with 44-inch barrels and 18-inch bayonets. They were to recruit "gunsmiths and other mechanicks who have been accustomed to make, or assist in making, muskets." An unknown, but presumably small, number of arms was produced at the manufactory before the legislature closed it in early 1778. North Carolina found, as did its sister colonies, that it was cheaper and more expeditious to contract with gunsmiths for the arms the state needed than to run its own manufactory. When the manufactory closed and its tools and machinery were ordered sold at public vendue, there were 36 muskets nearing completion. These were issued to the Halifax militia.(363)

Between 3 and 27 February 1776, in a campaign that ranged from Fayetteville to New Bern, about 1000 North Carolina militia engaged English and Tory forces variously estimated at 1500 to 3000 men.
On the 27th the patriots decisively defeated the enemy at Moore's Creek Bridge near Wilmington. The militia carried the field and captured military equipment sufficient to supply itself for months to come.(364) The victory also caused General Henry Clinton to abandon his planned incursion into the Carolinas with a combined force of his own regulars supplemented with local Tories.(366) The spoils of war were nearly as valuable to the arms-hungry patriots as the victory itself.

1500 Rifles, all of them excellent pieces, 350 guns and shotbags, 150 swords and dirks, two medicine chests immediately from England, one valued at £300 sterling, 13 sets of wagons with complete sets of horses, a box of Johannes and English guineas, amounting to £15,000 sterling, and [the arms and accoutrements of] 850 common soldiers, were among the trophies of the field.(367)

After the completion of the campaign the militia swelled to 6000 men. By year's end there were 9400 men enlisted in the North Carolina militia.(365)

On 19 March 1778 North Carolina created a new constitution which made the governor the commander of all military forces. The legislature appointed officers above the rank of captain. The military power was subordinated to the state.(368)

After Charleston, South Carolina, fell to British forces on 12 May 1780, Charles Cornwallis (1738-1805)(369) decided to move his force across the Carolinas, retaining the city, occupied by a largely tory militia force, as his base of supplies. The minutemen of North Carolina were soon to demonstrate the same prowess with their rifled arms that the British had observed with other colonial militias and with units of the Continental Line recruited from among backwoods militias.

Lord Cornwallis' greatest victory was the patriots' most humiliating defeat. It occurred on 16 August 1780 near Camden, South Carolina. Horatio Gates, who commanded at least 1400 regulars and 2752 militia, advanced against Cornwallis with 2239 veterans, including such tory units as Banastre Tarleton's Legion; the Volunteers of Ireland, consisting entirely of ethnic Irish deserters from the American army; and the Royal North Carolina Regiment. Gates had only 3052 men fit for duty, and most militia had never faced (or used) a bayonet. Gates had no battle plan, issued no comprehensible orders and quickly joined the routed militia in wild retreat. For his part, Cornwallis proved to be a superior leader who took advantage of the weakness and inexperience of Gates' army. Johann DeKalb, commanding the Continental Line, fought bravely until mortally wounded and captured. The remaining militia fled into North Carolina, and Gates had no viable army.(370)

With no apparent patriot army to slow his advance, Cornwallis sent his agents into North Carolina to prepare for its return to the fold, which had been his objective in moving north. But Cornwallis found few recruits for a tory supporting force. That he blamed on the tyranny of the whig government.
He hanged several men who had cross-enlisted, as examples to turncoats, but this did nothing to increase his popular support.(371) Cornwallis did little to take advantage of the situation. He did not resume his march into North Carolina until 8 September, and he paused at Waxhaw for another two weeks.

As Cornwallis moved toward Charlotte, militia rose to harass, if not to directly face, his army. Militia from Rowan and Mecklenburg counties moved out under the command of Colonel William L. Davidson and Major William Davie. Primarily, the militia reported on the movement of Cornwallis' troops, interrupted communications and captured stragglers and deserters. Gates drafted orders to avoid direct military confrontation, for his force was too small and too weak to accept full battle. Davie's militia, 100 strong, struck the left flank, slowing the enemy advance. On 20 September they captured a Tory outpost near Waxhaws. Davie's riflemen, acting as sharpshooters, so harassed Cornwallis' army that he was unable to occupy Charlotte until 25 September.(372) Few loyalists enlisted in his adjunct militia, and he found few willing to sell him badly needed food and supplies. He paused again to await a supply convoy from the south. Colonel John Cruger at Ninety-Six and Major Patrick Ferguson at Gilbert Town had the same experiences. Meanwhile, Cornwallis learned that patriot forces were on the verge of liberating Georgia, destroying one of his principal achievements.

During September 1780, a formidable force of backwoods militia gathered in North Carolina to oppose Cornwallis' army of the south. Colonel William Campbell (1745-1781) of Washington County, Virginia, brought 400 militiamen. Colonel Isaac Shelby (1750-1826) recruited 240 militia from Sullivan County, North Carolina. From Washington County, North Carolina, Colonel John Sevier brought the same number of militiamen. Burke and Rutherford counties, North Carolina, sent 160 militiamen under Colonel Charles McDowell. By the end of the month, Colonel Benjamin Cleveland and Major Joseph Winston had brought 350 militia from Wilkes and Surry Counties, North Carolina. One author described this militia force vividly: "The little army was mostly well armed with the deadly Deckard rifle, in the use of which every man was an expert."(373) By early October, the band of militia companies was joined by 270 militia under Colonel Lacy and by another group of 160 volunteer backwoodsmen. On the eve of the major confrontation with Major Patrick Ferguson's British army, they numbered at least 1840 militia and volunteers. The men, in truly democratic fashion, selected William Campbell as their commander. This force initially had in mind harassing Cornwallis' army rather than confronting its strong left wing.

Cornwallis withdrew to Winnsborough, between Ninety-Six and Camden. British intelligence, which at this point seemed to be good and reliable, reported a major gathering of American forces to the west. Ferguson dismissed these forces as mere untrained and undisciplined militia and looked forward to meeting and defeating them. Reportedly, Ferguson had released a captured American so that he could carry a message to the backwoodsmen. If they did not desist from their treason, he warned, "I will march my army over their mountains, hang their leaders and lay their country waste with fire and sword."(374) Whether the message emanated from Ferguson or not, it was accepted as true by Campbell's force.
The Americans hurried toward Ferguson at Gilbert Town, while Ferguson took up a position on King's Mountain, waiting to slaughter the country bumpkins. The Battle of King's Mountain of 7 October 1780 pitted Tory and patriot militias against one another in a fight among relatives and neighbors. The Tory force of 1100, led by Major Patrick Ferguson, encountered a patriot force of frontier militia, then numbering about 910.(375) The long hunters, armed with at least 600 rifles, decimated the tories' lines with deadly and accurate rifle fire. Ferguson represented Cornwallis' left wing, and it was destroyed by the American militia. Campbell did not await the arrival of the remainder of his van. He encircled Ferguson's troops, and his skilled riflemen rained deadly rifle fire upon the British lines. After Ferguson was mortally wounded, his army was thoroughly disheartened. The Americans lost 28 killed and 62 wounded while killing or capturing nearly the entire opposing force, 1105 in all. As the principal historian of that battle wrote,

The fatality of the sharpshooters at Kings Mountain almost surpassed belief. Rifleman took off rifleman with such exactness that they killed each other when taking sight, so instantaneous that their eyes remained, after they were dead, one shut and the other open, in the usual manner of marksmen when leveling at their object. . . . Two brothers, expert riflemen, were seen to present at each other, to fire and fall at the same instant . . . . At least four brothers, Preston Goforth on the Whig side, and John Goforth and two others in the Tory ranks, all participated in the battle and all were killed.(376)

This action may have turned the tide of the war in the south. It certainly purchased precious time for the American regular army to regroup and plan its campaign. Cornwallis, who had advanced beyond Charlotte on the road to Salisbury, decided after King's Mountain to retreat back into South Carolina and set up for winter at Winnsborough. His army was racked by disease and fatigue and was in no condition to confront a major American force. Most of all, Cornwallis had become discouraged that so few tories had come to his aid and had come to doubt the truth of the fundamental assumption that American loyalists were waiting in large numbers for their liberation. He thought then to continue his march northward and receive any loyalist support that might come his way. Sir Henry Clinton had sent Major-General Alexander Leslie with 3000 men to Portsmouth, Virginia, with orders to move south and join with Cornwallis as he marched northward. Cornwallis asked Leslie to attempt to move south and create a diversion that might free his army to move northward to join Leslie.

Events in South Carolina changed Cornwallis' mind. The patriot militia rose everywhere, harassed his communication and supply lines, captured isolated patrols and quieted the loyalists. These disruptions, combined with the defeat of Ferguson's force at King's Mountain, compelled Clinton to order Leslie to embark on ships and move to Charleston, South Carolina, to reinforce Cornwallis.

On 14 October 1780, Congress appointed the very capable General Nathanael Greene (1742-1786)(377) to relieve Horatio Gates (1727-1806)(378) as American commander in the south. He headed a force of about 2000 men, over half of which were militia. Additionally, there were various partisans, irregular volunteers, militia and guerrillas operating largely outside his direct command.
They served to harass the enemy, slow his progress, disrupt his supply lines and deplete his ranks. They forced Cornwallis to divert many men to guard his supplies and lines of communication. In December 1780, General Greene, too weak to confront Cornwallis' army directly, moved from North Carolina to Cheraw, South Carolina. As Greene wrote to Thomas Jefferson, "Our force is so far inferior, that every exertion in the State of Virginia is necessary. I have taken the liberty to write to Mr. [Patrick] Henry to collect 1400 or 1500 militia to aid us."(379) Working hard to create a substantial force, Greene made an unorthodox command decision: he divided his already outnumbered force into two commands. One division, commanded by Brigadier-General Daniel Morgan (1736-1802)(380) with 600 regulars of the Continental Line and General William Davidson's North Carolina militia, moved against the left flank of the British army. Realizing the shock value of partisan warfare, Greene ordered Daniel Morgan and his 800 riflemen to move west and join with Henry Lee to harass the British as guerrilla forces. Retaining command of the second force, Greene moved against the right flank. Cornwallis responded by sending Tarleton's loyalists against Morgan. On 17 January 1781 Morgan's militia confronted a loyalist force of about 1100 men ordered out by Cornwallis and commanded by Colonel Banastre Tarleton.(381) Morgan's force had grown to about 1000, with the addition of mountain militiamen, volunteers and frontier sharpshooters. Morgan positioned his men well and invited Tarleton to attack. Morgan defeated the Tory militia and regulars at the Battle of Cowpens, inflicting heavy casualties by using his skilled riflemen to great advantage.(382) Morgan successfully combined militia and regulars at Cowpens.(383) His great contribution lay in utilizing the militia properly, in open field combat against regulars. He positioned them so that they complemented, not substituted for, the Continental Line. Morgan placed a line of hand-picked men across the whole American front. The sharpshooting frontiersmen were ordered to advance 100 yards ahead of the main line. When the British force was about 50 yards away they were to fire and then retreat back to the main line. Approximately halfway between the skirmish line and the main American line Morgan placed 250 riflemen, mostly raw recruits from the Carolinas and Georgia. Morgan expected them to fire twice and then retire to the main line. A small but significant feature of Morgan's strategy was the order he gave to the main line of the Second Maryland Continental Line. He cautioned them that they must not misinterpret the planned withdrawal as a retreat, which might cause general panic among the men.(384) Tarleton escaped, but his much-diminished force was never again a major factor.(385) Morgan lost only 75 men, while Tarleton lost 329 men and 600 more were captured. Angered by this loss to undisciplined and unwashed militiamen, Cornwallis himself set out after Greene and Morgan, who had combined forces after Cowpens. The patriots retreated across the Dan River into Virginia before Cornwallis could catch them and force another major battle. Cornwallis had hoped to force one all-out battle with Greene and to defeat him as he had defeated Gates at Camden. He failed to confront Greene before the Americans crossed the river; because he had no boats and his supplies were running very low, Cornwallis had to abandon the chase. Cornwallis attempted one last ruse.
On 20 February 1781, he moved his army south, from just below the Virginia border, to Hillsborough, announcing that the mission had been successful and that North Carolina was officially liberated. Just three days after Cornwallis issued his proclamation, North Carolina and other militia destroyed Colonel John Pyle's 200 loyalists. Greene took advantage of the situation to replenish his army by adding more volunteers and militia, bringing his total strength to about 4400 men. On 15 March 1781, Nathanael Greene confronted Cornwallis at Guilford Court House with his mixed force of militia and regulars. The militia and a Continental line of fresh recruits broke, and Greene's army seemed to be on the verge of ruin. At that critical juncture, with the first two lines breached and with the fate of the Southern Department in jeopardy, the Maryland and detached Delaware regulars plugged the gap and held the line. Cornwallis ordered his artillery to fire indiscriminately on the mixed mass of troops, but still both forces stood their ground. Greene then withdrew the American army to fight again another day. Cornwallis lost one-fourth of his army in winning the day, but still had not defeated the southern rebel army. Following this battle, Greene's force was superior in numbers to that of Cornwallis. It was to be the last major confrontation between Cornwallis and Greene.(386) The militia could not, or at least would not, stand against artillery fire and bayonet charges of seasoned British regulars. Greene had to find another role for his militia. Weakened by the loss of 100 killed and 400 wounded, Cornwallis retreated to Wilmington. He then decided to move into Virginia to join the British force of the Chesapeake commanded by General William Phillips. Greene gave battle at Hobkirk's Hill on 25 April, but lost; laid siege to Ninety-Six from 22 May to 19 June; and lost again at Eutaw Springs on 8 September. No British victory was decisive, for Greene knew when to withdraw, and these actions bled the dwindling British army. Throughout this final campaign in North Carolina, Greene used his militia most effectively. Militia and regular troops commanded by Francis Marion (1732-1795),(387) Andrew Pickens (1739-1817) and Thomas Sumter (1734-1832) managed to capture a number of seemingly minor British outposts. Marion's ranging militia units tied up numerous British patrols with their elusive tactics, diverting British troops so that the patriots had time to regroup after the Battle of Camden. His militia also ambushed a train of British regulars and tories at Horse Creek and killed 22 British troops and captured several Tory militiamen. More important, his command rescued 150 regulars of the First Maryland Continental Line who had been captured at Guilford.(388) Again, it was the cumulative effect of massive militia action that served to wear down the British army. At the outbreak of the war, Marion had initially served in the South Carolina Provincial Congress, but decided he could better serve the patriot cause by accepting militia command. Known widely as the "Swamp Fox," Marion and his volunteer irregulars almost single-handedly kept the patriot cause alive in the South in 1780-81. With many loyalists active in the area,(389) Marion roamed the coastal marshes, attacking isolated British and Tory commands and patrols and disrupting communications and supplies. In 1781 he commanded the militia at the Battle of Eutaw Springs.
After the war he returned to politics, serving in the state constitutional convention and in the state legislature.(390) Cornwallis turned south after making one final call, on 18 March, for the loyalists to rise to his aid. As was to be expected, no tory militia came to his aid, so Cornwallis left North Carolina, having accomplished nothing. By the fall, British control had dwindled to the immediate Charleston area. Fundamental British strategy underwent change, as it was obvious that the countryside was far more hostile than hospitable to the interlopers. Cornwallis' proclamation of 18 March was the last attempt the British command made to rally the tories. As he left North Carolina, Cornwallis found South Carolinians no more helpful than their brethren to the north, and his army suffered as he received neither aid nor comfort in his retreat.(391) Nineteenth-century historian Francis Vinton Greene(392) argued that General Nathanael Greene had failed to crush the British forces under General Cornwallis because the militia would not fight in the campaign in Virginia and the Carolinas, 1780-81. He argued that had Nathanael Greene had several regiments of regular troops, such as Colonel Henry Lee's Legion or the First Maryland Continentals, he would have crushed the British in one great all-out battle. A more recent author has argued that "under the leadership of Nathanael Greene and Daniel Morgan, the service of militia was essential to the success of the campaign against Cornwallis, a campaign which could easily have resulted in disaster but for the action of these irregular troops." The value of the militiaman must be understood in his proper function, and not in cases where commanders insisted on setting him "to military tasks for which he was not trained, equipped or psychologically prepared." One clear case of misuse involved placing him before bayonet charges, which, he argued, makes no more sense than Braddock's insistence that his army maintain proper firing positions twenty years earlier at the Battle of the Wilderness. The militia accomplished one main mission, and that was to divert the British from their bases in South Carolina, altering their course northward into North Carolina, where the militia were able to harass them almost at will. The militia cowed the loyalists who were undecided on what course to pursue. They struck at Cornwallis' foragers and scouts.(393) Two recent historians have blamed much of the failure of Cornwallis's mission on his decision to abandon the Carolinas and move northward into Virginia. They argued that had he remained among the numerous tories in the Carolinas, he might have met far greater success.(394) This may be unfair to Cornwallis, for he certainly tried, but failed, to attract loyalist support during his occupation of the Carolinas. After resting at Wilmington for two weeks, Cornwallis suddenly made his last fateful decision to gather the remnants of his army about him, abandon the Carolinas completely and, without any orders or authority to do so, move boldly into Virginia. No loyal regime had been established, for which enormous credit must be given to the activities of the southern militia. Their constant harassment of the British and loyalist forces and their omnipresence in the hinterland precluded real recruitment and placement of British troops. The North Carolina militia performed its functions with great efficiency and success.
It was generally among the best in the nation and was especially effective in the early Amerindian wars and during the American Revolution. North Carolina's borders were among the most secure in the nation, and much credit for that can go to the militia. It served relatively well as a reservoir to supply troops to the Continental Line. Settlement in South Carolina centered on Charles Town, which was founded in April 1670 by a party of English settlers under the leadership of Joseph West, who settled at Albemarle Point. Some 140 settlers arrived at Albemarle Point and there threw up an earthen and log structure to serve as a fort and mounted it with a dozen cannon imported from England. As soon as their supply ships departed to bring additional settlers and fresh supplies, the Spanish from St. Augustine appeared offshore. Simultaneously, some Amerindians, Spanish allies, appeared in the nearby woods, but the test-firing of the cannon frightened them away. The little fort managed to hold out against the Spanish ships until their own ships reappeared.(395) In 1680 the settlers moved to the junction of the Ashley and Cooper Rivers, leaving Old Charles Town behind. The colony soon gathered 5000 whites and a much larger number of Amerindians and slaves. Few settlers made inroads into the interior in the first decades of settlement for fear of the Spanish to the south and Amerindians to the west. Spanish priests were especially active among the Amerindians, and until 1670 the Spanish claimed territory as far north as Virginia. Consequently, social cohesiveness within the ruling strata was greater in this colony than in others because of common interest bonds formed between planters of large estates and merchants of the town. Between 1671 and 1674 several additional groups of colonists arrived, including those led by Sir John Yeamans from Barbados and another sizable band from New York. The first governor, William Sayle, died on 4 March 1671, and was succeeded by Joseph West. Among West's achievements was the summoning of the first session of the legislature on 25 August 1671. Yeamans claimed precedence because he owned considerably more land than West and was, indeed, the colony's only cacique.(396) Yeamans' commission arrived in April 1672, but West replaced him in 1674. Soon after South Carolina's first settlers stepped ashore, they organized a militia for their defense. Their action was unavoidable: unfriendly Spanish outposts lay close to their settlement, and Indians surrounded it. At first, the militia simply protected the settlers from invasion and Indians, but in 1721, it was charged with the administration of the slave patrol as well. Between its inception and the beginning of Reconstruction in 1868, the militia changed little. Its numbers swelled, and its organization became more elaborate, but it remained what it had always been: an institution requiring the registration of all able-bodied male citizens; an institution that administered limited universal military training; and an institution that controlled insurrectionists, outlaws, and the slave and Indian populations. Its men were neither equipped nor trained to wage full-scale war, and the militia behaved poorly when it was misused in that way. The militiamen were paid only if the government called them for duty. The governors commonly mustered them to suppress insurrection, to fight Amerindians and to defend against invasion. They could be called out only for fixed periods of time, and usually only for service within the colony.
During the colonial period the government paid volunteers -- men on temporary leave from the militia, men recruited from the other colonies, and transients -- to fight its wars. Just as Massachusetts bore the brunt of French attacks in the north, so South Carolina was the buffer against Amerindian, Spanish and, to a degree, French, ambitions from the south.(397) During the first phase of South Carolina history, that is, during the Proprietary period and extending a few years into the Royal period, the South Carolina militia was the sole protection in the south of the English North American colonies. The militia proved to be a most effective defense force. Its importance declined significantly with the establishment of Georgia early in the Royal period. By 1740 the British government had relieved the pressure on the South Carolina militia by placing a company of regular troops in Georgia to contain Spanish ambitions, buttressing them with some white Georgia militiamen. In 1763 the Peace of Paris delivered Florida into English hands. After that cession the South Carolina militia's role as guardian of the southern gate ended forever.(398) The Charter of Carolina of 1663 required that the proprietors build whatever fortifications were necessary for the protection of the settlers and to furnish them with "ordnance, powder, shot, armory and other weapons, ammunition [and] habilements of war, both offensive and defensive, as shall be thought fit and convenient for the safety and welfare of said province." The proprietors were to create a militia and appoint civil and military officers. Because the colony was "in so remote a country and scituate among so many barbarous nations," to say nothing of pirates and Amerindians, the crown ordered the proprietors "to levy, muster and train all sorts of men, of whatever condition, or wheresoever born, to make war and pursue the enemies."(399) The second charter, issued just two years later, contained much the same instructions.(400) The first law in South Carolina dealing with the subject of the militia was incorporated into the Fundamental Constitutions of the colony. That law placed control of the militia in the hands of a Constable's Court, which was composed of one of the proprietors, six councilors called marshals, and twelve lieutenant-generals. It was intended to direct all martial exercises. As it was, the laws drawn in England proved to be a practical impossibility after the first settlers arrived in South Carolina. The first actual control of the militia was vested in the Governor and Grand Council. The first order of this body was to enroll and enumerate all Caucasian inhabitants, free or servants, between the ages of sixteen and sixty years. The colony was divided and two militia companies were established.(401) In 1671 South Carolina enacted a new militia law. It reaffirmed the enrollment of all able-bodied males between ages 16 and 60. All such persons, excepting only members of the Grand Council, were to exercise under arms on a regular basis. Those who failed to attend muster were to be punished at the discretion of the Grand Council, which usually translated to a fine of about five shillings. Poor men who could not pay the fine, usually newly arrived settlers and indented servants, could be subjected to physical punishment. Physical punishment consisted of running the gauntlet or riding the wooden horse. Militiamen were required to furnish their own guns, although the government might provide arms to poor men. Masters supplied arms to their indented servants.
Arms varied considerably in quality, ranging from imported muskets to common fowling pieces. In addition to the basic arm, men were to provide a cover for the gunlock, a cartridge box with 12 rounds of ammunition, a powder horn and utility pouch, a priming wire, and a sword, bayonet or hatchet. Most men wore a belt over the left shoulder to support the cartridge box. The first law said nothing of blacks or slaves. A system of providing notification in the case of attack was established, following surprise attacks by Westoe and Kussoe Indians.(402) The first of several wars with the native tribes occurred in 1671, when authorities at Charles Town accused the local tribe, known as the Kusso, of conspiring with the Spanish. There is little documentary evidence extant that sheds light on this little war. The tribe was evidently quickly subdued since the war did not extend beyond 1671. Those natives who did not perish in the war were enslaved, marking the beginning of the experiment of using the Amerindians as involuntary servants. Additional natives were added in 1680 as a result of the abortive Westo revolt. Dr. Henry Woodward had been appointed agent to trade with the Westo tribe for hides, peltries and slaves, and the chiefs had objected to the inequitable terms of that trade and attacked the traders. Following a few engagements in April 1680, the South Carolina militia was successful in subduing all tribes along the Savannah River [now in Georgia]. Thereafter, political authorities put forth considerable effort to maintain friendly relations with the neighboring Amerindian tribes. European encroachment on tribal lands was a principal cause of inter-racial friction, so in June 1682 the Lords Proprietors issued an order to "forbid any person to take up land within two miles, on the same side of a river, of an Indian settlement." Those who did take up lands near Indian settlements were to help the Indians "fence their corn that no damage be done by the hogs and cattle of the English."(403) The Lords Proprietors considered the Anglo-Indian society as a whole, but by 1690 real political authority in the colony had passed from the old guard and into the hands of a group of merchants who were primarily interested in commercial profits to be earned in the fur trade. Thus, in regard to Amerindian relations, cordiality became of paramount importance, for occasions of friction greatly decreased the profitability of the Indian trade. South Carolina vacillated on the organization of its militia. After creating two companies initially, it had formed six companies by 1672. In 1675 the number of companies was reduced to three. There were militia laws passed in 1675 and 1685, but copies of the texts are no longer available.(404) In 1710 there were two militia regiments of foot, divided into sixteen companies of about fifty men each, which enrolled a total of 950 whites. The governor's own guard enrolled forty select militiamen. There was an equal number of blacks, primarily slaves, since each company captain was to enlist and train one black man for each white militiaman. Few blacks were given firearms, but most were trained to use the lance or pike.(405) The scarcity of arms in the colony and the economy of the proprietors caused the colony to take radical steps in times of emergency.
The legislature allowed the impressment of arms, military supplies, gunpowder and other military necessities to meet the Spanish threat in 1685.(406) It also created a public magazine in which to store the colony's supply of gunpowder.(407) The law required that all free inhabitants between the ages of sixteen and sixty were to be enumerated and their names recorded on the militia enrollment lists. The governor served as commander-in-chief of all the provincial armed forces. He signed officers' commissions, issued warrants for failures to perform militia service, created courts martial, authorized the collection of fines, and impressed food and supplies in times of emergency. With the consent of the council he could proclaim martial law. He appointed regimental colonels and company captains and announced the dates of regimental musters. Company captains appointed sergeants who made arrests for violations of the militia law and inspected the men to make certain they had the proper arms and equipment.(408) In 1677 South Carolina's time of troubles occurred. A group of citizens claimed that Governor John Jenkins had become a dictator and acted against the best interests of the people and proprietors. Calling itself the proprietary faction, and headed by Thomas Miller, the new party combined the offices of governor and customs collector. The so-called anti-proprietary faction captured Miller, using militia loyal to the governor, and imprisoned him on a charge of high treason. Miller escaped to England and laid his case before the Privy Council. The Earl of Shaftesbury defended Miller and sought to mediate the matter. Miller and co-leader John Culpeper were acquitted, and the Privy Council ruled that both Miller and Jenkins had exceeded their respective legal authority. In 1685 the Grand Council received several petitions from some newly arrived settlers in which they complained that they had been compelled to serve in the militia before their farms had been cleared and made ready for planting. Their land was "fallen" and hard to work and required their full attention until the first crops were harvested. The proprietors in England agreed, suggesting to the council that settlers should be exempted from militia duty "for the first year or two."(409) The Spanish attacked the southern border of the colony in 1686. Said to number about 153, the marauders consisted of persons of mixed racial heritage, Spanish regulars, and some allied Amerindians. They destroyed the Scottish settlement at Port Royal and plundered outlying plantations along the North Edisto River. The settlers appealed to the Grand Council which, in turn, appealed to the proprietors in England. The Council resolved to attack the Spanish in Florida, appropriating £500 for an expedition. The proprietors thought it unwise to provoke the Spanish and offered the opinion that perhaps the raiders were pirates operating illegally under Spanish colors. The proprietors reminded the council that the colony's charter did not permit it to attack enemies outside its borders except in hot pursuit of raiders. They suggested that Governor Joseph Morton inquire of the Spanish at Havana and St. Augustine whether they had authorized the raid. Were South Carolina to attack the Spanish in Florida, retaliation would certainly follow, and England was not in a position to enter into war at that time.
After the council decided to accept the will of the proprietors, the proprietors informed the colony that, had it gone ahead with the planned invasion of Florida, Governor Morton would have been held responsible and might have forfeited his life.(410) Slave patrols were increased dramatically following several slave revolts. Militia slave patrols had been established, under law separate from the other militia acts, as early as 1690.(411) Each militia captain, under the act of 1690, was to create and, when needed, deploy a slave control and runaway slave hunting unit which would be ready to act promptly upon notification from proper authorities.(412) The General Assembly passed the second important militia law in 1696. It provided for the creation of officers at all grades and for the enumeration of all male inhabitants between the established ages. Each man was required to provide his own firelock and this additional equipment: a gunlock cover, a cartridge box holding a minimum of 20 rounds of ammunition, a gun belt, a worm for removal of a ball, a wire for cleaning the touch-hole, and either a sword, bayonet or tomahawk. A freeman who could not furnish this equipment could be indentured for six months to another person who would buy his equipment. Freemen who owned indented servants had to buy the same equipment for their servants. The act also provided for the creation of cavalry, with the "gentlemen" who could provide their own horses, tack and appropriate equipment qualifying for such service.(413) Failure to attend muster could result in a fine of £0/2/6 for each unexcused absence. Failure to pay a fine could result in the seizure of one's property and/or confinement in a debtor's prison. Those who owned servants were responsible for the appearance of their charges under the same penalty. A man who moved had either to continue to attend muster with his usual unit or to obtain a certificate of removal, showing that he had signed up with the proper unit of his new area. Local companies were to drill once in a two month period, "and no oftener." A general regimental muster was held annually, and failure to appear at that time would result in a fine of 20 shillings. Local fines were used to offset expenses of local companies, and fines paid for absences at a regimental muster went into the colonial treasury. The act also created the interesting principle that no militiaman could be arrested while going to, attending, or returning from a militia muster. The protection extended for a full day after a militiaman returned to his home. Civil officers who violated that principle could be fined £5, and any civil papers or warrants served in violation of this principle were nullified. Members of the Society of Friends were exempted only if they paid the militia fines for non-attendance.(414) In 1690 the legislature also created a militia watch system on Sullivan's Island.(415) In 1698 the legislature created an Act for Settling a Watch in Charles Town and for Preventing of Fires. It required that town officials make a list of the names of all men over 16 and under 60 to use as a basis of militia, slave patrol and watch duty. The constables of the town were to summon six men at a time "well equipped with arms and ammunition as the Act of Militia directs, to keep watch" from 8 P.M. to 6 A.M. in the winter and 9 P.M. to 4 A.M. in the summer.
The patrols were also to detain and arrest slave and free blacks "who pilfer, steal and play rogue."(416) South Carolina's first elected Indian agent, Thomas Nairn, wrote a description of the colony in 1710 in which he provided an insight into the colonial militia. In England, Nairn wrote, tradesmen thought that militia service was beneath their status and that they should not be bothered with such mundane intrusions on their time. But in South Carolina every man from the governor down to the poorest indented servant thought it his duty to prepare himself as fully as possible for militia duty. British troops excelled at coordinated movements, but the militiamen were much better at making aimed shots, especially when equipped with rifles. He attributed this skill to their habit of hunting game in the forest. Even trusted slaves were commonly enrolled, and, despite provisions of the law to the contrary, occasionally they were armed. In his work as Indian Commissioner Nairn observed British officers working with and equipping allied Amerindians. If the colony were invaded, the British officers would "draw the warriors down to the Sea Coast upon the first news of an Alarm." The colony liked to use its Amerindian allies because they cost little. He described the natives under his care as "hardy, active, and good Marksmen, excellent at an Ambushcade."(417) During the entire colonial period South Carolina used friendly Amerindians as auxiliaries to the militia. They proved to be especially well adapted to tracking down runaway slaves and indentured servants, reporting enemy activity, and scouting for militia operating in the backwoods. The natives liked fancy clothing so, as the bulk of their pay, they received scarlet and bright green waistcoats, ruffled shirts, and bright white breeches. The more costly and dangerous gifts were swords and cutlasses, guns, gunpowder, lead and bullets.(418) The administrations of Governors Archdale and Blake were generally peaceful and prosperous. Their ambitious successor, James Moore, who came to office in 1700, adopted an aggressive policy toward both the Spaniards and the Indians. The rupture of relations between England and Spain on the continent led to a Carolina invasion of Florida. The invasion was a disaster. Nevertheless, Moore followed the ill-fated invasion with a somewhat more successful campaign against the Indians. Between 1712 and 1717 Moore undertook two major Indian campaigns, against the Tuscaroras and against the Yamassees. While the outcomes of the battles were usually favorable to the colonists, the continued presence of the Spanish on the southern border presented a constant danger. Four hundred blacks, mostly slaves, fought alongside whites in the Yamassee Indian War, so most slave owners supported the law which mustered and trained men "of what condition or wheresoever born."(419) The latter measure was to prove unwise later. On 30 August 1720 the king sent instructions to Francis Nicholson, governor of South Carolina, regarding the militia. "You shall take care that all planters and Christian servants be well and fitly provided with arms," the monarch wrote, and "that they be listed under good officers." The militia was to be mustered and trained "whereby they may be in a better readiness for the defence of our said province."
He warned that the frequency and intensity of the militia training must not constitute "an unnecessary impediment to the affairs of the inhabitants."(420) By 1703 the colony had enrolled 950 militiamen and a cavalry troop of 40 men. In that year the South Carolina legislature enacted a comprehensive militia law because "the defense of any people, under God, consists in their knowledge of military discipline." There were very few changes made in the subsequent militia laws. All free, able-bodied white men between ages 16 and 60 were liable to militia service. This age requirement was not changed until 1782. Exemptions from service were made for members of the council, legislature, clerks thereof, various other colonial officers, sheriffs, justices of the peace, school-masters, coroners, river pilots and their assistants, transients and those who had not yet resided in the colony for two months. In case of an alarm even those otherwise exempted might be required to serve in the militia, in which case they also had to provide their own arms. The law allowed the formation of mounted units, with subsequent exemption from regular militia duty for those so serving. Cavalry men had to supply their own horses, arms and equipment. This law was not specific as to the description of the horses, arms and equipment, although later laws gave more detailed descriptions of what was required. The legislature had authorized the enlistment of slaves before 1703, for the militia law of that date assumes that slaves were to be enrolled as in the past. Beginning in 1703, it was lawful for the owner of slaves, when faced by actual invasion, "to arme and equip any slave or slaves, with such armes and ammunition as any other person" was issued. No corps was to have more slaves than one-third of its number. A slave who fought bravely was to be rewarded. A slave who killed or captured an enemy while in actual service was to be given his freedom, with the public treasury compensating the owner. Whereas, it is necessary for the safety of this Colony in case of actual invasions, to have the assistance of our trusty slaved to assist us against our enemies, and it being reasonable that the said slaves should be rewarded for the good service that they may do us, Be it therefore enacted . . . That if any slave shall, in actual invasion, kill or take one or more of our enemies, and the same shall prove by any white person to be done by him, shall for his reward, at the charge of the publick, have and enjoy his freedom for such his taking or killing as afore said; and the master or owner of such slave shall be paid and satisfied by the publick, at such rates and prices as three freeholders of the neighborhood who well know the said slave, being nominated and appointed by the Right Honourable Governor, shall award, on their oaths; and if any of the said slaves happen to be killed in actual service of this Province by the Enemy, then the master or owner shall be paid and satisfied for him in such matter and forme as is before appointed to owners whose negroes are sett free.(421) Each man had to provide his own arms, which, in this act, specifically meant "a good sufficient gun, well fixed, a good cover for their lock, with at least 20 cartridges of good powder and ball, one good belt or girdle, one ball of wax sticking at the end of the cartridge box, to defend the arms in the rain, one worm, one wire [priming wire], and 4 good spare flints, also a sword, bayonet or hatchet."
These specifications changed very little over the decades because the initial law well covered the equipment of the times and few improvements were made over the next eighty years. South Carolina was the only colony to require the lock cover, commonly called a cow's knee because that was the source of the material for the cover, and the ball of wax. The militia units had to muster and train once every two months, with regimental musters being held from time to time. Officers had to supply themselves with a half-pike "and have always, upon the right or left flank, when on duty or in service, a negro or Indian, or a white boy, not exceeding 16 years of age, who shall, for his master's service, carry such arms and accoutrements as other persons are appointed to appear with." Masters had to provide the same arms, ammunition and accoutrements for all servants who were eligible for militia duty, although these remained the property of the master. When a servant had completed his term of indenture he had to provide the same arms, accoutrements and ammunition within a space of one year. Those who had just moved into the colony also were granted twelve months to acquire the requisite arms and supplies. The grace period was granted on the assumption that newly freed servants and some immigrants would be too poor to provide their own arms immediately. Failure of masters and servants to provide the required equipment was penalized by a fine of ten shillings. Unlike some colonies which allowed arming poor citizens from the public treasury, usually in exchange for performance of some civic duties, South Carolina merely set the requirement and assumed that even its poorest citizens would comply with the law in some way or another. In times of emergency, the law allowed the impressment of supplies, vessels, wagons, provisions, supplies of war, ammunition and gunpowder and such other items as the militia might require. If ships of any description were required, their pilots and sailors could be impressed. When the militia was called into service, those who sold liquor were especially enjoined against serving intoxicants to militiamen. Men might be drafted out of their militia units to serve on seawatch duty, although those assigned to this responsibility were paid.(422) The enlistment of slaves in the militia was, to say the least, a very controversial matter. Nonetheless, the legislature thought that it was imperative to swell the ranks of militia available for emergencies. The legislature enacted the law of 23 December 1703, which applied only to the City of Charleston and provided that slaves who, in war, killed or captured an enemy were to be freed, with the master compensated from the public treasury.(423) In 1704 the legislature authorized the enlistment of "negroes, slaves, mulattoes and Indian slaves" into the militia. Militia officers were to ascertain which of the foregoing were trustworthy, and those should be enumerated, trained, mustered, marched and disciplined along with free whites and indentured servants. Any master who thought that one or more of his slaves should be exempted could appear before the militia officers to explain his opinion. The master was required to arm his best slaves with lance, sword, gun or tomahawk at his expense.
Failure to comply with the law could result in a fine of £5 for each offense.(424) Later, those slaves entrusted with arms were given their weapons at public expense, some of which came from militia fines.(425) The slave containment act was strengthened in 1704 when militia units were ordered to patrol the boundaries of their district on a regular basis, with special attention to be given to the apprehension of runaway slaves. Indeed, all slaves found to be away from their owners' plantations were subject to militia arrest. Each militia patrolman had to furnish his own horse and equipment. Officers of the slave patrol received £50 per annum and enlisted men were paid £25 a year. Militia units were only very rarely deployed as full units, except in cases of grave emergency, and in any event never outside their own counties. Volunteers and the usual specially trained Rangers were drawn from all militia companies, using the militia as a reservoir for recruitment. During Queen Anne's War [or War of Spanish Succession, 1702-1713], South Carolina Governor James Moore decided to order the militia to leave the colony and attack the Spanish at St. Augustine, Florida. He first tried to recruit rangers, but they refused to leave the colony. He issued a call for volunteers from among the militia, and there was only a minimal response. He then ordered the regular militia to march, but it too refused to go, arguing that it was not required to leave its county of origin except in case of grave emergency or when martial law was in force. Moore called the legislature into session, but it refused to concur in Moore's judgment and pass the enabling legislation he sought. Thereafter it was universally held that it was unlawful to march the militia out of the colony. Any militia so deployed had to be volunteers selected from among the reservoir the militia offered.(426) During Queen Anne's War, faced with the threat of invasion and Indian war in 1703, South Carolina authorized the arming of specially selected slaves and free blacks. They would be used only as a last resort and only if the regular militia proved to be insufficient to handle the emergency.(427) In 1704 the legislature ordered masters to draw up a list of "reliable" slaves and provide it to local militia officials who would then summon such slaves as might be needed. If a slave was used, wounded or killed, his master would be compensated out of the public treasury. A master who refused to allow his "trusty" slaves to muster, or to make certain that they did, could be fined £5.(428) In 1708 the legislature again considered emergency measures and allowed that trusted slaves in times of grave emergency might be armed from the public stores with a lance or hatchet, and, if absolutely necessary, with a gun. A slave who killed or captured an enemy soldier would be freed. A slave rendered incapable of work after being wounded in battle would be maintained at public expense.(429) In the later years of the seventeenth century and the earliest years of the eighteenth century, South Carolina thought itself threatened by an incursion of wild beasts. Initially, bounties applied only to citizens, but that proved to be insufficient to contain the vicious beasts of prey. So the legislature authorized slaves, Amerindians and militia to kill any "wolfe, tyger or beare" which marauded around settlements.
The legislature offered bounties of up to ten shillings for each large animal killed.(430) As Queen Anne's War dragged on, the British home and colonial authorities decided to put some pressure on the Spanish enemy in Florida as they had on the French in Canada. In October 1702, Governor James Moore of Carolina, a planter and adventurer, gathered 500 militia and 300 Amerindian allies, mostly Yamasees, and sailed southward from Port Royal. Their goal was to take Fort San Marcos at St. Augustine before it could be strengthened with French forces. As an inducement to volunteer, the militiamen were promised plunder. The squadron turned in at St. Johns River, and the force captured several outposts on the approach to St. Augustine. It ransacked deserted towns, burning many houses, but the moated stone fort containing the garrison and 1400 inhabitants was more than the Carolinians bargained for. Moore sent to Jamaica for cannon, but they failed to arrive. Governor Zuñiga withstood a siege of seven weeks, and when two Spanish warships appeared on Christmas day, Moore decided to retreat to his relief ships at St. Johns River. The expedition cost £8500, for which Carolina issued paper currency. A year later, having lost the governorship, Colonel Moore proposed a second expedition, against the Apalachee settlements west of St. Augustine. The Assembly gave reluctant approval but specified that the force must pay its own way. Moore could enlist only fifty militia, but he raised about a thousand Indians and after a long march won a pitched battle. Although he did not attack Fort San Luis [Tallahassee], he broke up thirteen dependent missions, which were never restored, and carried off nearly a thousand mission Indians as slaves. Another 1300 were resettled along the Savannah River as a buffer. Moore lost only four whites and fifteen Indians, and the expedition more than paid for itself in booty and slaves. Florida's jealousy of the nearby French changed to alliance against a common enemy, the English. France in turn saw Florida as Louisiana's bastion. There would be a day of reckoning with Carolina. In the summer of 1706 the war came to life again in the South. Iberville had left Mobile for the West Indies and had already captured the islands of Nevis and St. Kitts in April. Before he could extend his conquests he died of fever. Spain and France were devising measures to revenge themselves for the attacks on St. Augustine and Apalachee. Five French privateers were engaged to carry Spanish troops from Havana and St. Augustine to attack Charleston, South Carolina. Anticipating such a raid, Charleston had called out militia and built stronger fortifications. Even so, the town might have been sacked by a more determined enemy under a better commander. The Spaniards were poorly led, their landing parties were repulsed, and two hundred and thirty of them were taken prisoner. Then Colonel William Rhett, with an improvised squadron, drove off the French ships. Only with eventual help from North Carolina and Virginia did the South Carolina militia under Governor Nathaniel Johnson repulse the Spanish filibustering expedition in 1706.(431) Aroused and encouraged, the Carolinians decided on an offensive against the centers of Spanish and French power. The colony raised several hundred Talapoosas from Alabama to join with militia volunteers to attack Pensacola in the summer of 1707. The attackers killed eleven Spaniards and captured fifteen, but failed to take Fort San Carlos.
In November Pensacola was hit again and a siege begun. It did not prosper, and the invaders were ready to give up when Bienville brought relief from Mobile to the garrison and hastened their decision. South Carolina also had its martial eye on Mobile, but was unable to rouse the neighboring Indians or unite its own leaders behind the enterprise. On both sides the southern offensive expired. In 1707 the legislature renewed the militia act of 1703 with few changes to its substance. For those who were too poor to provide their own arms as the law had required, a new tack was taken. The officers could "put out" persons who failed to supply their own arms "as servants, not exceeding six months, unto some fitting person (himself not finding one to work with), for so long a time as they shall think he may [require to] earn one sufficient gun, ammunition and accoutrements, as directed." While a servant, such a poor person would use his master's equipment, and the law seems to have allowed the master to pay the servant by exchanging arms for services. In times of actual service, the militia law also allowed for corporal punishment, with forfeiture of life or limb only excepted, for disobedience to officers, failure to show for duty, cowardice before the enemy, rebellion or insurrection. General officers had the responsibility for discipline and for administering punishment. The right of appeal from company discipline to the regimental commander was guaranteed by the law.(432) As in other colonies, especially in the early colonial years, the South Carolina militia was widely dispersed, following patterns of settlement. Only Charles Town [Charleston] could truly be said to have possessed an urban militia. This urban militia was small and, on several occasions, nearly collapsed before the scattered rural militia was able to muster. By 1712 South Carolina had created a substantial militia, consisting of all able-bodied men between the ages of sixteen and sixty years. The militia was to be used, on orders of the London Board of Trade and Lords Proprietors, to suppress piracy and smuggling, restrain the slaves and guard against slave revolts as well as to contain the Spanish.(433) Governor James Moore was much disposed to allow slaves to be armed, thus augmenting the very meager white militia of the colony, for Moore believed that the French and Spanish and their Amerindian allies were a far greater threat to the colony than the slaves. In 1708 and again in 1719 the legislature ordered the principal militia officers to "form and compleat a list of such negroes, mulattoes, mustees and Indian slaves, as they, or any two of them, shall judge serviceable for the purpose. . . ." All three acts required that, upon receiving an alarm, the slave militiamen were to report immediately to the rendezvous, as with the free militia, there to be armed with "a good lance, hatchett or gun" from the public stores. Masters might supply the slaves with arms, and if such privately owned arms were lost, captured or damaged, the public was to replace the arm or bear the cost. Masters and overseers who failed to supply slaves in a timely manner were to be fined £20. Officers who refused to enlist any slaves were to be fined £5. The public treasury would pay a fair market value to the owners of slaves killed in militia service or wounded so that they could not again serve their masters.(434) The neglect of the militia in neighboring North Carolina cost that colony dearly during the Tuscarora Indian War of 1711-12.
Only the timely arrival of militia forces from South Carolina and Virginia saved the colony from annihilation. South Carolina sent Captain John Barnwell with several militia companies and a large number of Amerindian allies from Cape Fear. Barnwell knew that he had to depend on the Indians to swell his numbers, and he knew well how to play on the ancient tribal animosities, but he was dismayed at the savage behavior of these allies. He complained to the legislature that he had to give "them ammunition & pay them . . . for every scalp, otherwise they will not kill many of the enemy."(435) The colony provided protection against slave insurrections in three ways. First, it legislated limitations and restrictions which were especially designed to prevent slaves from congregating and thus planning and executing revolution. Second, by importing indentured servants it provided a higher proportion of white men to black slaves than would have been otherwise possible. Larger numbers of able-bodied militiamen translated to a trained and ready force sufficient to defeat slave conspiracies or seditions. As a remedy, in 1711 Governor Gibbes suggested the importation of whites at the public charge. Bills "for the better security of the Inhabitants of this Province against the insurrections and other wicked attempts of negroes and other Slaves" (436) as well as those "for the better securing this Province from Negro insurrections & encouraging of poor people by employing them in Plantations"(437) were regularly proposed by both the governor and the legislature. Third, the government attempted to limit the number of slaves imported into the province. In 1711 Governor Gibbes asked the House of Assembly to "consider the legal quantities of negroes that are daily brought into this Governt., and the small number of whites that comes amongst us, and how many are lately dead, or gone off. How insolent and mischievous the negroes are become, and to consider the Negro Act doth not reach up to some of the crimes they have lately been guilty of."(438) No person after the ratification of the 1712 act "Shall Settle or manage any Plantation, Cowpen or Stock that Shall be Six Miles distant from his usual Place of abode and where in Six Negroes or Slaves Shall be Imployed without One or more White Person Liveing and Resideing upon the Same Plantation, upon Penalty or Forfeiture of Forty Shillings for each Month so Offending."(439) In 1712 South Carolina created a comprehensive code covering all aspects of slave life. One provision of this act was that masters every fortnight were to search all slave quarters, and all other dwellings on their premises occupied by persons of color, for weapons of all sorts, including guns, knives, swords and any other "mischievous" weapons.(440) An act of 7 June 1712 was designed to increase the importation of indentured servants either directly or indirectly through the full support of the colonial government. The first article of the act empowered the "publick Receiver for the time being . . . dureing the Term of Four Years, after the Ratification of this Act, [to] pay out of the publick Treasury of this Province, the Sum of Fourteen Pounds Current Money to the Owners or Importers of each healthy Male British Servant, betwixt the Age of Twelve and Thirty Years, as soon as the Said Servant or Servants are assigned over into his Hands by him or them to whom they belong."
The second article authorized the Public Receiver to dispose of these servants to the inhabitants of the Province "as much to the publick Advantage as he can, either for Money paid in Hand, or for Bonds payable in Four Months," and drawing interest at ten per cent thereafter. Article four provided that "in Case it so happen that there remains on any Occasions some Servants, whom the Receiver can neither dispose of in any reasonable time, nor employ to the Benefit of the Publick, he shall with the Approbation of Mr. William Gibbon, Mr. Andrew Allen and Mr. Benjamin Godin, or any two of them, sett these Servants Free, taking their own Bonds, or as good Security as he can get, for the Payment of the Sum or Sums of Money, as the Publick has expended in their behalf." The sixth article prohibited the importation of any who "were ever in any Prison or Gaol, or publickly stigmatized for any Matter criminal by the Laws of Great Britain."(441) The political authorities felt that those who had had experience fighting in the various European wars would make good militiamen for the American frontier. It did not make any difference which side they had fought on in Europe, for they expected that in America all Europeans would stand side by side against the Amerindians. The most exposed colonies therefore constituted the most suitable place to settle the disbanded soldiery of Europe.(442) In May 1715, upon the recommendation of Governor Charles Craven and John Lord Carteret, the legislature passed an arms confiscation act.(443) It allowed the government to "impress and take up for the publick service" ships, arms, ammunition, gunpowder, military stores and any other item "they shall think to employ and make use of for the safety and preservation of this Province." The Indian War had severely taxed the resources of the province, and the government was desperate for arms, ships and supplies. Impressment seemed to be the only alternative to "standing naked against the Indian Enemy and their Confederates." The public treasury was required to make restitution, and officers were required to give receipts for the reasonable value of confiscated materials. The act also allowed the militia officers to "seize and take up such quantities of medicines, spices, sugars, linen and all other necessaries" required by both the poor and wounded militiamen. The governor planned to send a ship "northward" to trade peltries for arms and ammunition and, since it was expected that Craven would have to bargain for arms and supplies, he was authorized to seize furs with a value not to exceed £2500, giving receipt for such seizures. The militia also rose to the challenge during the Yamassee Indian War in 1715, although these aborigines were ill armed and poorly organized and in large part defeated themselves through ineptitude. There was essentially no defense against surprise attacks except constant vigilance, and the Yamassee and their allies worked surprise attacks very effectively. The colony found that its forts were too far apart to support one another, so it built additional forts to complete a strong chain across Yamassee territory.
The forts never were quite large enough to shelter all who sought refuge during Indian attacks.(444) Nonetheless Governor Robert Johnson declared that the militiamen had acted bravely and said that in terms of competence in the art of war they compared favorably with the very best professional soldiers from Europe.(445) More responsible citizens, realizing the true state of affairs, and seeing that the population would suffer significantly from major losses of tradesmen and farmers in militia service, appealed over Johnson's head, asking that London dispatch regular troops. After much correspondence the parties compromised and formed a primary defensive unit comprised of British troops and volunteer colonial rangers, all paid by the colony.(446) During the Yamassee War, Governor Charles Craven ordered "about two hundred stout negro men" to serve in the militia. Since there were less than 1500 able-bodied whites in the colony, Craven felt justified in enlisting the slaves.(447) In 1717 the South Carolina militia consisted of 700 white men able to bear arms.(448) The legislature decided in that year to renew and revise slightly the colony's basic militia law. No substantial changes in the law were noted.(449) The Assembly reaffirmed the use of slaves in the militia in 1719, requiring that slaves serve in the militia if ordered to do so. It diminished the reward for slaves who captured or killed enemies in action, offering only £10 instead of freedom. All slave owners had to submit a list of able-bodied slaves between ages 16 and 60, who might be drawn upon in case of serious emergency.(450) The act also provided for the deployment of great guns in Charleston, the care and maintenance of the cannon and for the training of an artillery company within the militia.(451) South Carolina organized its militia units on a territorial basis. These geographic areas were commonly called beats. The colony supported two types of companies: the ordinary line companies and the elite volunteer companies, known variously as minutemen and frontier rangers. Mounted troops and cavalry were also considered elite volunteers. The law required all non-exempt male citizens to serve in a line company but gave them the option of serving in a volunteer company instead. Since only males with means could afford to serve in the mounted volunteer companies, most males served in the line, regular, or ordinary companies. Within each beat, every resident, service-age male who was not in a volunteer company belonged to a line company. The line companies were infantry units, literally the people in arms. The law required free white males to provide their own muskets and accoutrements. Free black males and slaves acted as fatigue men (laborers) and musicians. Membership of each line company varied over the years, but the maximum complement was sixty-four men and officers and the minimum was thirty. Companies could maintain full membership only if the number of males living within a beat remained level. Since such stability of population was rare, beat boundaries were redrawn whenever the numbers in a beat exceeded, or fell below, the reasonable limits provided by law. The line companies were the focal point for registration and training. Each company elected its own beat captain, and inductees registered with him when they turned eighteen. The beat captains used their rolls to organize the slave patrol and to see to it that all members attended training musters. 
The militia held musters for one full day every two months; four of those musters were company musters, one was a battalion muster, and one a regimental muster. During these musters the inductees drilled, marched, and practiced musketry, although with mixed results.

The line companies were the building blocks of the militia structure. Line companies combined to form battalions, battalions combined to form regiments, regiments combined to form brigades, and brigades combined to form divisions. Companies, battalions, and regiments assembled for musters. Units larger than regiments, that is, divisions and brigades, seldom assembled. Instead, division and brigade staff members adjusted beat boundaries and inspected training musters at periodic intervals. These superior officers were appointed by the governor with the nominal approval of council and the legislature.

Before the American Revolution, the militia had organized its line companies loosely into a number of regiments. These regiments covered vaguely defined areas, and the governor acted as the nominal commander-in-chief. Two regiments had formed in 1721, in which year the militia took on the administration of the slave patrol. The Southern Regiment was made up of the line companies in Granville and Colleton counties. The North West Regiment was composed of the line companies in Berkeley and Craven counties. As the population of the colony increased, seven regiments had been formed by 1758, and twelve had been formed by the time of the American War for Independence. After the American Revolution, the state passed several pieces of legislation that organized militia companies into a more complex structure and placed regiments, brigades, and divisions within judicial districts. By 1787, twenty-three regiments had combined to form four line brigades.(452)

After the suppression of the Jacobite Rebellion of 1715 the British decided to send some of the rebels to America. South Carolina had a policy of long standing that encouraged the importation of white convict and indentured servants to increase the number of whites in order to contain, indeed overwhelm, the black slaves. It is not surprising that the Lords Proprietors, Board of Trade and governor all wanted to receive as many of these condemned rebels as possible. On May 10, 1716, the Lords Proprietors advised Governor Craven, We having received two Letters from Mr. Secretary Stanhope signifying his Majtys pleasure in relation to such of the Rebels who were taken at Preston and are to be transported to his plantations in America that as soon as any of the Rebels shall land in any port of our province of Carolina you shall appoint a sufficient Guard for securing them till they are dispos'd of according to the Terms of the Indentures they have enter'd into here and such of the Rebels who have not enter'd into Indentures here you are to offer to them that they enter into the like Indentures with the others, Vizt. to serve for the space of seven years and in case of their refusal to enter into such Indentures you are to give proper certificates to those that purchase them that it is his Majesty's pleasure that they shall continue servants to them & their assigns for the term of seven Years, which certificates you are to cause to be recorded for the satisfaction of those who purchase them, lest they should attempt to make their Escape not being bound. We do hereby strictly require & command you to Obey these orders in every particular. . .
."(453) The government itself purchased some of these rebels. On 1 August 1716, Deputy Governor Robert Daniell sent a message to the Commons House of Assembly, explaining that the danger from Indian attacks was so imminent that he had taken it upon himself to purchase "thirty of the Highland Scots rebels at thirty Pounds per head to be paid for in fifteen days." He added that he, "would have contracted for the whole number, but that I could not persuade the commissioners that they had powers enough."(454) On the fourth an "act to Impower the Commissioners appointed to Stamp Fifteen Thousand Pounds in Bills of Credit to Pay for Thirty Two White Servants Purchased by the Honourable the Governor" was ratified.(455) In 1718 the legislature authorized the use of militia against the enemies of the Cherokees because "the safety of this Province does, under God, depend on the friendship to this government, which is in daily danger of being lost to us by the war now carried on against them by divers nations of Indians supported by the French."(456) To recruit Amerindians to the assistance of the provincial militia the South Carolina legislature placed a bounty on enemy Amerindian scalps. The law provided that "every Indian who shall take or kill an Indian man of the enemy shall have a gun given him for the reward."(457) In 1719 more specific legislation in this area was directed at the Tuscarora tribe. "Any Tuscarora Indian who shall . . . take captive of any of our Indian enemies, shall have given up to him, in the room thereof, one Tuscarora Indian Slave."(458) Slaves received similar rewards under a laws of 1706 and 1719.(459) If any slave shall, in actual invasion, kill or take one or more of our enemies and the same shall prove by any white person to be done by him, [he] shall receive for his reward, at the charge of the public, have £10 paid him by the public receiver for such his taking or killing every one of our enemies, as aforesaid, besides what slaves or other plunder he shall take from the enemy.(460) In the same year the king sent a substantial quantity of arms for the militia, so the legislature passed a substantial act which provided for a public magazine, a public armourer, care and maintenance of the arms, and penalties for private conversion of such arms. The arms were to remain in the magazine and were to be issued only upon authorization by the governor and in case of emergency. In 1719 the colony had 6400 white inhabitants, suggesting a militia potential of at least 500 men; in 1720 the governor reported 9000 white inhabitants, probably expanding the militia to 1000. In 1721 the report showed the same number of white inhabitants and 12,000 blacks.(461) On 12 January 1719 Colonel Johnson, on behalf of the governor, reported Amerindian population at the same period to the Lords of Trade.(462) Charles Town Name Villages Men Total 90-S.W. Yamasses 10 413 1,215 130-S.W. Apalatchicolas 2 64 214 140-W. Apalachees 4 275 638 150-W. by N. Savanas 3 67 283 180-W.N.W. Euchees 2 130 400 250-W. by N. Creeks 10 731 2,406 440-W. Abikaws 15 502 1,773 430-S.W. by W. Albamas 4 214 770 390-W.S.W. Tallibooses 13 636 2,343 200-N.N.W. Catapaws 7 570 1,470 170-N. Sarows 1 140 510 100-N.E. Waccomassus 4 210 610 200-N.E. Cape Fears 5 76 206 70-N. Santees 2 43 125 100-N. Congarees 1 22 80-N.E. Wensawa 1 36 106 60-N.E. Seawees 1 57 W & Ch 57 Mixt. wth ye English: Itwans 1 80 240 Settlement. Corsaboys 5 95 295 450-N.W. upper settlement 19 900 390-N.W. middle settlement 30 2,5000 11,530 320-N.W. 
lower Settlement 11 600 1,900 640-W. Chikesaws 6 700 In 1720 the Assembly reported to the Board of Trade that it possessed 2000 "bold, active, good woodsmen" who were "excellent marksmen." The principal obstacle to the development of a good militia was the sparseness of the population outside the few urban areas such as Charleston. By 1721 the militia rolls showed over 2000 men in two infantry regiments and one troop of cavalry. This militia was spread out through the colony, with lines of communication as long as 150 miles.(463) Following a major slave revolt in Charleston, the legislature passed the Militia and Slave Patrol Act of 1721,(464) which expanded the militia patrol system.(465) The principal result of the act was the creation of additional patrols with more men. Following the Yamassee War, the colony successfully moved away from proprietary government. Democratic sentiments for democratic government and choice of their own government propelled this largely peaceful revolution. In 1719 and 1720 the Speaker of the Assembly, along with eight other legislators, assumed political control of the colony. The assembly elected one of its own, James Moore, to serve as governor. Although the colony was in a deep economic depression following destruction of crops and frontier enterprises in the war, the leaders never allowed popular discontent to disrupt the main functions of government or to allow change to become radicalized. Moore immediately sent a letter of explanation of grievances to the crown. The colony had suffered from the Yammassee and Tuscarora wars and also from the repeated threats from the Spanish and the pirates to whom they had given protection. Sir Francis Nicholson arrived in May 1721 to become the new governor, captain-general and commander-in-chief. He carried several instructions relating to the military situation in the colony. The crown assured the colonists that it would supply ample arms, gunpowder and flints to enable the "Planters, Inhabitants, and Christian Servants" to defend themselves. The crown issued specific orders that the training of the militia was never to interfere with the ordinary business of the citizenry. Nicholson was charged with providing good officers for the militia. Under the cooperative leadership of Nicholson and the Assembly the colony once again prospered. One of the main objects of executive and legislative attention was the rebuilding of the militia to respond to the "constant alarms" from French, Spanish and Amerindian attack.(466) There were essentially four types of armed bands available to the colony. The great militia had provided most of the colony defense during the early years. As we have seen, Amerindian allies provided a low cost alternative to the militia. Provincial regiments had aspects of both a standing army and a select militia. The King's Independent Companies were recruited and trained in England. In 1721 the legislature again reenacted the basic militia law, making few changes, as had been the case with the last two militia acts. Because the lack of coordination among militia units had been a problem, the law required the three militia units closest to one another to muster annually and practice as a single unit. Ministers were added to the exemption list. Retired militia officers were exempted from militia musters, but had to serve in case of an alarm. Company captains could choose their own sergeants, clerks and corporals. 
The law now allowed seizure and forfeiture of personal property and goods to satisfy unpaid militia fines. The law also expanded the powers of impressment of goods and services in times of emergency or alarm. The 1721 militia act made more specific provision for the armament of mounted troops, who were now required to supply a good horse, a brace of pistols, carbine, sword, and proper saddle and mountings. Mounted troops could no longer be impressed into infantry. With reference to acts of 1712 and other years, the militia was charged with containment of slaves and white indentured servants. Militiamen could search the dwellings of slaves and confiscate any offensive weapons located therein. The grace period for former servants and other poor men to be armed and accoutred was reduced from twelve to six months. Militiamen could be drafted into slave patrols as well as seacoast watch duty. Militiamen could hold, even imprison, slaves or indentured servants absent from their masters' plantations without a pass or sufficient cause. Penalties for neglect of duty and failure to appear at musters were increased.(467)

After 1725 the professional military organizations of the provincial and independent companies assumed the primary defense of the colony, and the militia was reduced to controlling the slaves and defending against surprise Indian attacks. The king's companies, being composed of professional soldiers from an urban environment, were essentially useless on the frontier. They especially resented being divided into smaller units, such as companies, for assignments. A judicious Indian policy then eliminated the necessity of using the South Carolina militia in Indian wars.(468) Thereafter it became increasingly difficult to convince militiamen to muster and, in turn, political leaders expressed less confidence in the usefulness and discipline of the militia. The county militia units were uniform only in their resistance to discipline and order. A few select militia units, notably the Charleston artillery, were well practiced. Neglect of militia discipline reduced many urban units to the position of being mere social clubs.

In the year 1727 South Carolina had grave reason to prepare, arm and reorganize its militia. English attention was diverted to the War of the League of Hanover with Spain. Spanish attention was focused on the Carolinas. The colony expected a Spanish attack to be launched from their foothold in St. Augustine. The restless Amerindian tribes also expected Spanish help, and the authorities in South Carolina worried that a Spanish attack would herald a general Indian uprising. Perhaps the Spanish would also be prepared to precipitate a slave revolt to assist in achieving their design. One interesting act passed in anticipation of invasion required that all slave holders retain at least one servant, often a purchased indentured immigrant, for each ten slaves. The price of indentured servants rose, and male servants were included in the militia.(469)

One important reason for importing indentured servants into the southern colonies was self-protection. The Spanish in Florida had long looked upon the constant southerly extension of English settlements in the late seventeenth century with as jealous an eye as they had viewed the French attempts of a century earlier.
While actual hostilities did not break out until the opening of the eighteenth century, the loss of runaway servants and Negroes, rivalry in the Indian trade, and the unsettled state of affairs in their respective home nations all contributed to the suspicion with which the Carolina and Florida settlements regarded one another. Moreover, danger was always to be apprehended from the Indians, whether incited by Spanish intrigue or going to war for their own reasons. The colony was so desirous of having settlers on the frontier that it even went into the business of recruiting and importing servants. When the colony imported servants it demanded immediate placement so that the services of the new militiamen could be utilized. Moreover, the government wished to be reimbursed for the expenses involved in importing them.(470)

Importation of white servants and convicts remained an important concern in South Carolina. The servants were really more important as a defense against possible slave insurrections than as a defense against the Amerindians or Spanish. As the culture of rice increased, the demand for slaves grew. More and more they furnished the vast majority of the colony's agricultural labor. The growth in the number of slaves created a new demand for servants. In 1726 a committee of the Assembly reported that it was their opinion that "it will greatly reduce the charge for manning the said Forts if five servants be purchased for each and in order to procure the same we propose that Captain Stewart or some other person be treated with to transport such a number which we believe maybe agreed for at £40 or £50 per head indented for four years."(471) In January 1741, Lieutenant Governor Bull suggested the plan of purchasing sufficient single men to man the forts.(472) South Carolina purchased and hired servants as they were needed, for privately owned servants were liable to service in the militia and patrols.

At this same time the reorganized militia had to be used to maintain internal order. The South Carolina economy had become heavily dependent on the export of tar and pitch, vital commodities used by the British navy. Initially, the home government paid bounties, but in 1726, dissatisfied with what it considered inferior shipments of these pine tree extracts, it discontinued the subsidies. Extraction of tar and pitch was labor intensive and required large stands of pine and some buildings and equipment. All of these were taxable items. As long as the economy was strong there had been few complaints about the taxes. The economy declined, and exports dropped to half their previous value, but taxes and expenses of keeping slaves and facilities did not decline. The arrest of plantation owner and tax protestor Thomas Smith, Jr., in April 1727 brought the situation to a head. A private militia, more like a mob, assembled, and Smith was released. The legislature mobilized the militia. The tax protestors called a meeting at Smith's plantation. The legislature ordered the arrest of Smith's father on the charge of treason. With the leader gone, a strong and loyal militia in place, and the threat from the Spanish and their potential, if not real, Amerindian allies still a reality, the revolt ended.(473)

In April 1728, in a clash with the Yamassee, the South Carolina militia killed 32 of the enemy. The Yamassee retreated to Florida and took refuge in a Spanish castle.
The militia demanded the surrender of the Yamassee, but the Spanish retorted that these Amerindians were subjects of the king of Spain. The militia retreated, unable to take the fortress because they lacked siege equipment and cannon. They did take fifteen Yamassee prisoner. The force consisted of a hundred Amerindians and one hundred militiamen.(474)

In the fall of 1739, the Negroes made an insurrection which began first at Stonoe (midway betwixt Charles Town and Port Royal), where they had forced a large Store, furnished them Selves with Arms and Ammunition, killed all the family on that Plantation and divers other White People, burning and destroying all that came their way. The militia engaged one armed band of liberated slaves which consisted of no fewer than 90 men. In this engagement the militia killed 10 and captured four. The authorities offered a reward of £50 for each insurrectionary captured alive, and £25 for each killed. The South Carolinians were certain that the Spanish played a role in seducing the slaves into revolt, "promising Liberty and Protection to all Slaves that should desert thither from any Part of the English Colonies, but more especially from this." Previously, "a Number of Slaves did from Time to Time by Land and water desert to St. Augustine."(475) The governor reported,

In September 1739, our Slaves made an Insurrection at Stono in the heart of our Settlements not twenty miles from Charles Town, in which they massacred twenty-three Whites after the most cruel and barbarous Manner to be conceived and having got Arms and Ammunition out of a Store they bent their Course to the southward burning all the Houses on the Road. But they marched so slow, in full Confidence of their own Strength from the first Success, that they gave Time to a Party of our Militia to come up with them. The Number was in a Manner equal on both Sides and an Engagement ensued such as may be supposed in such a Case wherein one fought for Liberty and Life, the other for their Country and every Thing that was dear to them. But by the Blessing of God, the Negroes were defeated, the greatest Part being killed on the Spot or taken, and those that then escaped were so closely pursued and hunted Day after Day that in the End all but two or three were [killed or] taken and executed. That the Negroes would not have made this Insurrection had they not depended on St. Augustine for a Place of Reception afterwards was very certain; and that the Spaniards had a Hand in prompting them to this particular Action there was but little Room to doubt, for in July preceding Don Piedro, Captain of the Horse at St. Augustine, came to Charles Town in a Launch with twenty or thirty Men . . . .(476)

The Georgia militia restrained the slaves who attempted to cross that province to gain freedom in Spanish Florida. Caught in a pincer between the South Carolina and Georgia militias, the slave revolt was crushed; the leaders were executed and other slaves mutilated or deported.(477) Afraid of the consequences of another slave revolt, the slave-owning militiamen thought of the containment of the blacks as their first obligation. No militiaman who owned slaves was willing to leave his plantation to go off hunting down Indians when his slaves might rise up and massacre his family. Since there were three able-bodied blacks for every able-bodied white man in the colony, it made a great deal of sense to use the militia to contain the slave menace. Slavery had become "a source of weakness in times of danger and . . .
a constant source of care and anxiety."(478) After the slave revolts those blacks who were mustered were less well armed than had been the case heretofore and were deployed primarily to scout and forage.

In 1733 a conspiracy had been formed between slaves and Amerindians who were already ravaging the frontier. It was betrayed accidentally by an Indian woman who bragged of the impending alliance and the expected resulting insurrection. The Assembly investigated and interrogated the woman, who claimed that all the Indian nations were about to unite in one final, great, all-out battle to drive the whites from their shores, and that they would be aided by the slaves. The Assembly then considered what would happen if the French should aid the Amerindians while simultaneously infiltrating the slave population. It concluded that there were "many intestine dangers from the great number of Negroes" and that "insurrections against us have often been attempted and would at any time prove very fatal if the French should instigate them by artfully giving them an Expectation of Freedom." Finally, on 10 November 1739, the colonial legislature enacted a law which required,

that every person owning or intitled to, any Slaves in the Province, for every 10 male slaves above the age of 12, shall be obliged to find or provide one able-bodied white male for the militia of this Province, and do all the duties required by the Militia Laws of this Province . . . that every owner of land and slaves . . . who shall be deficient herein, his sons and apprentices above the age of 60 years, to be accounted for and taken as so many of such white persons to serve in the Militia.(479)

By 1730 there were "above 3000 white families" in South Carolina, suggesting a militia potential of 2500 or more men.(480) By 1736 the number of white men in South Carolina exceeded 15,000.(481) In 1731 the legislature attempted to reduce the Amerindian potential for war by limiting the amount of gunpowder available to them. No trader was to trust any Indian with more than one pound of gunpowder or four pounds of bullets at one time.(482)

In 1730 less than half, perhaps 40%, of the slaves in the province had lived there for more than ten years, or had been born there. By 1740 the slave population of the colony was 39,000, of whom 20,000 had been imported over the past decade. In the five years preceding the Stoenoe insurrection more than 1000 slaves had been imported into St. Paul's Parish, nearly all from Angola or the Congo. There was a certain cohesiveness among this group, whose members had been living together in Africa only five years earlier. They generally spoke the same language, which was incomprehensible to whites and to most, if not all, slaves who had lived there for some time past. Those responsible for the maintenance of the slave system were concerned with new conspiracies and had heretofore given little or no thought to the possibility of past associations leading to insurrection. Contemporary accounts suggest that the uprising was primarily an Angolan event.

In 1734 the legislature passed a new act for regulating militia slave patrols in South Carolina.(483) The county militia officers were to appoint one captain and four militiamen in each county to serve as slave patrols. "Every person so enlisted shall provide for himself and always keep, a good horse, one pistol, and a carbine or other gun, a cutlass [and] a cartridge box with at least 12 cartridges in it."
Patrols were to survey all estates and roads within their counties at least once a month for the purpose of arresting any slaves found beyond their masters' lands without a permit. Should a patrol locate a band of slaves too large to contain, it was to send word to the officer in charge of the county militia, who would then assemble whatever force was necessary to contain the slaves. "It shall be lawful for any one or more of the said patrol to pursue and take the said slave or slaves, but if they do resist with a stick or any other instrument or weapon, it shall be lawful for the patrol to beat, maim or even to kill such slave or slaves." Masters hiding runaway slaves could also be taken by the militia patrol, with a minimum penalty of £5. The penalty for refusal or failure to serve was £5.

In 1734 several important acts passed the South Carolina legislature. First, the legislature reenacted the basic militia act, which continued to provide for the enrollment of all able-bodied, free, white males between ages 16 and 50.(484) Next, the legislature enacted legislation establishing patrols which would look for Indian activities and for runaway slaves and indentured servants.(485) In early 1735 the legislature ratified legislation providing for better regulation of slaves, which included responsibility for the militia to assist in containing the "evil designs" of slaves.(486) In 1737 the Assembly debated a law allowing slave patrols to "kill any resisting or saucy slave." In 1737 the legislature appropriated £35,000 for the defense of the colony, including the arming and equipping of the militia.(487) The legislature also authorized the creation of several forts as buffers against the Amerindians. The militia volunteers assigned to guard duty were to be paid out of public funds.(488) In 1739 the militia was organized into companies, regiments and battalions, with battalions being formed when any three or more companies existed within three miles of one another. In 1738 the act relating to slaves keeping guns was amended to bring it into conformity with the Negro Act.(489)

With the Spanish threatening invasion and Indian problems on the frontier, the militia was already overextended and could not provide adequate slave patrols. The colony was also beset by the ravages of an outbreak of smallpox. When the plotters realized that the militia could not provide for all its responsibilities, they decided that the time was ripe to move against the white masters. As early as 8 February 1739 the provincial secretary of Georgia heard from slaves who had escaped from South Carolina that "a Conspiracy was formed by Negroes in Carolina to rise and forcibly make their Way out of the Province." The Stoenoe insurrection occurred in September 1739, when slaves killed about 25 whites and destroyed considerable property. Before the revolt ended the slaves had killed about sixty persons of all races. The South Carolina militia engaged 90 slaves in a single body. They killed all but four. The militia commander then posted a reward of £50 for each insurrectionary taken alive and £25 for each taken dead. Those not taken were believed to be headed for Georgia, as the earlier report had suggested they would be. The militia commander blamed the Spanish for inciting the revolt by offering freedom to all slaves who sought asylum in Florida. On 13 September 1739 an eyewitness described the insurrection at Stoenoe.
Negroes had made an insurrection which began at Stoenoe, midway betwixt Charles Town and Port Royal, where they had forced a large store, furnished themselves with Arms and Ammunition, killed all the family on that Plantation and divers other White People, burning and destroying all that came in their way.(490)

The legislature of South Carolina posted a reward for those insurrectionist slaves who escaped to Georgia. Men were valued at £40, women at £25, and children under 12 brought £10, if brought in alive. For those killed, adult scalps with two ears brought £20. One party of four slaves and a Catholic Irish servant killed a man as they headed for anticipated asylum in Spanish Florida and were pursued by militia acting as posse comitatus. Amerindians killed one slave and received the £20 reward, but the others reached St. Augustine, where they were warmly received. Two runaway slaves were displayed publicly. One, who apparently had no hand in the insurrection but had merely used the confusion to try to escape, was publicly whipped. The other, branded an insurrectionist, was induced to make a confession of his errors and crimes before a large group of slaves. Contrition did him no good, for he "was executed at the usual Place, and afterwards hung in Chains at Hangman's Point, opposite to this Town, in sight of all Negroes passing and repassing by Water."

Slaves remained at large as late as November 1739, when rumors spread that the remaining insurgents were planning another revolt. The Assembly requested that the governor muster the militia. In December the militia captured several slaves. In March the Assembly arranged for the interrogation of several others, captured by militiamen acting as posse comitatus and suspected of plotting insurrection. In June 1740 the slave patrols in neighboring St. John's Parish, Berkeley County, arrested a large group, perhaps as many as 200 slaves, who were charged with conspiracy to foment insurrection. In Charleston in 1741 a slave insurrection was suspected in a series of arson fires, and the militia was mustered. In 1742 the militia also investigated an alleged slave conspiracy being planned in St. Helena Parish.(491)

The Stoenoe insurrection prompted the legislature on 11 April 1740 to rewrite the slave patrol code. All Caucasian males between ages 16 and 60, and all women who owned 10 or more slaves, were liable for containment of the slaves, who comprised a significant portion of the colony's population. Since the primary protection of the colony had, since 1725, been entrusted to a standing army and ranging companies, slave patrol had become the primary militia obligation. No less than one-fourth of the militia was to be retained in all situations to control the slaves. All citizens, whether slave holders or not, were subject to service in the militia slave patrol. County militia captains were required to establish regular patrol beats. Militiamen in actual slave patrol service were enlisted for two months at a time. The law permitted the hiring of substitutes, provided that the individual paid the substitute 30 shillings per night and outfitted him. The militia officers chose the patrol officers.(492) This law remained in effect until 1819.

Between 1737 and 1748 South Carolina, like its sister colonies, was embroiled, first, in the War of Jenkins' Ear (1739-1744) and, second, in King George's War (1744-1748). During the January 1739 term of the South Carolina House of Commons the legislators debated two major amendments to the basic militia law.
First, it decided that no man need carry arms to church on Sundays if he chose to go unarmed; second, that slave owners who did not wish to carry arms were likewise excused.(493) The legislature's explanatory act for "better regulating the militia of this Province" emphasized more integrated regional militia training. Training at the company level was set at six musters a year, "but not oftener." Greater provision was made for company and regimental implementation of discipline and for appeal from courts martial.(494)

In 1740 the governor and legislature approved a new manual for military discipline "calculated for the use, and very proper for the perusal, not only of the officers, but of all Gentlemen of the Militia of South Carolina . . . according to the improvement made for Northern Troops."(495) It was an abridgement of General Humphrey Bland's book first published in London in 1727.(496)

The financial burden of war fell heavily on the colony, for it was forced to pay for militia and volunteers to guard the southern border and to contain the Amerindians on the frontier. The legislature summoned large numbers of militia. Legislative-executive cooperation was good, thanks largely to the able administration of governors William Bull (served 1737-44) and James Glen (served 1744-56). Still, it was the legislative committee system that had assumed control of the militia following the revolution against proprietary leadership in 1719 and 1720. It planned expeditions, approved the appointment of officers, levied taxes and paid military expenses. It even decided to deploy the militia when the colony was engaged in military action in Florida.(497)

The South Carolina Assembly intended to cooperate fully with Georgia's Governor James E. Oglethorpe, who commanded the joint Georgia-South Carolina expedition against the Spanish in Florida in 1739. The Assembly estimated that its share of Oglethorpe's planned expedition would be £100,000 South Carolina currency. Speaker Charles Pinckney argued that such a sum was beyond the ability of the colony to bear. Rice prices had fallen on the international market, the treasury had nowhere near that amount, and increased taxation would fall heavily on everyone. With the war underway, Pinckney argued, the colony already was heavily committed to military expenses. It would have to increase the watch, especially in the area of Charleston, inspect various fortified sites and public and private arms, repair and garrison forts along the frontier, buy arms and supplies, set up magazines and repair the colony's arms, which were reportedly in bad shape. The real shock came when the Assembly received Oglethorpe's estimate of costs, which he placed at £209,492. The Assembly was unprepared to appropriate more than £120,000, with £40,000 being taken from the treasury and the remainder funded by a bond issue. The estimate included many categories of projected costs: pay to slave owners for the use of slave labor, gifts for various chiefs and supplies for 1000 Amerindians, munitions, militia pay, transportation from Charleston to St. Augustine, medical supplies and surgeons, provisions and food for the men.(498)

Additionally, gubernatorial Indian policy had been founded upon good diplomacy, regulation of the Indian trade, the sending of agents among the various tribes and the offering of relatively expensive gifts to key Amerindian leaders.
With the coming of war, Governor Bull informed the legislature on 13 February 1740 that more gifts and more agents would be required among all the tribes. Continuation of the policies that had worked, Bull argued, would save the lives of numerous militiamen by averting a pointless Indian war.(499)

The South Carolina Assembly ordered that masters prepare a list of trusted slaves who might be enlisted in the militia. In case of a general alarm these selected slaves would be provided with a gun, hatchet or sword, powder horn, shot pouch with bullets, 20 rounds of ammunition and six spare flints. Once again the legislature held out the promise of manumission for such slaves as might kill or capture an enemy. Slaves who fought well might be rewarded with gifts of clothing, such as "a livery coat and pair of breeches made of good red Negro cloth turned up with blue, and a pair of new shoes." They might also be rewarded with being granted annually for life a holiday on such a day as they had performed bravely.(500)

While the legislature debated the colony's participation in the expedition to destroy St. Augustine, Oglethorpe suggested that 1000 slaves be enlisted as volunteers in the militia. About two hundred were to be armed, while the other 800 would act as porters and servants. Masters were to be paid £10 per slave per month of service, masters assuming all risks except death. If a slave were killed his master would be compensated for his actual value, not to exceed £250.(501) The idea went untested because in late 1739 a major slave insurrection occurred at Stoenoe, followed by a second insurrection nine months later in Charleston.(502) The legislature, having discovered that slaves had secreted a rather substantial supply of weapons, ordered that no slaves be armed for any reason whatsoever.(503)

Oglethorpe protested that four months had passed without action and that the Spanish were undoubtedly preparing for a possible expedition against their Florida stronghold. Word then reached him that the Assembly of his ally had cut £100,000 from his request. Oglethorpe protested, and Bull took a full month to deliver his message to the Assembly, which for its part responded by setting up yet another committee of inquiry. On 15 April the Assembly passed the military appropriation bill and appointed one of its own, Colonel Alexander Vander Dussen, a member of the military appropriations committee, to command its militia. There was additional significance attached to the Assembly's passage of the military appropriations bill, for with it the lower house asserted its right to pass in final form all appropriations and denied the power of the upper house to make any changes to such money bills.(504)

Oglethorpe's mixed force of British regulars, Amerindians and militia landed in Florida on 20 May 1740. It enjoyed a few initial successes, burning the town of St. Augustine, but Fort San Marcos withstood the siege. South Carolina found Oglethorpe's leadership lacking in most areas of command. Specifically, he alienated both the South Carolina militia and the Amerindians. Whether Oglethorpe's fault or not, the men suffered grievously from heat, disease and excessive rainfall. Well protected inside the fort, the Spanish waited for the appearance of a relief force. Dissensions remained, and indeed grew in intensity, as the siege showed no progress. On 19 July Isaac Mazyck, a leader of the Assembly from Charleston, delivered a preliminary military committee report to the legislature.
The expedition, he said, was a "lost cause" and South Carolina should withdraw its militia as soon as possible.(505) By August Oglethorpe agreed and ordered the expedition to withdraw to the north. In South Carolina the Assembly was shocked, and then reacted by seeking a scapegoat. The new speaker, William Bull, II,(506) son of the lieutenant-governor, appointed a committee to "inquire into the Causes of the Disappointment of success in the Expedition against St. Augustine." The upper house followed suit. The lower house also created a committee to seek assistance from the home government. Bull appointed the most important legislative leaders to serve on these committees, excusing them from all other duties until the work of the committees was finished.(507) A thorough report of more than 150 pages, the final document contained no fewer than 139 appendices with extracts from various journals, reports and letters from those who had served with the South Carolina contingent. Not surprisingly, by 1741 the house had issued a final report which was highly critical of Oglethorpe and the expensive expedition, which had failed to achieve any important part of its mission.(508) The second committee used the report to justify its requests for money and troops from England. Despite its size and documentation, the report failed to take into account the long delays in mounting the expedition caused by legislative wrangling, the failure to surprise the Spanish in Florida and the inadequate supplies. Simply put, the legislature had not supplied materials for a full siege, and the militia had not been trained or equipped for that type of warfare.

The fundamental militia law which had been reenacted on 11 March 1736 was again extended on 22 January 1742.(509) On 7 July 1742 the legislature enacted a law which was designed to enroll in the militia frontiersmen, especially Indian traders, who were unenumerated on tax lists. The purpose of the law was to provide a first defense line of frontiersmen who were familiar with the terrain and with local Amerindian customs. As the legislature wrote, the law was designed to "secure the assistance of people who are unsettled that they may be encouraged . . . [to] enlist in the service of this Province before any draughts are made of the [urban] militia."(510) In 1742 the legislature ordered the recruitment of militia volunteers and militia rangers to "repel his Majesty's enemies and to contribute the utmost of our Power to the defence of the Colony of Georgia and this Province." Governor William Bull asked for and received legislative authorization to issue £63,000 in paper currency to pay for the expedition to defend Georgia.(511)

In 1743 the South Carolina legislature passed an act which required citizens to go armed to church and other public places.

Whereas, it is necessary to make some further provisions for securing the inhabitants of this province against the insurrections and other wicked attempts of negroes and other slaves within the same, we therefore humbly pray his most sacred majesty that it may be enacted, and be it enacted by the Hon.
William Bull, Esq., lieutenant-governor and commander-in-chief in and over his majesty's province of South Carolina, by and with the advice and consent of his majesty's honorable Council, and the Commons House of Assembly of this province, and by the authority of the same, that within three months from the time of passing this act every white male inhabitant of this province (except travelers and such persons as shall be above sixty years of age) who, by the laws of this province, is or shall be liable to bear arms in the militia of this province, either in times of alarm or at common musters, who shall, on any Sunday or Christmas day in the year, go and resort to any church or any other place of divine worship within this province, and shall not carry with him a gun or a pair of horse-pistols, in good order and fit for service, with at least six charges of gunpowder and ball, and shall not carry the same into the church or other place of divine worship as aforesaid, every such person shall forfeit and pay the sum of twenty shillings, current money, for every neglect of the same, the one-half thereof to the church-wardens of the respective parish in which the offense shall be committed, for the use of the poor of the said parish, and the other half to him or them who will inform for the same, to be recovered on oath before any of his majesty's justices of the peace within this province in the same way and manner that debts under twenty pounds are directed to be recovered by the act for the trial of small and mean causes.(512)

As late as 1765, a grand jury at Charleston, South Carolina, presented "as a grievance the want of a law to oblige the inhabitants of Charleston to carry arms to church on Sundays, or other places of worship."(513)

In 1744 Governor James Glen requested that the legislature provide new taxes to strengthen the militia and build and repair magazines and fortifications. He was greatly concerned that the war with Spain would invite privateers, pirates and Spanish forces from Florida to invade the Carolinas. He wished also to protect the lucrative trade that Spain coveted.(514) By this time full power over fiscal matters had passed to the Assembly, the beginning of a long process which, by 1760, had stripped Council and the upper house of virtually all their powers over the purse. The legislature moved to consider Glen's requests at a snail's pace. As we have seen regarding Oglethorpe's expedition against St. Augustine, this was simply the price for the increasing democratization of the decision-making process. The legislature referred gubernatorial requests to committees which held hearings, considered their constituents' viewpoints and wrote reports. Executive requests in the vital areas of Indian affairs, military appropriations and provincial defense were delayed by the workings of the emerging democratic process. Control of the militia through legislation and appropriations was among the most important applications of the popular legislative power.(515)

Meanwhile, the home government demanded an accounting of the colony's business and an enumeration of its population. In 1745 the governor reported that the number of whites in South Carolina exceeded 10,000, with more than 40,000 blacks, primarily slaves. In 1749 the number of whites reported by the governor had grown to 25,000, while the black population had declined slightly to 39,000.
The militia could count at least 2000 men, with a few trusted armed slaves and others enlisted as porters and musicians.(516)

In 1747 the legislature modified the basic militia law, noting that "the safety and defence of this Province, next to the blessings of Almighty God, and the defence of our most gracious Sovereign, depends on the knowledge and use of arms and good discipline." Where three or more militia companies co-existed within a distance of six or fewer miles, a regiment was to be formed and periodically jointly exercised. Ideally, each county would form a regiment composed of its various militia companies. Each company was to muster at least six times a year. Other than a reduction in the minimum required number of cartridges from 20 to 12, that aspect of the law dealing with arms and accoutrements was unchanged. The law ratified what had long been considered, by custom and tradition, to be a primary power of the governor, that of appointing all commissioned and non-commissioned officers in the militia. No one could refuse to accept a gubernatorial militia appointment. The law continued to authorize a troop of cavalry, but limited its number to 200 men. The law authorized the formation of an artillery militia, with these men being exempted from additional duties. In cases of insurrection the governor, lieutenant-governor or president of council was required to command the militia in person.

All citizens between ages 16 and 60 were to be enrolled, excepting only strangers residing in South Carolina for less than three months and a small list of others. Those exempted from militia duty remained the same as in earlier laws, although the law reduced substantially the number of those exempted in various professions such as millers, ferry operators and sailors. The law also required those exempted to muster in case of emergency. The law now required masters to arm apprentices in the same way that it had required masters to arm indentured servants. Those apprentices who had served their terms were granted six months to supply their own arms. Citizens who had moved from their homes were to remain on the muster rolls of their old homesteads, and to continue serving in those militias, until they joined another militia unit at their new homes.

When raising militias to repel invasions or suppress insurrections, the governor had essentially unlimited power to call out companies and regiments. Still, the law charged the governor with retaining in each county and city sufficient militia to control slaves. Fines for failure to muster were increased, with £50 being the minimum penalty for those who refused to muster in time of alarm. Superior officers could levy fines up to £500, and impose corporal punishment short of loss of life or limb, for various offenses under the act. The act increased the size and frequency of slave patrols. Superior officers could muster militias through the regimental level provided they received reports of insurrection, invasion or Indian attack from reliable witnesses or informers. Masters were to provide a list of reliable slaves who might be marched with the militia. These slaves could be armed with "one sufficient gun, one hatchet, powder horn and shot pouch" and ammunition and accouterments, although they could not possess the arms until they were marched with the militia. If a slave served on militia duty, the province paid his owner for his time.
If the slave was killed, the state paid his master his market value; and if he was disabled, the colony compensated his master for the loss of his services in proportion to his disability. Slaves who performed conspicuous acts of bravery under fire, or who killed or captured an enemy, were to be rewarded with clothing annually for life. If the slave was freed as a reward for his bravery, his master received public compensation. Slaves serving in the militia who failed or neglected their duty were subject to corporal punishment. If a poor man or servant was injured while serving in the militia, he was to be paid an annual stipend according to his loss. If a poor man or servant was killed, his family was to receive public support at a rate of £12 a year. The province supported the dead man's children only through age 12 and the widow only so long as she remained single. Indentured servants who acted bravely in combat or who killed or captured an enemy were to be freed, with their masters receiving public compensation for the loss of their services.(517) The act of 13 June 1747 was continued by an act of 1753 for two years, and was revived and reenacted in 1759 for a period of five years.

From 1749 through at least 1764 there were constant conflicts between the lower house of the legislature and the governors over military and other appropriations and taxation, with each of the several governors trying to reestablish executive prerogatives and the legislature resisting. Likewise, the upper house attempted to interpose its authority, only to meet similar resistance. The lower house gradually gained control over political patronage, local administration, finance, and the militia. In times of war or threatened conflict the legislature would demur until the governors agreed to the erosion of their power as the price they had to pay to accede to pressures from the home government to make war or prepare for war. For example, when Glen wanted to build and garrison Fort Loudoun in what is now Tennessee, the lower house agreed only upon condition that the governor recognize the power of the assembly over the budget. The interests of at least the militia officers were well protected, because many of the elected assemblymen during this period were simultaneously officers in the militia. George Gabriel Powell, for example, held an assembly seat for over 20 years while serving as a colonel in the militia. He frequently chaired committees on military appropriations and militia affairs.(518)

The only militia-related area still directed by the executive was Indian affairs. Ably assisted by Edmund Aiken, Governor Glen and other governors conducted a model colonial Indian policy. Glen enjoyed considerable legislative support, especially from the committee on Indian affairs in the lower house of the Assembly. By keeping the various tribes either pro-British or at least neutral, the administrations reduced the burden upon the militia.(519) Glen's policies worked well, preventing any Indian war. When Glen's successors William Henry Lyttleton and Thomas Boone chose to ignore the Assembly, the legislature was vocal in its disagreement. These clashes with the Indian affairs committee over the direction of Indian policy resulted in the bloody war with the Cherokees. The Cherokee War was the greatest challenge mounted on South Carolina's soil since the Yamassee War of 1715-17.
The causes of the war were many, including failures of gubernatorial Indian policy, the duplicity of the Indian traders, aggressive expansion into Indian lands and the successful intrigues of the French. In early 1759 the Cherokees overran Fort Loudoun and burned many isolated farms and frontier settlements. By early 1760 they threatened Prince George and Ninety-six. The governor mustered the militia, but a smallpox epidemic struck hard at those gathered at Charleston. Low country planters withheld their militia after threats of a slave insurrection spread. Those militia initially deployed suffered several defeats, primarily from well-executed ambuscades. Lyttleton, perhaps using his political influence to secure the post, received a promotion from the Board of Trade to become governor of Jamaica.

William Bull II assumed the high executive post and immediately took certain bold steps. He asked for and received legislative support to increase the number of ranging companies and to improve their training and supply. He recruited his rangers heavily along the frontier, offering various bonuses, an opportunity for revenge and appeals to patriotism. The men he chose, after proper training and outfitting, proved to be the correct force for the job. As all colonial politicians discovered, urban militia were essentially useless in the deep forests and were not even especially suited for garrison duty in isolated areas. Some British regulars assumed responsibility for garrison duty in some forts. The Amerindians of course had made no real provision for a war of some length by laying in food and supplies. The provincial rangers simply ground them down in a series of small clashes, none of which was especially noteworthy, and by destroying their homes and crops and dispersing their families.

The Assembly had to raise money to pay the militia and to offer assistance to the frontiersmen who suffered from the ravages of Amerindian attacks. Only slowly did the Assembly realize the full extent of the depredations suffered on the frontier. The Assembly then set up a committee to investigate the causes of the war and the reasons why damages were so great. The committee report found fault with Bull's handling of the situation and criticized him for moving too slowly.(520)

The Assembly had to deal with a second problem. Indian defeats were inevitably accompanied by cession of lands to the provincials, and the Cherokee War was no exception. While new settlers arrived in large numbers to take up homesteads on the cession, other, undesirable elements followed. Some militiamen had deserted as their companies returned home and went back to the abandoned homesteads to loot. Others turned bandits and stole the supplies sent to the relief of the frontier families. These men were soon joined by escaped slaves, indentured and criminal servants, Indian traders and criminals. Since the new land had essentially no constables or sheriffs, the Assembly once again had to muster the militia to bring law and order to the frontier. The task of the militia was complicated by the emergence of kangaroo courts and vigilantes. A circuit-riding minister, Charles Woodmason, is credited with having drawn up a petition to the Assembly, signed by over 1000 backwoodsmen, asking for greater law enforcement and true justice. Rumor followed that the Regulators were planning to march on Charleston.
The legislature passed legislation designed to bring permanent peace, law and order to the frontier, culminating in the Circuit Court Act of 1768.(521)

Between 1748 and 1764 the Assembly worked on legislation designed to prevent slave insurrections. Enforcement of these regulatory acts fell heavily on local governmental units led by committees dominated by planters. Committee members had to supply their own arms, and go armed everywhere, and were authorized to arrest (or kill) any slave suspected of illegal or even "suspicious" activity.(522) In 1751 the Negro Act(523) made provision for the militia patrols to apprehend, confine and punish or maintain, or deport any slave involved in insurrection or "that may become lunatic."(524) The legislature ordered the militia to pursue, at the expense of the owners, any runaway slaves who were likely to foment insurrection. When slaves ran away and three or more of the runaways gathered, they were considered to be forming a conspiracy. The law required the militia to collect them and return them to their owners; if the slaves refused to submit, the law mandated that the militia kill them. Owners were required to pay into the militia fund £5 for each of their slaves captured or killed. Slaves were forbidden to "carry a gun or any other firearm, with ammunition, to hunt, or for any other purpose" upon penalty "of being whipt, not exceeding 20 stripes." No "free negro, mulatto or mestizo" was permitted to loan or give a firearm to any slave, upon penalty of fine, physical punishment or imprisonment.

In 1756 the home government appointed a new governor, William Henry Lyttleton, who served until 1761. In his first year in office Lyttleton reported to his superiors that the militia of the province included 5000 to 6000 men, ages 16 to 60, enrolled according to the muster rolls.(525) In 1756 the British assigned a quota of 2000 men to be raised in South Carolina as part of a 30,000-man force the English hoped to raise in the colonies to join with the British troops in an invasion of Canada.(526) The quota was reduced in order to deploy the militia to defend the colony. Lord Loudoun complained to Cumberland that "the great Number of Troops that are employed in Nova Scotia and South Carolina . . . robs the main body" of his force mustered to invade Canada. In "South Carolina I think there is more Force there than [is] necessary." He asked that the quota be reinstated and that South Carolina be ordered to send the men to his army.(527)

The governor sent a mission to Indian territory in the autumn of 1756 to discover how the Amerindians were receiving firearms wherewith to conduct their raids on the outlying settlements. Daniel Pepper reported that a minor chief named the Gun Merchant had, in the past, procured arms from the French agents who were urging the tribes to rise up and drive out the English. Since the French had withdrawn, the Gun Merchant was procuring arms from the various Indian traders working their territory. Pepper warned that since the French had sold rifled guns instead of trade muskets, the Indians wanted no other arms, and that they had become exceedingly proficient in the use of rifles, regularly hitting targets at 200 yards.(528)

Having decided to put an end to hostilities with unfriendly Amerindian tribes, and to give an incentive to its provincial militia, the South Carolina legislature followed the pattern set by other provinces and placed a bounty on Amerindian scalps.
By 1757, in response to the emergency of the French and Indian War, the militia had seven infantry regiments and three cavalry troops with over 6500 men.(529) In 1760 the legislature passed an act to specifically authorize the formation of an artillery militia in Charleston. Noting that the men had "taken great pains in learning the exercise of artillery," it thought this authorization was long overdue. Those serving in the company were exempted from other militia duties. They had the same power of impressment of supplies and portage as other militias. The company, like the mounted militia, was obviously a highly select band, composed of the sons of wealthy merchants, planters and tradesmen, and placement in it was difficult to obtain.(530) On 31 July 1760 the legislature appropriated £3500 to pay for Cherokee "and other hostile" Amerindian scalps.(531) In 1761 the colony received its new governor, Thomas Boone, who served until 1764. In 1770 Charleston had 5030 white and 5830 black inhabitants. The total number of white inhabitants in the colony was not provided, but there were 75,178 blacks, mostly slaves, in South Carolina just a few years before the beginning of the War for Independence.(532) On the eve of the American Revolution there were over 12,000 men in a dozen infantry units and a cavalry regiment.(533) News of the clash between the patriots and the British army at Lexington and Concord reached Charleston, South Carolina, within ten days, via courier dispatched by the Massachusetts Committee of Safety. A gentleman from Charleston wrote to a friend in London of the militia preparations in Charleston. "Our companies of artillery, grenadiers, light infantry, light horse, militia and watch are daily improving themselves in the military art. We were pretty expert before, but are now almost equal to any soldiers the King has." Men in the rural areas were ready also, and the colony planned to raise a "company of Slit-Shirts immediately."(534) In February 1776 the South Carolina Provincial Congress considered the military needs of the state. It began with the premise that "it is absolutely necessary that a considerable body of Regular Forces be kept up for the service and defence of the Colony in this time of imminent danger." The Congress decided that the "Regiment of Rangers be continued" and that the number of men be increased. The rangers "shall be composed of expert riflemen who shall act on horseback or on foot, as the service may require." It also ordered that there be created another "Regiment of expert Riflemen, to take rank as the Fifth Regiment." All riflemen were to provide themselves at their own expense with "a good Rifle, Shotpouch and Powderhorn, together with a Tomahawk or Hatchet." The public would supply them with "a uniform Hunting-shirt and Hat or Cap and Blanket." All riflemen would be tested for their skills by the commanding officer.(535) The Congress sought to contract for arms for the militia. The Commissioners for purchasing Rifles . . . are hereby authorized and empowered to agree with any person to make a Rifle of a new and different construction . . . . to contract for the making, or purchasing already made, any number . . .
of good Rifles with good bridle locks and proper furniture, not exceeding the price of £30 each; the barrels of the rifles to be made not to weigh less than 7 1/2 pounds or to be less than three feet, eight inches in length; and carrying balls of about half an ounce weight; and those new ones already made not to be less than three feet four inches long in the barrel. Also for the making or purchasing already made . . . good smoothbored Muskets, carrying an ounce ball, with good bridle locks and furniture, iron rods and bayonets . . . the Muskets to be made three feet six inches long in the barrel and bayonet seventeen inches long. . . .(536) In 1776 the British command laid its first plans to invade South Carolina and hold Charleston. Upon discovery of the plan, South Carolina mobilized its militia.(537) In the autumn of 1776 the South Carolina legislature sent Colonel Williamson into the backwoods to fight the Cherokee nation, which was then under British influence. In September the force under Colonel Williamson crossed the Catawba River in North Carolina in pursuit of the enemy. They sought to join with North Carolina militia under General Rutherford and Virginia militia under Colonel William Christian. Initially ambushed, Williamson fought back and turned the engagement into a victory over the hostiles, who then fled. After joining with Rutherford and Christian, the force laid waste to most of the Cherokees' principal towns and villages and took British supplies valued at £2500. They also recaptured several runaway slaves and several British agents.(538) In 1776 the state adopted a new constitution. That document first noted that Britain had forced a defensive war upon the colony in part because of its military policies. It empowered the legislature to create a militia and to commission all military officers.(539) Military occurrences in the state were few until 1780, and thus the militia remained generally inactive. The militia served primarily as a reservoir of trained manpower to furnish troops for South Carolina's share of the Continental Line. The provincial militia act remained in force until 1778, when the legislature decided to rewrite the law to reflect the change from dependency to sovereign state. The law specifically disallowed private militias such as had been formed as vehicles to achieve independence, and ordered that any such private armed forces then existing be disbanded. Every able-bodied man between the ages of 16 and 60 was required to serve or pay a fine of £200. Those exempted from militia duty included all state executive, judicial and legislative personnel and their clerks; postmasters and post-riders; river and harbor pilots and their crews; one white man in each grist mill and ferry; and firemen in Charleston. Each man was obliged to provide "one good musket and bayonet, or a good substantial smooth bore gun and bayonet, a cross belt and cartouch box, capable of containing 36 rounds, . . . a cover for the lock of said musket or gun, or one good rifle-gun and tomahawk or cutlass, one ball of wax [and] one worm or picker." The militiaman had his choice of providing lead balls or buck-shot, as well as gunpowder and spare flints. The militia was to be divided into three brigades, each commanded by a brigadier-general; regiments of from 600 to 1200 men, each commanded by a full colonel; and companies of not more than 60 men, commanded by captains.
Each captain was to muster and train his company at least once a month, except in Charleston, where companies were to train each fortnight. Regiments were to train every six months. Courts martial were authorized at each organizational level, with the superior organizations having power of appeal and the power to impose greater penalties. The act authorized the draft of men into the Continental Line from the militias. When a draft was made, the law required that a sufficient militia force be retained to quell insurrections or slave uprisings. Some militiamen were also to be drafted to maintain slave patrols and seacoast watches. The penalty for failure to serve on patrols and watches was a fine of £100. Superior militia officers could call an emergency alarm upon what they considered to be a reliable report. Masters had to provide arms and equipment for apprentices and indentured servants. When discharged from a master's service, a former apprentice or servant had to provide himself with his own arms and equipment within six months. Poor men and indentured servants who were maimed were to receive public support, as would the families of such men killed in service. Indentured servants who acted bravely in battle could be freed, with public compensation given to the master for the loss of his services. Masters had to provide the company officers with lists of reliable slaves, ages 16 to 60, who might be impressed into service in an emergency. Each militia company could enlist slaves up to one-third of its number. Lists of other slaves who might be impressed to do manual labor, to be used as hatchet-men or pioneers, were also to be submitted. The government was obliged to pay the owners for slaves killed or maimed in battle.(540) Excepting a few Amerindian raids, there was little action in the South during the early years of the war, but things were to change following the catastrophic defeat of General John Burgoyne at Saratoga. On 29 December 1778 Clinton, who had succeeded Sir William Howe as the British commander, landed a force of 3500 regulars near Savannah, Georgia. General Robert Howe, then commander of the Southern Department, had a mixed force of about 1000 militia and regulars and could not withstand the assault. Howe was shortly thereafter replaced by Major-General Benjamin Lincoln (1733-1810), a distinguished veteran of actions near New York City and at Saratoga. In February 1780 Clinton decided that he could now capture Charleston and that, if he were successful, loyalists would soon appear and swell the ranks of his force. That would bring the Carolinas and Georgia once again under royal control. He assailed Charleston with a mixed force of some 8000 regulars and loyalists between 11 February and 12 May 1780. Neither the Continental Line nor the militia available to the Southern Department could hold out. When General Benjamin Lincoln surrendered on 12 May 1780, he lost 860 men of the North Carolina Continental Line. About all that was left of the North Carolina regular forces were those men who had been on leave, ill or attached to other duties or companies. Clinton captured 5400 Americans, the heaviest patriot loss of the war.
It was a general of the regular army who had surrendered 5000 militiamen with his command.(541) On 5 June, Clinton left Cornwallis in charge and sailed back to New York, confident that South Carolina, and perhaps all the South, was about to fall to the crown.(542) Charleston fell on 12 May 1780.(543) Patrick Henry, who attempted to raise two to three thousand militia to march to the defense of Charleston, expressed great admiration for its governor. "The brilliant John Rutledge was Governor of the State. Clothed with dictatorial powers, he called out the reserve militia and threw himself into [the defense of] the city."(544) Disaster struck again on 16 August 1780 at the Battle of Camden, South Carolina. General Horatio Gates blundered into an engagement which neither he nor Lord Cornwallis wanted. Cornwallis commanded a force of 2400 regulars; in addition there were Banastre Tarleton's dragoons. Gates deployed his mixed force of regulars and militia badly. His line was a meager 200 yards from the British and in the line of musket fire. American troops broke when Tarleton's dragoons attacked the rear. A bayonet charge finished off the militia, most of whom were armed with Kentucky rifles, which did not mount bayonets. It made no sense for militia to stand against raw steel, and responsibility for the defeat at Camden rests more with Gates than with the militia. American losses included 800-900 killed and nearly 1000 captured.(545) Gates retreated to Hillsboro, North Carolina, 160 miles north. Revisionist critics of the militia have chosen to blame it rather than Gates' flawed leadership and poor skills as a field commander.(546) On 18 August Tarleton defeated an American militia force at Fishing Creek, South Carolina. General William Moultrie commented that the southern "militia are brave men and will fight if you let them come to action in their own way."(547) We have discussed at length the patriots' great victory at King's Mountain on 7 October 1780, along the border of North Carolina and South Carolina, in the preceding chapter. It was placed there because the North Carolina and other state and territorial militia had more of a role in the defeat of Cornwallis' left flank, and the death of Major Patrick Ferguson, than had the men from South Carolina. In January 1781 Cornwallis moved his force into the interior of North Carolina with the avowed purpose of destroying the small patriot army led by Nathanael Greene, commander of the Southern Department. Cornwallis moved to Hillsboro, where he thought he could recruit a considerable force of tories, but was disappointed.(548) Greene, meanwhile, avoided confrontation but gathered considerable strength along the way from militiamen. General Daniel Morgan advised Greene on how to deploy his militia supplements: "Put the militia in the center with some picked troops in their rear with orders to shoot down the first man that runs."(549) Finally, on 15 March 1781 the two forces met at the Battle of Guilford Court House. Cornwallis held the field and Greene withdrew, but Greene's army remained intact and his militiamen gained battlefield experience. Cornwallis' force was decimated. Greene wrote to General Sumter on 16 March 1781 that if the North Carolina militia had behaved bravely he could have completely defeated Cornwallis.
He rued the day that he had placed his dependence on the militia, whose primary contribution had been the consumption of resources at a rate three times that of the regular army, and which was best known for ravaging the countryside.(550) Edward Stevens, an inspirational patriot leader of plebeian origins, writing to Virginia Governor Thomas Jefferson, agreed with Greene. "If the Salvation of the Country had depended on their staying Ten or Fifteen days, I dont believe they would have done it. Their greatest Study is to Rub through their Tower [tour] of Duty with whole Bones. . . . These men under me are so exceeding anxceous to get home it is all most impossible to Keep Them together."(551) Henry Lee raised this same point in defense of the federal Constitution in the Virginia Ratifying Convention in June 1788. Let the Gentlemen recollect the action of Guilford. The American regular troops behaved there with the most gallant intrepidity. What did the militia do? The greatest numbers of them fled. Their abandonment of the regulars occasioned the loss of the field. Had the line been supported that day, Cornwallis, instead of surrendering at York, would have laid down his arms at Guilford.(552) Cornwallis retreated to Wilmington, North Carolina. An advance force under Major James Craig took the town and disarmed the populace. On 7 April Cornwallis arrived with the tattered remnants of his army. The cowed townspeople were cooperative, but he found no large reserve of loyalists to join his force. Only about 200 Tory militiamen joined his cause. After resting and supplying his army with foodstuffs and transportation, Cornwallis moved north to join with General William Phillips in Virginia.(553) Despite the increasing danger from the British army in 1779 and 1780, the southern colonies resisted any idea of arming blacks, whether freemen or slaves. John Laurens of South Carolina, son of a member of Congress, and Alexander Hamilton proposed a plan to enlist 3000 blacks under white officers. Their plan was to liberate Georgia, which had effectively been under British control for some time. Laurens offered to lead one regiment. State authorities refused to enroll any blacks in the militia, save as unarmed laborers, out of fear of a slave revolt. In their view, once trained, blacks constituted a greater long-range danger than the British army. Laurens argued that with so many planters absent from their plantations, the enlistment of "more aggressive blacks" would actually be of advantage. As early as March 1779 Laurens and Hamilton advanced their plan in Congress. Laurens' father opposed the idea in Congress. In mid-March Laurens tried to convince General Washington to bypass the states and directly authorize the enlistment of blacks in the Continental Army. Washington demurred, dismissing the idea as fantastic, injurious to his relations with the southern states, and beyond his authority. Laurens wrote to the President of Congress, John Jay, later Chief Justice of the United States. Congress accepted Laurens' plan, urging South Carolina to raise 3000 black men at arms. The South Carolina Council of Safety would not change its stand. Laurens, frustrated at the successive rebuffs from his state and his own father, joined a regiment and was shortly thereafter killed in action.
With his death any further idea of a black militia or army unit died.(554) In 1782 South Carolina reconsidered its fundamental militia act, because "the laws now in force for the regulation of the militia of this State are found inadequate to the beneficial purposes intended thereby for the defense of the State in the present time." Nonetheless, the changes to earlier acts were few. The upper age limit for service was lowered from 60 to 50. Militia captains were required to submit lists of eligibles every second month. One-quarter of the militia was to serve on garrison or field duty at any given time, with the men to be rotated every month or two. Should a man fail to appear for his assigned duty, his time was doubled. Up to one-third of the militia could be sent to assist another state. In any event, no county could yield so many of its militia as to render slave patrol and containment ineffective. Any man adjudged guilty of sedition, rebellion or dereliction of duty was required to serve on active duty for twelve months. The list of those exempt remained unchanged, with the exception that teachers who were to be relieved of militia duty had to have enrolled under their care no fewer than fifteen students. Before the Revolution the South Carolina militia was perhaps the most efficient and most accomplished south of New England. It had to perform the same duties that were required of other militias, while also serving on slave patrols. While on slave patrol it can be said to have acted as posse comitatus. It saved North Carolina in at least one Amerindian war. During the War for Independence it acted most efficiently when transformed into guerrilla bands and led by daring and innovative leaders such as Francis Marion. Without the southern militias, the American cause in the south might have been lost and Cornwallis' schemes accomplished. South Carolina had been the guardian of the southern gate of the British colonies against Amerindian, Spanish and, to a degree, French ambitions in the south until the establishment of Georgia early in the Royal period. Indeed, one of the principal reasons that the colony of Georgia was established was to act as a buffer against the French in Louisiana and the Spanish in Florida. Led by James Edward Oglethorpe and Lord John Percival, first Earl of Egmont, a Board of Trustees received a charter in 1732 to govern the colony for 21 years. The first colonists arrived in 1733 and founded Savannah. Spain reacted immediately, and the resulting war lasted from 1739 until 1744. By 1740 the British government had taken the pressure off the Georgia militia by placing a company of regular troops in Georgia, buttressed by some Georgia militiamen, to contain Spanish ambitions.(555) James Oglethorpe was the first southern authority to actively oppose the peculiar institution of slavery. So great was his opposition to slavery, and his trust in the good character of the slave, that in 1740, when the South Carolina legislature was debating forming an expedition to destroy St. Augustine, he suggested that 1000 slaves be enlisted. About 200 would be armed while the other 800 would act as porters and servants. Masters were to be paid £10 per slave per month of service, masters assuming all risks except death. If a slave were killed his master would be compensated for his actual value, not to exceed £250.(556) The Georgia Charter of 1732 provided for a militia.
The charter noted that because the "provinces in North America have been frequently ravaged by Indian enemies," the embodiment of a militia was a matter of absolute necessity. It related that "the neighboring savages" had recently "laid waste by fire and sword" the province of South Carolina and that substantial numbers of English settlers had been "miserably massacred," so the militia must be armed, trained and disciplined at as early a date as possible. The colony was to supply "armor, shot, powder, ordnance [and] munitions." The governor, with the consent of the council, could levy war with the militia against all enemies of the crown.(557) In 1739 the provincial legislature of Georgia passed legislation regarding the arming of blacks that was remarkably similar to the measure passed only slightly earlier in South Carolina. A slave could be armed only upon the recommendation of his master. One who acted bravely in battle could be given various material rewards and excused from menial labor on the anniversary of his act of heroism.(558) Almost immediately after the passage of the act a slave revolt occurred in St. Andrew's Parish and an overseer was killed. That ended the idea of arming slaves in Georgia.(559) The Georgia militia had a role in restraining the slaves who revolted at Stono, South Carolina, in 1739. The South Carolina militia crushed the slave revolt and executed the leaders. Some of the other slaves who took part in the revolt were mutilated or deported. Some of the slaves escaped and attempted to cross Georgia in hope of gaining freedom in Spanish Florida. They were caught in a pincer between the South Carolina and Georgia militias, acting as posse comitatus, and killed or captured.(560) The colony did not prosper under the Board of Trustees and Oglethorpe's administration. His attempt to outlaw both rum and slaves was generally unsuccessful. Oglethorpe did develop a satisfactory policy with the Amerindians, and no major Indian war occurred during the entire history of the colony. In 1760 the Crown assumed control and sent Sir James Wright to assume the office of provincial governor. In 1763 the Peace of Paris yielded Florida into English hands. After that cession the role of the Georgia militia as guardian of the southern gate ended.(561) In 1738 Governor William Bull of South Carolina observed that the people of both his own colony and Georgia were "excellent marksmen" and "as brave as any People whatsoever." The problem was that, outside urban areas such as Savannah and Charleston, the people were settled far too sparsely to be of much use in the militia. Most frontiersmen were heavily engaged in agriculture, whether on their own or by supervising slaves, and had neither the time nor the ability to contain either the French or the Spanish forces. Indeed, they were barely able to resist the few Amerindian incursions on the frontier. Bull concluded that "Military Discipline is Inconsistent with a Domestick or Country Life."(562) In 1739 James Oglethorpe decided, with urging from both the home government and the legislature of South Carolina, to attack and reduce St. Augustine. Since St. Augustine was the center of Spanish power, there was nothing new about this strategy. Twice before the South Carolina militia had attacked and damaged the fortress of San Marcos, but had been unable to destroy it. Oglethorpe relied upon his own militia, British sea power, the element of surprise and a substantial number of volunteers and militia from South Carolina.
He was also able to recruit over a thousand Amerindian warriors as auxiliaries. The expedition failed. The South Carolina legislature issued a long and involved technical report. Three main conclusions pointed to failures in Oglethorpe's command: he misused the South Carolina volunteers; treated the Amerindians badly; and deployed his troops poorly. Whether it was his fault or not, he failed to achieve the surprise his mission required.(563) On 6 August 1754 the king sent instructions to John Reynolds, governor of Georgia, regarding the militia. "You shall take care that all planters and Christian servants be well and fitly provided with arms," the monarch wrote, and "that they be listed under good officers." The militia was to be mustered and trained "whereby they may be in a better readiness for the defence of our said province." He warned that the frequency and intensity of the militia training must not constitute "an unnecessary impediment to the affairs of the inhabitants."(564) In 1770 Georgia passed an "act for the better security of the inhabitants by obliging all white male persons to carry fire arms to all places of public worship." In 1774 Georgia, in an attempt to escape the various Indian wars which had plagued its neighbors, passed legislation designed to protect the natives from massacre. Knowing that it was virtually impossible to distinguish between a hostile and a friendly Amerindian, and being well aware of the bounty paid for scalps in the Carolinas and Virginia, Georgia passed an act which provided for the purchase of scalps only of hostiles. Arms for the colony were largely imported, but a few gunsmiths appeared in the colony. Jeremiah Slitterman was among the earliest men to make muskets for the provincial militia. He also served as colonial armourer, with a verified term from 1766 to 1775.(565) Georgia was the last of the thirteen colonies to be established and was also the last to join the patriot cause. Agitation for independence did not sit well there for several reasons. First, the colonists feared attack by the English from Florida and their Amerindian allies, the Creeks and Cherokees. Second, they expressed a measure of appreciation to the home government for the large amounts of money it had expended in setting up and maintaining the colony. Georgia was unrepresented at both the Stamp Act Congress and the First Continental Congress. The loyalists were well represented in Georgia and had a most active militia system. Many loyalist militiamen volunteered to serve with the British army when it finally landed in Georgia.(566) News of the events at Lexington and Concord, Massachusetts, reached Savannah on 10 May, about three weeks after the actual events occurred. Citizens exhibited considerable excitement, and that night an unknown group, presumably of the local militia, forced their way into the public gunpowder magazine and removed its contents. Royalist governor Wright offered a £50 reward for the apprehension of the thieves. No one came forward to report the criminals. There is some belief that the gunpowder was distributed among the local committees of safety in Georgia and South Carolina. Throughout the summer, and against Wright's specific orders, the patriots continued to remove arms and supplies from the public domain. On 10 July militiamen from Georgia and South Carolina stopped a royal vessel carrying gunpowder for the Amerindian trade and removed the cargo of about six tons before allowing the ship to continue.
On 2 June, upon hearing that the colony's cannon were to fire a salute on the king's birthday, patriots spiked them and threw them down the embankment. Royalists recovered several and had them repaired in time to fire the salute on George III's birthday on 4 June. The patriots erected a liberty pole the next day, assembled the militia, and drank toasts to "no taxation without representation." Governor Wright reported these incidents to London, but had no power to do more. He asked to be relieved, reporting that sentiment was overwhelmingly for the cause of independence.(567) Whig militiamen gathered food, arms, 63 barrels of rice, £123 in specie, gunpowder and other supplies to send to the relief of Boston. It is unclear whether militia volunteers in any significant numbers marched to Massachusetts. On 14 July 1775 the provincial legislature began to consider the creation of a wartime militia. Many schemes were advanced to reorganize it. Georgia realized that it must contribute to the general war effort by drafting a number of men from the militias to form a regiment of the Continental Line. As one delegate observed on 14 July 1775, "The militia was thoroughly organized and drilled and active military operations prefatory to resistance to the continuance of British aggression were seen on every hand."(568) Initially, Georgia was reluctant to join the rebellion and sign a declaration of independence. The other colonies responded by ordering an embargo against Georgia of all goods, but especially of arms and gunpowder. Once the legislature acted, the Continental Congress removed the embargo. Georgia applied to Congress for permission to export its indigo harvest and to import trade goods to pacify the Amerindians. Most of the 1775 legislative calendar was occupied with matters of governmental transition from the Crown to the Whigs. Wright, who had not received permission to withdraw and return to England, was powerless to stem the flow of power to the Whigs. Popular democracy took over, with three state congresses being elected in 1775 and a fourth in January 1776. The area of greatest governmental activity was the Committee of Safety. The provincial congresses had created and supervised the state committee of safety which, in turn, loosely supervised local committees. Most of the work of these committees was devoted to the reconstitution of the militia, appointment of officers, confirmation of commissions to existing officers, administration of loyalty oaths, contracting for arms and supplies, and securing of existing military supplies. The congress ordered that muskets be purchased "as nearly [as possible] to the size recommended by the Continental Congress," and the Committee of Safety was authorized to place an initial order for 400 stands of arms with bayonets for the militia.(569) The militiamen insisted on electing their own officers, most of whom were refused confirmation by Governor Wright. Confirmation was then undertaken by the Committee of Safety. The Committee had to negotiate an equitable settlement of a dispute between a company of rangers stationed on the frontier and backwoodsmen who, for unknown reasons, distrusted and had disarmed them. The committee required that the rangers take an oath of loyalty to the state and renounce their presumed loyalty to the governor. This done, the committee ordered that they be rearmed and returned to duty.
The Committee of Safety also recommended making certain changes in the basic militia act, but none of the first four congresses undertook to make the requested revisions. Legislative effort was directed at finding funding for the enormous expenses that the move to independence was requiring. The legislature also wished to direct agents to work at retaining the loyalty, or at least the neutrality, of the indigenous population. The British had agents hard at work among the Amerindians, and the legislature knew it had to act boldly to prevent a major Indian war. On 2 August a band of militiamen left Georgia and entered South Carolina and there took captive one Thomas Brown, reputed to be the natural son of Lord North, who it was thought had been sent to America to recruit a Tory militia. Taken to Augusta, Brown was tarred and feathered and forced to swear allegiance to the new nation. Once released, he did indeed attempt to recruit a Tory militia to avenge his maltreatment. The Sons of Liberty gathered a counter-force of perhaps 700 men. Brown had perhaps 150 men, and Governor Wright, refusing to test the loyalty of what remained of his local troops, declined to act on Brown's behalf. Brown retired to South Carolina and eventually moved to St. Augustine, Florida.(570) The enrolled militia of Georgia in 1775 numbered 1000 men under the command of brigadier generals Lachlan McIntosh and Samuel Elbert. This number remained constant despite the desertion of some men to the tories in 1776, 1777 and 1778. In the early years of the Revolution about 750 men had been drafted into, or had volunteered for service in, the Continental Line. In July 1778 the state could count 2000 men serving six-month enlistments in the Line. In that year the state also had 750 men enrolled as minute men. By 1779 the British presence had reduced the number of men in the Line to about 750, while the state militia counted about the same number. In 1781 General Nathanael Greene enrolled from the militia a special brigade to serve with him, known as the Georgia Legion and commanded by General James Jackson.(571) In January 1776 South Carolina sent an urgent message, reporting that some British ships of war had arrived at Charleston to secure military supplies and were now headed for Savannah. The Committee of Safety ordered the militia to a state of readiness and called the militia units from other areas to assemble in Savannah. Fearing a British-inspired slave revolt, it ordered some militia to join with overseers to search the plantations near the seacoast, especially along the Savannah River, for weapons and ammunition. Militiamen were ordered to stand coastal watch for British activity. Four British men of war arrived by 18 January. Governor Wright attempted to persuade the Committee of Safety that all the British wanted was to purchase rice and other supplies. While such sales were in technical violation of the Continental Congress' embargo, selling the British what they wanted was far better than suffering occupation, Wright argued. The Whig leaders responded by arresting the royal council, Wright and others suspected of being Tories. After a few days, the Committee accepted their paroles that they would not communicate with the British ships' captains. Emboldened by the arrival of several more ships with 200 regular soldiers on board, Wright fled to their protection, made a final appeal to forget about independence, and embarked for England.
Hoping to escape blame for entering into armed conflict with the Whigs in the south, neither Wright nor the naval commander, Captain Barclay, was prepared to force the issue just yet. Barclay continued to attempt to purchase the supplies he needed. Meanwhile, on 12 January 1776 the provisional legislature enacted a militia law which made all able-bodied men in all parishes, towns and counties subject to enrollment in the militia. Colonel Drayton was empowered to issue orders for the precise terms of enlistment and training.(572) The legislature decided against enlisting indentured servants, but allowed apprentices to serve.(573) The militia continued to gather in Savannah, with perhaps as many as 700 men on hand. The legislature sent to South Carolina for assistance. Meanwhile, Governor Wright, now safely on board the man of war Scarborough, requested that Sir Henry Clinton dispatch 500 to 1000 British regulars to reestablish royal government in Georgia. If these troops arrived soon, Wright argued, the vast majority of the citizens of Georgia would resume their allegiance to the Crown. If the royal government abandoned Georgia, it would be very costly to return and reestablish governance, since the Whigs would be very active in firming up loyalties and suppressing Tories. Most citizens, he thought, had been panicked and intimidated by the few active Whigs in the colony. Wright thought that the patriot militia would retreat at the first show of force. Captain Barclay agreed with this assessment, but was unwilling to land the troops under his command without Clinton's specific orders, and his orders in hand required him to return to Boston. On 20 June the legislature, on the recommendation of the militia officers, "ordered that every man liable to bear arms do Militia Duty in the Parish or District where he resides." There were no age limits noted in the decree, and exemptions were made only for those "who shall be enrolled in some Volunteer Company." The Georgia Provincial Congress appointed Colonel Lachlan McIntosh to command the state militia, assisted by Samuel Elbert and Joseph Habersham. By 28 April McIntosh had recruited 286 men, and within a few more weeks the active militia numbered at least 600. Estimates of 4000 men able to bear arms may have been optimistic, although technically the enrolled militia numbered that many. No more than one-half that number could be mustered at any given time if the colony was to survive economically.(574) The militia was organized into brigades under a brigade general and a major who served as brigade inspector, a quartermaster and a captain who served the general; regiments commanded by colonels or lieutenant-colonels, which included surgeons, quartermasters, paymasters and adjutants; and companies under captains, with a first and a second lieutenant, an ensign, four sergeants and 64 enlisted men. Additionally, there were drummers, fifers, color bearers and various other functionaries. One novel feature of the organization of the Georgia militia was its division into thirds in ordinary times and into halves in times of emergency.
The practice had begun in July 1775, even before the militia had been fully organized, and was renewed on 8 January 1777.(575) One-third,(576) or, under a state of emergency, one-half, of the militia was actually on active duty at any one time, with the remainder being allowed to remain at home.(577) Active duty was for a "fortnight," after which the militia was rotated with those who had not served earlier.(578) In time of grave emergency the governor could order "that a draft be made of one-half [of the militia] and that they hold themselves in readiness to march at a moment's notice."(579) On 24 October 1781 the legislature resolved that "his honor the Governor be requested and empowered to order immediately the whole of the Militia of this State to join camp as soon as they can possibly be collected."(580) Because so many militiamen from the frontier owned their own riding horses, some of the militia were enlisted as mounted infantry.(581) Some militia were ordered to serve as scouts, primarily on the frontier, or against the British, as the situation required.(582) While the patriots were establishing their control over the state government, many Tories fled to British protection in Florida, from which they raided into Georgia. Just as the colonists in the north had delusions of grandeur, thinking of conquering Canada, so the patriots in the south thought of conquering Florida. The latest intelligence showed that in the autumn of 1775 only about 150 British regulars occupied St. Augustine. On 1 January 1776, the Continental Congress offered to underwrite the cost of capturing the British garrison. Through well-placed Tory spies, the British knew as much as the patriots about the planned expedition. As an idea, there was much merit in the plan, for a successful invasion of Florida would sever the Amerindians from the British agents and end the cattle raids and pillaging of farms that tied down most militiamen. Lee estimated that he would need about 1000 men, of which Georgia was to supply 600 of its Continental Line and militia. It was September before the expedition got under way, by which time the British army had substantially strengthened its garrison in St. Augustine and also recruited many Amerindian warriors. Some of the troops reached St. John, where they laid waste the Tories' fields and farms. Few got farther south than Sunbury and none saw St. Augustine. Inclement weather, lack of transportation, and illness were the major impediments, although many militiamen were concerned about the increased pressures of Amerindian raids on their unprotected families on the frontier. The failure of the expedition did little to bolster the flagging spirits of the patriots. What it did do was to invite additional Tory raids on the outlying farms. By January 1777 intelligence reports indicated that the defenses at St. Augustine had been strengthened. British naval vessels controlled the port of Savannah. And on 18 February Captain Richard Winn surrendered his garrison of fifty men at Fort McIntosh on the Satilla River to British regulars and Tory militia. On 27 February 1776, the Continental Congress had created the Southern Military District, composed of Virginia, North Carolina, South Carolina, and Georgia, under the command of Major-general Charles Lee. South Carolina and Georgia came under the command of Lee's assistant, Brigadier-general John Armstrong. Lee ordered Armstrong to raise 2000 men, a wholly unrealistic number.
McIntosh's militia was inducted into national service and placed under Lee's command as a part of the Continental Line. When Lee arrived in Charleston, South Carolina, in the summer of 1776, McIntosh reported that raising six battalions in Georgia was quite impossible, although he did turn over command of four troops of cavalry. McIntosh pleaded Georgia's case to Lee. Warriors of the Creek Nation outnumbered the Georgia militia and were on very friendly terms with the British Indian agents. Raiders from Florida were already stealing cattle and other supplies and despoiling the backwoods. The British had a substantial military presence in St. Augustine, from which they supplied the natives and rewarded the raiders. As mounted and foot militia were drafted into national service and deployed where Lee thought best to use them, the frontiers, even Augusta and Savannah, lay open to attack. McIntosh hoped to use the mounted men to patrol the state's borders. Among his first priorities was cutting off contact between British agents and the Creeks. Lee decided to inspect conditions in Georgia personally. When he arrived in August, McIntosh was able to turn out 2500 militia in addition to his command, now in Continental service. Lee suggested exchanging the Georgia Continental Line with men from another area, perhaps South Carolina, since most had Tory friends either locally or in Florida. Lee thought the militia to be unreliable for the same reason. In a letter to General Armstrong dated 27 August 1776, Lee was even more critical of the Georgia militia. The people here are if possible more harum scarum than their sister colony [i. e., South Carolina]. They will propose anything, and after they have proposed it, discover they are incapable of performing the least. They have proposed securing their Frontiers by constant patrols of horse Rangers, when the scheme is approved of they scratch their heads for some days, and at length inform you that there is a small difficulty in the way; that of the impossibility to procure a single horse -- . . . . Upon the whole I should not be surprised if they were to propose mounting a body of Mermaids on Alligators. . . .(583) As with most states, the lines of authority between state and national control over soldiers of the Continental Line were unclear and ill-defined. Most states absolutely denied any national control over their militias. Lee was concerned for security, especially about plans laid for punitive expeditions against Florida. As it was, his concerns were well-founded, although it is impossible to say whether the Line or the militia were the greater offenders. Probably, information leaked to Tories in Florida from the one merely buttressed information received from the other. Lee's recommendations for rotation of Georgia's troops in the Line with those from other states angered local authorities, who resented any intimation that there were secret Tories among their men. Congress decided to augment the local troops by dispatching a battalion of riflemen and another of mounted troops to Georgia. Upon receipt of that information, Lee decided to move troops from Virginia and North Carolina to Georgia, angering the authorities in North Carolina. When no resolution was forthcoming, North Carolina withdrew its troops from congressional command. The Continental Congress in November 1776 ordered the states to create magazines for gunpowder and storage facilities for other supplies the army would require, along with similar supplies for the state militias.
Georgia was able to supply its own needs, along with those of other states, for rice and salted meat. The Georgia Constitution of 1777 provided for a militia. All counties having 250 or more militiamen under arms were permitted to form one or more battalions. The governor acted as commander-in-chief of all militia and other armed forces of the state. As such, the governor could appoint superior militia officers.(584) There were a few religious dissenters in Georgia, mainly Mennonites, who had been welcomed and granted haven under Oglethorpe's governance, but Georgia made no provision for their exemption. Some religious dissenters decided to leave the province when they were not granted military exemptions. Other "persons in the backwoods settlements" decided that they could not withstand an attack from the natives if the latter were "seduced by British aims" and began to abandon their farms and homesteads. "The commanding officers of the Militia [are] to be directed to stop and secure the property of such persons as are about to depart the Province."(585) The legislature decided to create and maintain a show of force in Savannah, and so on 16 January it resolved to "order forthwith a draft of at least one-third of the militia within . . . [the] parishes and have them immediately marched to Savannah together with every other person who may choose to come down as a volunteer." Those mustering and undergoing training in Savannah were to be paid £0/1/6 per day.(586) On 8 June 1776 the legislature ordered the militia to "hire a number of negroes to finish in a more proper manner the intrenchments about [Fort] Sunbury."(587) The legislature guessed correctly that any British invasion of Georgia would originate in Florida and move against Sunbury. In June 1776 it began to draft militia to staff the fort, rotating the militia every fortnight. Rotation helped to prevent the boredom that accompanies garrison duty, and it allowed the militiamen to keep in touch with their families, businesses and farms. When "it appears that the frontiers of this State, from Information, is in danger of being distressed by the Indians," the legislature moved to create a band of specially trained militia, the frontier rangers, or ranging companies.(588) For frontier rangers, who were to respond to a call on a moment's notice, the division was by halves rather than thirds, largely because there were so few able-bodied men to defend the state.(589) On 29 May 1776 the legislature authorized the formation of "three companies of Minutemen as soon as they can be furnished with arms, to be stationed where they may protect the Inhabitants from Indians."(590) The Amerindians, having received presents and arms from the British in Florida, went on the warpath for the first time in Georgia. So severe were the depredations that on 24 September 1778 Colonel Williamson recruited 546 militiamen, virtually all the experienced frontiersmen in the state militia, to repel the Creeks and Cherokees.(591) The patriot militia of Georgia elected its own officers during the Revolution.(592) The pay of militiamen was wholly tied to the pay South Carolina granted to its militiamen.
On 14 August 1779 the legislature ordered that pay "shall in every respect [be put] on the same footing that the South Carolina militia at present are."(593) The practice of using South Carolina's rate of pay for militia service antedated the Revolution, dating back to the time "when it was called out to suppress [slave] insurgents in South Carolina."(594) General Robert Howe, the new southern commander of the Continental Line, visited Savannah in March 1777, trying to recruit additional men. The Georgia light horse refused induction into Continental service, leaving only the 400 men of the First Georgia Battalion in national service. Button Gwinnett called out the militia, hoping to assemble enough troops to mount another attack on British Florida and relieve pressures on the frontier. Howe, angered at his poor reception in Georgia, refused to detach any troops under his command in Charleston to assist. The British authorities, having received information about the planned expedition, roused the Creeks and some other Amerindians to ravage the frontier. By 1 May two groups had embarked from Georgia, McIntosh's Continental Line making the voyage by water and Colonel John Baker's mounted militia making the trip overland. The militia arrived at St. John first and were immediately dispersed by the British regulars who were lying in ambush. McIntosh continued to experience difficulties in transit and abandoned the expedition on 26 May. The only tangible result was the confiscation of about a thousand head of cattle. Once again the Tories responded by raiding into Georgia in parties rarely exceeding 150 men. They sacked Augusta and came within five miles of Savannah. The militia seemed to be ineffective in dealing with the marauders. The legislature authorized the commissioning of bands of fifteen or more men to enter Florida and wreak what havoc they could. On 10 October 1777, Congress sent McIntosh, now a general, north to assume a new command and appointed Colonel Samuel Elbert to replace him in command of the Georgia Continental Line. Elbert inherited a command in which the troops had not received regular pay for some months and in which morale was low and desertions were high. The militia ignored the Line and refused induction into it. Early in 1778 there was again discussion about making the now annual expedition against St. Augustine. Elbert thought that he would need 1500 men to stand any chance of capturing St. Augustine, which meant he required both a substantial infusion of regular troops and a significant number of Georgia militiamen. Word reached Savannah that British Governor Tonyn had sent German immigrants into Georgia to recruit German-speaking settlers and that Loyalist Florida Rangers were again raiding cattle along the frontier. Intelligence reported that some 400 to 700 disaffected South Carolina Tories were migrating through Georgia on their way to the British settlement in Florida. Governor Houstoun sent the militia to intercept them, but no contact was ever made. By mid-April some 2000 troops were in readiness to invade Florida. Robert Howe commanded members of the Continental Line from South Carolina and Georgia; Colonel Andrew Williamson commanded the attached South Carolina militia; and Governor Houstoun took personal command of the Georgia militia, probably because most of those men had mustered in response to his direct appeal. The Whigs had a genuine opportunity to capture St.
Augustine, for they outnumbered the British forces by about two to one and were probably better equipped and in superior physical condition. The problems in this third expedition were at the command level. The headstrong Houstoun, barely 30 years of age and with no military experience, thought himself the senior officer and refused to accept orders from Howe. Following this example, Williamson announced that he would not accept orders from either Howe or Houstoun because his militia were independent of both national and Georgia state command. Although the Florida Rangers and their Tory and Amerindian allies retreated at the approach of the patriot force, Howe asked for and received permission to withdraw his men because of the problems of command. Congress decided that if another expedition were mounted against Florida, it would have to be undertaken with trained regulars. The Georgians, militia or soldiers of the Line, leaked too much information to the British. The militia had proven themselves unreliable and undisciplined in the three previous expeditions and would be left behind to defend the frontier from Tory and Amerindian attacks. Civilian Whig authorities had assumed throughout the early years that with some effort they could defeat the English and capture Florida. Military opinion on both sides generally agreed that neither side was strong enough to conquer the other, and that even if one side did win, it would not be able to hold on to the prize. Both sides had thus been reduced to raiding the cattle, food and other supplies of the other. The Whigs' punitive expeditions had done little more than cause the British to bribe the Amerindians to undertake massacres along the unprotected frontier. Because the militia was small, hesitant to leave its home areas undefended, ill organized and poorly led, it failed to perform its primary function of protecting the home folks. If anyone can be said to have come out ahead in this bloody game of attrition, it was probably Tonyn's Tories, Amerindians and Florida Rangers. The Rangers were loathed by the regulars because they were essentially the dregs of humanity who had been given a license to plunder, but they proved to be a better force because of superior organization, better administration and superior weapons and supplies. Their destruction of homes, crops and supplies, combined with the stealing of livestock, caused a great deal of hardship among the patriots. The Georgia militiamen were poorly supplied, many being without blankets, canteens, knapsacks, shoes or firearms. They were paid in state currency which had little value and indeed was generally not accepted outside the state. The well-supplied and well-equipped enemy received British currency, still accepted anywhere at face value. Most refused induction into the Continental Line, and some were reluctant to report to militia musters, fearing that they might be conscripted by recruiting agents for the Line. Poor leadership also contributed to poor morale, although the militiamen had to share a portion of the blame for that failure since they elected most of the officers. Georgia's Amerindian policy was generally a failure, although the Whigs did make attempts to pacify the natives by the usual methods of meeting with them, distributing gifts, pledging that their lands would be protected and guaranteeing their borders. The first Cherokee War of the Revolution began in the summer of 1776, although most of the fighting occurred in the Carolinas.
Eventually, the combined efforts of the militias of Georgia, North Carolina, South Carolina and Virginia defeated them. It was perhaps most important that the Creeks generally decided against allying with the Cherokees. After the Cherokees acknowledged defeat and signed the Treaty of DeWitt's Corner on 20 May 1777, there were few major problems with the natives. However, sporadic raids, largely incited by British agents, kept the frontier militia in a constant state of readiness. Georgia had been spared any direct military action in the Revolution until 1778, when the British moved against Savannah. When Sir Henry Clinton succeeded Sir William Howe as the British commander, he was determined to carry the war into the south. The British had planned to return ever since James Wright had been forced to flee the colony. Clinton's staff thought that it would require 5000 troops to capture Charleston, but only 2000 to take Savannah. Georgia thus became the logical place to begin the reduction of the southern colonies. On 27 November 1778, the British command sent Lieutenant-colonel Archibald Campbell of the 71st Scottish Regiment with 3000 British and Hessian regulars and four battalions of loyalists to accomplish the reduction of Georgia. General Robert Howe, then commander of the Southern Department, had only a mixed force of about 1000 militia and regulars with which to oppose him, and could not withstand the assault. On 23 December Campbell arrived at Tybee Island near Savannah, unopposed. The patriot army crossed into South Carolina. Meanwhile, General Augustine Prevost, marching northward from Florida, captured the remaining patriot militia and army at Fort Sunbury on 10 January 1779. Having eliminated both regular army units and patriot militia as a factor in Georgia, Campbell was uncertain what to do next. The home office had wished to test its theory that the tories of the southern states were just waiting to show their loyalty, and would do so in considerable numbers. So Campbell decided to spread his command and seek out loyalist supporters.(595) Howe was shortly thereafter replaced by Benjamin Lincoln (1733-1810). Howe delayed his departure to assist the Georgia militia, who were being pressed by British-induced Amerindian raids all along the frontier. The native American force was not large, but it was extremely mobile, largely massacring isolated settlements and striking from ambush. By 29 January Prevost's army had occupied Savannah. The British, assisted by loyalists, occupied the most populous parts of the state within a few months. Heartily encouraged, Campbell made additional sorties into the back country of Georgia, but these proved to be as fruitless as the first action was successful. Under the protection of the British army, James Wright returned to occupy the governor's office. There were only a few military actions of note. William Moultrie, with Georgia and Carolina militia, successfully defended Port Royal, South Carolina, in early February. The patriot militia under Colonel Andrew Pickens won a small battle over loyalist militia at Kettle Creek, Georgia, on 14 February 1779. A certain Colonel Boyd had recruited about 700 loyalist militia and marched south to join Campbell's regulars in Georgia.
Colonel Andrew Pickens gathered some 400 militia and surprised the tories at Kettle Creek, killed Boyd and about forty of his men, wounded and captured another 150, and scattered the remainder. Pickens took his prisoners back to South Carolina, where five leaders were hanged as traitors, another 65 were condemned but pardoned, and others were forced to take an oath of loyalty to the republic.(596) The patriots lost an engagement at Brier Creek on 3 March, where General John Ashe (1720-1781) led the patriot militia, which lost 350 men while inflicting only twenty casualties on the mixed British and tory force. Leaving Campbell in command at Savannah, Prevost moved northward into South Carolina. Meanwhile, Major-general Benjamin Lincoln rallied the patriot army and moved to Purysburg, about fifteen miles from Savannah. The swamps surrounding Lincoln's army inhibited Prevost's movements, and not wanting to become entrapped in such hostile territory, Prevost sent Major Gardiner to Port Royal Island. Lincoln sent General William Moultrie, who led the Georgia militia against Gardiner; Gardiner withdrew and returned to Savannah. Emboldened by French support, patriots made a desperate assault on Savannah, but were repulsed. Washington detached a corps of the Continental Line under General Benjamin Lincoln to support the militia in an assault on Augusta on 23 April. Prevost had moved his army northward along the coast toward Charleston, South Carolina. He had hoped loyalists could retain control in Georgia. Upon learning of Lincoln's arrival, he moved south. Lincoln's army met Prevost at Stono Ferry on 19 June. Lincoln suffered 300 casualties against 130 inflicted on the British, thus allowing Prevost to retain Savannah. Still, the British controlled only the area immediately surrounding Savannah, and the tories had been disheartened. The British finally withdrew from Savannah in 1782 as a result of patriot pressures to the north. The Georgia loyalist militia could not withstand patriot pressure and quickly disbanded and fled. The Georgia militia probably fell further short of the universally accepted goals of colonial militia units than did the militias of the other states. It was highly ineffective in stopping the constant raids from Florida, did not fill the ranks of the Continental Line, and did very little to contain the native Americans. In the latter regard, it was fortunate that other militias succeeded in breaking the power of the Cherokees. 1. Edward W. James, ed., The Lower Norfolk County, Virginia, Antiquary. 5 vols. New York: Peter Smith, 1951, 1: 104. 2. Williams v. State, 490 S.W. 117 at 121. 3. Don Higginbotham, Daniel Morgan. Chapel Hill: University of North Carolina Press, 1961, 132-33; Hugh F. Rankin, Francis Marion. New York: Capricorn, 1973; John R. Alden, The South in the Revolution, 1763-1789. Baton Rouge: Louisiana State University Press, 1957, 267; Robert Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly 3d series, 19 : 154-75. 4. William L. Shea. The Virginia Militia in the Seventeenth Century. Baton Rouge: Louisiana State University Press, 1983, 136-40. 5. Virginia Charter of 1606 in Benjamin P. Poore, ed. The Federal and State Constitutions, Colonial Charters and Other Organic Laws of the United States. 2 vols. Washington: U. S. Government Printing Office, 1877, 2: 1891. 6. Virginia Charter of 1612, in Ibid., 2: 1906. 7. Travels and Works of Captain John Smith. Edward Arber and A. G. Bradley, eds. 2 vols.
Edinburgh: Grant, 1910, 2: 433-34. 8. Records of the Virginia Company of London. S. K. Kingsbury, ed. 4 vols. Washington, D.C.: U.S. Government, 1906-35, 3: 21-22, 27, 220. 9. R. Hamor, A True Discourse on the Present State of Virginia. Richmond, Va.: Virginia State Library, 1957, 5-16; D. B. Rutman, "The Virginia Company and its Military Regime," in D. Rutman, ed. The Old Dominion. Charlottesville: University of Virginia Press, 1964, 1-20. 10. Quoted in Congressional Record, Executive Document 95, 48th Congress, Second Session. 11. Quoted in Congressional Record, Executive Document 95, 48th Congress, Second Session. 12. William Shea, "The First American Militia," Military Affairs, : 15-18; Records of the Virginia Company, 3: 164-73. 13. Statutes at Large, Being a Collection of All Laws of Virginia. W. W. Hening, ed. 13 vols. Richmond: State of Virginia, 1818-23, 1: 114. 14. Hening, Statutes at Large, 1: 121-29. 15. R. A. Brock, ed. Virginia Company of London, 1619-1624. 2 vols. Richmond: Virginia Historical Society, 1889, 2: 7, 9. 16. Act XXII of 25 September 1622, Hening, Statutes at Large, 4: 127-29. 17. Hening, Statutes at Large, 1: 122-23. 18. Records of the Virginia Company, 4: 580-84; Minutes of the Council and General Court of Colonial Virginia, 1622-1632. Richmond, Va.: State of Virginia, 1924, 18. 19. Journals of the House of Burgesses, 1619-1777. 30 volumes. Richmond: State of Virginia, 1905-15, 1: 52-53. 20. Hening, Statutes at Large, 1: 140-41, 153. 21. In Virginia Magazine of History and Biography, 2: 22-23. 22. Hening, Statutes at Large, 1: 167, 174, 176, 219. 23. Lower Norfolk County Minute Book, 1637-1646. manuscript, Virginia State Library, 35, 39, 99. 24. Hening, Statutes at Large, 1: 224, 226. 25. Raoul F. Camus. Military Music of the American Revolution. Chapel Hill: University of North Carolina Press, 1976, 40. 26. "Instructions to Sir William Berkeley," 1642, in Virginia Magazine of History and Biography, 2: 281-88. 27. Hening, Statutes at Large, 1: 219, 285. 28. S. M. Ames, ed. County Court Records of Accomack--Northampton Counties, Virginia, 1640--1645. Richmond: Virginia Historical Society, 1973, 268. 29. Hening, Statutes at Large, 1: 263. 30. See William L. Shea, "Virginia at War, 1644-46," Military Affairs, : 142-47. 31. General Court Session of 23 May 1677. 32. Hening, Statutes at Large, 1: 292-93. 33. Hening, Statutes at Large, 1: 293, 315-19. 34. See Northumberland County, Virginia, Order Book 2. Manuscript, Virginia Historical Society, 13. 35. Wesley Frank Craven, "Indian Policy in Early Virginia," William and Mary Quarterly, third series, 1: 73-76; Hening, Statutes at Large, 1: 140-41, 292-93, 323-26, 355. 36. Hening, Statutes at Large, 1: 393-96. 37. Northumberland County, Virginia, Order Book 2, 8. 38. Act XXIV of 1658-59, Hening, Statutes at Large, 1: 525. 39. Hening, Statutes at Large, 1: 515; 2: 34-39. 40. Hening, Statutes at Large, 2: 15. 41. Hening, Statutes at Large, 1: 185, 193. 42. Thomas Ludwell, "Description of Virginia," 17 September 1666, a report to the Lords of Trade, in Virginia Magazine of History and Biography, 5 : 54-59. 43. Hening, Statutes at Large, 2: 237, 336. 44. Thomas J. Wertenbaker. Virginia Under the Stuarts, 1607-1688. Princeton: Princeton University Press, 1914, 99-100. 45. Hening, Statutes at Large, 2: 326-36, 341-50; Wilcomb E. Washburn. The Governor and the Rebel: A History of Bacon's Rebellion in Virginia. Chapel Hill: University of North Carolina Press, 1957. 46. Hening, Statutes at Large, 2: 341. 47.
Hening, Statutes at Large, 2: 326-36; 341-50. 48. Camus, Military Music of the American Revolution, 41. 49. Hening, Statutes at Large, 2: 336, 410, 439, 491-92. 50. Nathaniel Bacon, quoted in Thomas J. Wertenbaker. Torchbearer of the Revolution: The Story of Bacon's Rebellion and its Leader. Princeton: Princeton University Press, 1940, p. 135. 51. Bacon's rebellion was widely held a generation ago to have been a political event, an early revolution undertaken to ensure the rights of Englishmen, and so on. It was brought on by the despotic conduct of a tyrannical governor who had illegally and unjustly raised taxes without the consent of the governed. See, for example, Thornton Anderson, "Virginia: The Beginnings" in his Development of American Political Thought. New York: Appleton-Century-Crofts, 1961, 1-18. It is now viewed quite differently. Berkeley was the just defender of the peaceful Amerindians who wanted to prevent a mad, bloodthirsty and covetous bigot from exterminating a whole race. Berkeley wanted only to punish the wrongdoers on the Indian side while protecting the vast majority who were peace-loving brothers. See, for example, "Bacon's Rebellion," in Thomas C. Cochran and Wayne Andrews, Concise Dictionary of American History. New York: Scribner's, 1962, 79. 52. Quoted in Virginia Magazine of History and Biography, 1 [1893-94]: 2. 53. "Causes of Discontent in Virginia, Isle of Wight," numbers 7 and 8, 1676, Virginia Magazine of History and Biography, 2: 381-92. See also the statement on the same subject by the inhabitants of Surry County, in Ibid., 2: 170-73. 54. Hening, Statutes at Large, 2: 513. 55. Hening, Statutes at Large, 2: 233-45. 56. Hening, Statutes at Large, 2: 481. 57. Hening, Statutes at Large, 3: 335-36, 459. 58. An Act for the better supply of the country with armes and ammunition, Hening, Statutes at Large, 3: 13-14; 36 Charles II act iv, April 1684. 59. Quoted in Virginia Magazine of History and Biography, 2: 263-64. 60. Camus, Military Music, 41. 61. Hening, Statutes at Large, 1: 526. See also "The Randolph Manuscript," Virginia Magazine of History and Biography, 20 : 117. 62. Shea, Virginia Militia, 122-35, 140. 63. Hening, Statutes at Large, 3: 69; Pallas v. Hill, Hening and Munford Reports, 2: 149. 64. Great Britain. Public Records Office Records: Colonial, 4: 1306. 65. Beverly Fleet, ed. Virginia Colonial Abstracts. Richmond, Va.: Fleet, n.d., 6: 14. 66. "Charges Against Governor Nicholson," Virginia Magazine of History and Biography, 3: 373-82. 67. John Shy. Toward Lexington. Princeton: Princeton University Press, 1965, 11; R. A. Brock, ed. Official Letters of Alexander Spotswood. 2 vols. Richmond: Virginia Historical Society, 1: 131-33, 194, 197, 204-07. 68. "Journal of John Barnwell," Virginia Magazine of History and Biography, 6 : 50. 69. Virginia State Papers, 1: 152. 70. Hening, Statutes at Large, 3: 335-42. 71. Alexander Spotswood, "Letter to the Lords, Commissioners of Trade," The Official Letters of Alexander Spotswood. R. A. Brock, ed. 3 vols. Richmond, Va.: State of Virginia, 1882-85, 2: 37, 194-212. 72. Letter to the Lords, Commissioners of Trade, Spotswood Letters, 2: 37, 194-212. 73. Virginia Gazette, 14 December 1737. 74. Spotswood Letters, 1: 163. 75. Spotswood Letters, 2: 140. 76. Spotswood Letters, 2: 209-10. 77. Journal of the House of Burgesses, 1629-1677. 30 vols. Richmond: State of Virginia, 1905-15, August 9, 1715. 78. Spotswood Letters, 1: 210-13. 79.
Spotswood, Letters, 1: 121, 130-35, 141-45, 166-67; Hening, Statutes at Large, 4: 10. 80. Spotswood, Letters, 1: 130. 81. Hening, Statutes at Large, 3: 343, 464-69; Spotswood, Letters, 1: 167. 82. Spotswood, Letters, 1: 169-72, 2: 19-25. 83. See Spotswood to Lords of Trade, especially letter of 9 May 1716, Spotswood, Letters, 2: 25, 121, 145. 84. Hening, Statutes at Large, 4: 103, 405, 461. 85. Spotswood, Letters, 2: 227. 86. Hening, Statutes at Large, 4: 118-19, 130-31. 87. Hening, Statutes at Large, 4: 119. 88. Hugh Jones. Present State of Virginia. London, 1724; see also College Catalogue of William and Mary, 1855, 5-10. 89. Virginia Gazette, 7 November 1754, supplement. 90. Original documents reported in Virginia Magazine of History and Biography 3 : 119. 91. Minute Book, King and Queen County, 5: 47. 92. William Byrd's History of the Dividing Line Betwixt Virginia and North Carolina. edited by William K. Boyd. Raleigh, N. C., 1929, 116. 93. Hening, Statutes at Large, 5: 16-17; 6: 533; 7: 95. 94. Pennsylvania Archives. J. H. Linn and W. H. Egle, eds. 119 vols. in 9 series. Harrisburg: Commonwealth of Pennsylvania, 1852-1935. [Hereinafter, Pa. Arch., with series first, then vol. number, followed by page number]. 1 Pa. Arch. 1: 581-83, 616-19; Archives of Maryland. W. H. Browne et al., eds. 72 vols. Annapolis: State of Maryland, 1883-. [Hereinafter Md. Arch.]. 28: 193-99, 224; Pennsylvania Colonial Records. 16 vols. Harrisburg: Commonwealth of Pennsylvania, 1852-60. [Hereinafter Pa. Col. Rec.], 4: 455-56. 95. William A. Foote, "The Pennsylvania Men of the American Regiment," Pennsylvania Magazine of History and Biography, 88 : 31-38; New York Weekly Journal, 17 January 1743. 96. Great Britain. Public Records Office, Colonial Office. Roll 5: 1325, 235, 237-39. 97. Boston News Letter, 18 December 1746; Maryland Gazette, 21 October 1746. 98. Boston News Letter, 9 August 1753 and 28 March 1754; Virginia Gazette, 23 February 1754; Pennsylvania Gazette, 12 March 1754; Maryland Gazette, 14 March 1754. 99. Boston News Letter, 21 and 28 March 1754. 100. See, for example, Pennsylvania Gazette, 10 May 1753. 101. Robert Dinwiddie served as governor of Virginia from 20 November 1751 until January 1758. His earliest appointment seems to have dated from 1727 when he was appointed collector of customs for Bermuda. He was promoted to surveyor of customs for all American colonies. Worn out by the performance of his duties in the Seven Years' War, he returned to England in 1758 and died in July 1770. Official Records of Governor Robert Dinwiddie. R. A. Brock, ed. 2 vols. Richmond: State of Virginia, 1883-84. 102. Hening, Statutes at Large, 6: 530-33. 103. Virginia Gazette, 19 July 1754. 104. "A Proclamation for Encouraging Men to Enlist in his Majesty's Service for the Defence and Security of this Colony." Hening, Statutes at Large, 7. 105. Hening, Statutes at Large, 6: 438. 106. Brock, Dinwiddie Papers, 1: 344. See also Dinwiddie to Colonel Jefferson, 5 May 1756, in Ibid., 1: 405. 107. Dinwiddie Papers, 1: 515. 108. Dinwiddie to Charles Carter, 18 July 1755 in Dinwiddie Papers, 2: 101. 109. Edward Braddock to Robert Napier, 17 March 1755, in Stanley Pargellis, editor. Military Affairs in North America, 1748-1765. Hampden, Ct.: Anchor, 1969, 78. 110. Dinwiddie Papers, 2: 67, 93. 111. Waddell, Annals of Augusta County, 112. 112. Dinwiddie Papers, 2: 100-200. 113. Hening, Statutes at Large, 6: 550-51. 114. Dinwiddie Papers, 2: 207-10. 115.
The Writings of George Washington from the Original Manuscript Sources, 1745-1799. edited by Jacob E. Cooke and John C. Fitzpatrick. 39 volumes. Washington: Washington Bicentennial Commission, 1931-44, 1: 235. 116. Writings of Washington, 1: 399-400. 117. Writings of Washington, 1: 416. 118. Washington to Dinwiddie, 9 November 1756, Writings of Washington, 1: 493. 119. Writings of Washington, 1: 99. 120. Writings of Washington, 1: 158-59. 121. Writings of George Washington, 1: 188. 122. Writings of Washington, 1: 202. 123. Washington to Dinwiddie, 15 May 1756, Writings of Washington, 1: 371. 124. Dinwiddie Papers, 2: 197-200. 125. Hening, Statutes at Large, 6: 631-48. 126. Dinwiddie Papers, 2: 344-45. 127. Dinwiddie Papers, 1: 41. 128. Virginia Magazine of History and Biography, 1: 287. 129. Dinwiddie to Washington, 8 May 1756, in Dinwiddie Papers, 1: 406-08. 130. Pennsylvania Gazette, 15 April 1756. 131. Maryland Gazette, 30 January 1755. 132. Maryland Gazette, 12 September 1754. 133. Boston News Letter, 3 January 1754. 134. Boston News Letter, 6 September and 6 December 1753 and 3 January 1754. 135. Journals of the House of Burgesses, 1756-1758, 346, 356-61; Dinwiddie Papers, 2: 390; Preston Papers, 1QQ: 131-36. 136. Hening, Statutes at Large, 7: 17. 137. Boston News Letter, 13 May 1756. 138. Dinwiddie to Henry Fox, 10 May 1756, in Dinwiddie Papers, 1: 408-10. 139. Journal of the House of Burgesses, 1758-1761, 379-81. 140. Preston Papers, 1QQ: 131-33; Journal of the House of Burgesses, 1756-58, 499. 141. Dinwiddie to County Lieutenants, 5 May 1756, in Dinwiddie Papers, 1: 404. 142. Dinwiddie to Sharpe, 24 May 1756, in Dinwiddie Papers, 1: 426-28. 143. Dinwiddie to Washington, 27 May 1756, in Dinwiddie Papers, 1: 422-24. 144. Dinwiddie to Abercrombie, 28 May 1756, in Dinwiddie Papers, 1: 424-26. 145. Marion Tinling, ed. Correspondence of the Three William Byrds of Westover, Virginia. 2 vols. Richmond: Virginia Historical Society, 1977, 2: 616. 146. Hening, Statutes at Large, 7: 93-95. 147. Writings of Washington, 1: 354-59. 148. See Dinwiddie to Thomas Jefferson, 5 May 1756, in Dinwiddie Papers, 1: 405. 149. In Louis K. Koontz. The Virginia Frontier. Baltimore: Johns Hopkins University Press, 1925, 85, 176. 150. Dinwiddie Papers, 2: 476. 151. Dinwiddie to Loudoun, 28 October 1756, Dinwiddie Papers, 1: 532-34. 152. Dinwiddie Papers, 2: 581-92; Military Grants, French and Indian War, in Virginia Land Office; Preston Papers, 13, 18, 26. 153. Dinwiddie Papers, 2: 620-23. 154. Dinwiddie to James Atkin, 16 June 1757, Dinwiddie Papers, 1: 640. 155. Dinwiddie to Pitt, 18 June 1757, Dinwiddie Papers, 1: 641-42. 156. The Act of the Assembly, Now in Force in the Colony of Virginia. Williamsburg: Rind, Purdie and Dixon, 1769, 334-42. 157. An Act for Reducing the Several Acts for Making Provision against Invasions and Insurrections into One Act, Acts of the Assembly, 342-44. 158. Hening, Statutes at Large, 7: 172-73; "Memorial to the House of Burgesses, 3 April 1758," in Legislative Journal, 3: 1183. 159. George Reese, ed. Official Papers of Francis Fauquier, 1758--1768. 3 vols. Richmond: Virginia Historical Society, 1980, 2: 168. 160. Koontz, Virginia Frontier, 293. 161. Preston Papers, 54. 162. Journal of the House of Burgesses, 1758-1761, appendices; Hening, Statutes at Large, 7: 492-93. 163. Preston Papers, 55. 164. Draper mss, 2QQ44; Withers, Border Warfare, 99; Koontz, Virginia Frontier, 288; Waddell, Annals of Augusta County, 198-99. 165. Ibid., 263, 289. 166. Ibid., 292-93. 167. Howard H.
Peckham. Pontiac and the Indian Uprising. Princeton: Princeton University Press, 1947, 214-17. 168. Amherst to Lt.-gov. Fauquier, 29 August 1763, Collections of the Michigan Pioneer and Historical Society, 19 : 228-29. 169. Hening, Statutes at Large, 7: 93-106; 274-75; 8: 241-45, 503; The Acts of the Assembly nowe in Force in the Colony of Virginia. Williamsburg: Rind, Purdie and Dixon, 1769, 474-76. 170. Virginia Gazette, 13 November 1766. 171. Dunmore was the last royal governor of Virginia. He opened up the Ohio Territory by defeating the Shawnee in Dunmore's War of 1774. He fought constantly with the House of Burgesses and finally had to flee to a British man-of-war. He returned to England in July 1776 and later served as governor of the Bahamas, 1787 to 1796. 172. For example, in Rivington's New York Gazette, 17 November 1774. 173. Virgil A. Lewis. History of the Battle of Point Pleasant. Charleston, W. Va., 1909; E. O. Randall, "Lord Dunmore's War," Ohio Archaeological and Historical Publications, 11 : 167-97; R. G. Thwaites and Louise P. Kellogg, eds. Documentary History of Dunmore's War. Madison: Wisconsin State Historical Society, 1905. 174. William Wirt Henry, ed. Patrick Henry: Life, Correspondence and Speeches. 3 vols. New York: Scribner's, 1891, 1: 252. 175. American Archives. Peter Force, ed. 9 vols. in series 4 and 5. Washington: U. S. Government, 1837-53. 4 Amer. Arch. 2: 1211-15. 176. Robert Douthat Meade. Patrick Henry. 2 vols. Philadelphia: Lippincott, 1957-62; Moses C. Tyler. Patrick Henry. New York: Houghton Mifflin, 1915; Richard R. Beeman. Patrick Henry. New York: McGraw-Hill, 1974; Norine Dickson Campbell. Patrick Henry, Patriot and Statesman. New York: Devin-Adair, 1969. 177. Henry, Patrick Henry, 1: 257-58. 178. R. A. Brock, "Eminent Virginians," in Historical and Geographical Encyclopedia. New York: Hardesty, 1884, 348. 179. Henry, Patrick Henry, 1: 258. 180. Edmund Randolph, ms. in Virginia Historical Society. 181. 4 Amer. Arch. 1: 881. 182. Henry, Patrick Henry, 1: 279. 183. Henry, Patrick Henry, 1: 156. 184. Letter from Virginia, 1 July 1775, Morning Chronicle and London Advertiser, 21 August 1775. 185. Henry, Patrick Henry, 1: 280. 186. Letter from Virginia, dated 16 April 1775, London Chronicle, 1 June 1775. 187. J. T. McAllister, The Virginia Militia in the Revolutionary War. Hot Springs, Va.: McAllister, n. d., 7. 188. Revolutionary Virginia: The Road to Independence. Brent Tarter, ed. 8 vols. Charlottesville: University of Virginia Press, 1983, 7: 515. 189. Virginia Magazine of History and Biography, 22 : 57. 190. 5 Amer. Arch. 3: 52. 191. Ordinances of the Convention, July 1775, 1: ch. 1: 33, 34, 35. 192. Ordinances of the Convention, July 1775, 33. 193. Ibid., 34. 194. Julian P. Boyd, ed. Papers of Thomas Jefferson. Princeton: Princeton University Press, 1950-, 1: 268. 195. "Letters from Virginia, 1774-1781," Virginia Magazine of History 3: 159. 196. Each man enlisted was to be equipped with a hunting shirt, a pair of leggings and a proper arm at the public expense. If the men provided their own weapon they were to receive an additional allowance of 20 shillings per year. Hening, Statutes at Large, 9: 18. 197. Lenora H. Sweeny. Amherst County, Virginia, in the Revolution. Lynchburg, Va.: Bell, 1951, 3. 198. Benson J. Lossing. Pictorial Field Book of the Revolution. 2 vols. New York: Harper, 1851-52, 2: 536. 199. Letter from Philadelphia, 12 March 1776, London Gazetteer and New Daily Advertiser, 16 May 1776. 200. McAllister, Virginia Militia, 5. 201.
Henry, Patrick Henry, 1: 317. 202. Letter of Jasper Yates to James Wilson, 30 July 1776, "You may recollect that sometime ago the Convention of Virginia resolved that 200 Indians should be inlisted by John Gibson in the service of that Colony," in Pennsylvania Magazine of History and Biography, 39 : 359. 203. John Page to Jefferson, 24 November 1775, Jefferson Papers, 1: 266. 204. Hening, Statutes at Large, 9: 30ff. 205. Hening, Statutes at Large, 9: 28-29, 139-41; Revolutionary Virginia, 7: 505. 206. Revolutionary Virginia, 7: 597. 207. Revolutionary Virginia, 7: 633. 208. Virginia Gazette, 23 September 1775; Henry, Patrick Henry, 1: 319. 209. Hening, Statutes at Large, 9: 9-10; Virginia Gazette, 1 April 1775. 210. Robert G. Albion and Leonidas Dodson, eds. Journal and Letters of Philip Vickers Fithian, 1773-1774. Williamsburg: Colonial Williamsburg Foundation, 1943, 24. 211. Hening, Statutes at Large, 9: 267-68. 212. Humphrey Bland. A Plan of Military Discipline. Several editions were published. Washington specified the London edition of 1762. 213. Turpin de Crisse (1715-1792). An Essay on the Art of War. London, 1755. The count was among the most distinguished scholars of military history of his day. He had translated and interpreted many of the more important works of antiquity. He served in Belgium, Holland and France under Marshal Saxe. 214. Roger Stevenson. Military Instructions for Officers, "lately published in Philadelphia" [Philadelphia, 1775]. Washington had a copy of this work in his library at Mount Vernon. 215. M. de Jeney. The Partisan. London, 1760. 216. William Young. Essays on the Command of Small Detachments. 2 vols. London, 1771. 217. Thomas Simes. The Military Guide for Young Officers. Philadelphia, 1776. This work was published in two volumes, the second a military dictionary; the first a military scrapbook containing many quotations from other works, such as Bland and Saxe. 218. Friedrich Kapp. The Life of Major-General Frederick William von Steuben. New York: Mason, 1859, 130. 219. John W. Wright. Some Notes on the Continental Army. Vails Gate, N. Y.: National Temple Hill Assn., 1963, 2-4. 220. Henry, Patrick Henry, 1: 327, 337. 221. Technically, Patrick Henry carried a commission which read, "colonel of the first regiment of regulars, and commander in chief of all forces raised for the protection of this colony"; in fact, Henry was too heavily involved in the politics of the Convention and political organization of the state to command any force. There was an apparent contradiction between Woodford's and Henry's commissions, but there was little opportunity for clash because each man was too busy at his own task. Henry, Patrick Henry, 1: 338-39. 222. William Woodford to Convention, 7 December 1775, in Henry, Patrick Henry, 1: 335-37. 223. Henry, Patrick Henry, 1: 429. 224. Virginia Bill of Rights, 1776, in Poore, Constitutions, 2: 1909. 225. The Papers of Thomas Jefferson. Julian P. Boyd and others, eds. 20 vols. Princeton: Princeton University Press, 1950-, 1: 344-45. 226. Jefferson Papers, 1: 353. 227. Jefferson Papers, 1: 363. 228. Virginia Constitution of 1776, in Poore, ed. Federal and State Constitutions, Colonial Charters, and Other Organic Laws of the States, Territories and Colonies, Now and Heretofore Forming the United States of America. Washington: U.S. Government Printing Office, 1909, 2: 1911. 229.
Clark was born near Charlottesville, Virginia, on 19 November 1752, and in 1775 moved to the Kentucky territory, where he organized militiamen to defend their homes. After the war, Clark returned to Louisville, where he lived until his death on 13 February 1818. 230. Journal of the Virginia Council, 23 August 1776. 231. Revolutionary Virginia, 7: 306. 232. Revolutionary Virginia, 7: 552. 233. Edmund Pendleton to James Madison, 11 December 1780, in D. J. Mays, ed. The Letters and Papers of Edmund Pendleton, 1734-1802. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 186. 234. Henry, Patrick Henry, 3: 13-15. 235. Revolutionary Virginia, 7: 548-49. 236. Alexander Brown. The Cabells and Their Kin. Richmond: privately printed, 1896, 124. 237. Revolutionary Virginia, 7: 625, 695. 238. "Two Letters of Colonel Francis Johnson," this one dated 14 June 1776, in Pennsylvania Magazine of History and Biography, 39 : 302. 239. D. J. Mays, ed. Letters and Papers of Edmund Pendleton, 1734-1803. 2 vols. Richmond: Virginia Historical Society, 1967, 2: 223. 240. Proceedings of the Virginia Historical Society, New Series, 11 : 346. 241. Jefferson Papers, 4: 664. 242. According to Sabine, American Loyalists, II, 146-47, William Panton of Georgia was the principal agent for British arms sent to the Cherokee, telling them that "these guns were to kill Americans and that he would rather have them applied to that use than to the shooting of deer." 243. William Christian to Patrick Henry, 23 October 1776, in Henry, Patrick Henry, 3: 25-29. 244. Willie Jones, President of the North Carolina Council, Halifax, to Patrick Henry, 25 October 1776, Ibid., 3: 29-30. 245. Patrick Henry to Richard Henry Lee, 28 March 1777, in Henry, Patrick Henry, 1: 515. 246. Maud Carter Clement, History of Pittsylvania County, Virginia. Lynchburg, Virginia: Bell, 1929, 142. 247. Henry, Patrick Henry, 1: 483. 248. Resolution of Legislature, 21 December 1776, in Henry, Patrick Henry, 1: 502-04. 249. 5 Amer. Arch. 3: 1425. 250. Patrick Henry to George Washington, 29 March 1777, in Henry, Patrick Henry, 1: 516-17. 251. Virginia Gazette, 21 February 1777. 252. Patrick Henry to George Washington, 29 March 1777, in Henry, Patrick Henry, 1: 516-17. 253. Hening, Statutes at Large, 9: 275. 254. Patrick Henry to the Lieutenant of Montgomery County, 10 March 1777, in Henry, Patrick Henry, 3: 44. 255. Patrick Henry to Thomas Johnson, 31 March 1777, in Henry, Patrick Henry, 3: 51-53. 256. Stuart, "Memoir of the Indian Wars," Collection of the Virginia Historical and Philosophical Society, 1: 1. 257. McDowell to Jefferson, 20 April 1781; Moffet to Jefferson, 5 May 1781, Jefferson Papers, 5: 507, 603-04. 258. Hening, Statutes at Large, 9: 267-68. 259. Proceedings of the Virginia Historical Society, New Series, 11: 346. 260. Patrick Henry to Richard Henry Lee, 20 March 1777, in Henry, Patrick Henry, 1: 514. 261. Henry, Patrick Henry, 1: 518-19. 262. Calendar of Virginia State Papers, 2: 301, 398, 144, 173, 260, 234 & 232; M. C. Clement, History of Pittsylvania County, 170-87. 263. D. J. Mays, ed. Letters and Papers of Edmund Pendleton, 1734--1803. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 221. 264. Executive Journal, 18 August 1777, Virginia Historical Society 61; Patrick Henry to George Washington, 29 October 1777, in Henry, Patrick Henry, 1: 541-42. 265. Hening, Statutes at Large, 9: 373. 266. George Washington to Patrick Henry, 13 November 1777, in Henry, Patrick Henry, 1: 542-44. 267. Hening, Statutes at Large, 9: 445. 268.
Patrick Henry to Congress, 18 June 1777, in Henry, Patrick Henry, 3: 177. 269. John Wilson to Patrick Henry, 20 May 1778, in Henry, Patrick Henry, 3: 169-70. 270. Patrick Henry to Benjamin Harrison, 21 May 1778, in Henry, Patrick Henry, 3: 167-69. 271. Journal of the Executive Council, 28 June 1777, 30, Virginia Historical Society. 272. 1 Pa. Arch. 6: 18. 273. Journal of the Executive Council, 19 February 1778, Virginia Historical Society. 274. Executive Journal, 1778, 227, 273; Annals of Augusta County, 164, 18 April and 5 May 1778. 275. Patrick Henry to Congress, 8 July 1778; Congress to Henry, 6 August 1778, in Henry, Patrick Henry, 1: 578-79; 3: 189. 276. John Bakeless. Background to Glory. Philadelphia: Lippincott, 1957; James A. James. The Life of George Rogers Clark. Chicago: University of Chicago Press, 1928. 277. Hening, Statutes at Large, 9: 374-75. 278. Executive Journal, 2 January 1778. 279. Patrick Henry to George Rogers Clark, 2 January 1778, in Henry, Patrick Henry, 1: 588. 280. Patrick Henry to Richard Henry Lee, 19 May 1779, in Henry, Patrick Henry, 2: 30-31. 281. Executive Journal, 303. 282. Executive Journal, 305. 283. Henry, Patrick Henry, 2: 7. 284. Henry to Henry Laurens, 28 November 1778, in Henry, Patrick Henry, 2: 21-23. 285. Patrick Henry to George Washington, 13 March 1779; Arthur Campbell to Patrick Henry, 15 March 1779, in Henry, Patrick Henry, 2: 23; 3: 231. Isaac Shelby was born in Washington County, Maryland, on 11 December 1750. He became a leader of patriot militia in the Carolinas. About 1783 he moved to Kentucky and became its first governor when it was admitted to statehood. In the War of 1812 he organized a band of militia and volunteers, some 4000 strong, and defeated the British army at the Battle of the Thames on 15 October 1813. He died on 18 July 1826. Sylvia Wrobel and George Grider. Isaac Shelby: Kentucky's First Governor and Hero of Three Wars. Danville, Ky.: Cumberland Press, 1974. 286. John G. Patterson, "Ebenezer Zane, Frontiersman," West Virginia History, 12 . 287. Patrick Henry to Richard Henry Lee, 19 May 1779, in Henry, Patrick Henry, 2: 30-31; Sir George Collier to Sir Henry Clinton, 16 May 1779, in Henry Clinton. The American Rebellion. Sir Henry Clinton's Narratives of His Campaigns, 1775-1782. William B. Willcox, ed. New Haven, Ct.: Yale University Press, 1954, 406. 288. "Journal of Jean Baptiste Antoine de Verger," in Howard C. Rice, ed. The American Campaign of Rochambeau's Army. Princeton: Princeton University Press, 1957, 152. 289. In 1785 Patrick Henry, serving again as governor of Virginia, hired Lafayette to advise him on militia training and discipline. Lafayette wrote Henry on 7 June 1785, "I have been honored with your Excellency's commands . . . and find myself happy to be employed in the service of the Virginia Militia . . . . Indeed, Sir, the Virginia militia deserves to be well armed and properly attended." Henry, Patrick Henry, 3: 298-99. 290. Jefferson Papers, 6: 36. 291. Jefferson Papers, 4: 298-99. 292. Jefferson Papers, 4: 130-31. 293. Jefferson Papers, 3: 576-77. 294. Jefferson Papers, 4: 54. 295. Jefferson Papers, 4: 57. 296. Writings of Washington, 20: 45-46. 297. Edmund Pendleton to James Madison, 11 December 1780, in D. J. Mays, ed. The Letters and Papers of Edmund Pendleton, 1734-1802. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 326. 298. Benjamin F. Stevens, ed. The Campaign in Virginia, 1781: An Exact Reprint of Six Rare Pamphlets on the Clinton-Cornwallis Controversy. 2 vols.
London, 1888; Randolph G. Adams, "A View of Cornwallis's Surrender at Yorktown," American Historical Review, 37 : 25-49; William B. Willcox, "The British Road to Yorktown: A Study in Divided Command," American Historical Review, 52 : 1-35. 299. George Washington to Patrick Henry, 5 October 1776, in Henry, Patrick Henry, 3: 12-15. 300. American State Papers: Military Affairs. 7 vols. Washington: Gales and Seaton, 1832-61, 1: 14ff. 301. Sources of Our Liberties. R. Perry and J. Cooper, eds. Washington: American Bar Association, 1959, 312. 302. Richard Henry Lee in the Pendleton Papers, 473, dated 21 February 1785. 303. Quoted in Hugh B. Grigsby, History of the Virginia Convention of 1788. R. A. Brock, ed. 2 vols. Richmond: Virginia Historical Society, 1890, 1: 158-59. 304. Quoted in Grigsby, op. cit., 1: 161. 305. Grigsby, op. cit., 1: 258. 306. Calendar of Virginia State Papers, 7: 218. 307. Colonial Records of North Carolina. William L. Saunders, ed. 10 vols. Raleigh, State of North Carolina, 1886-1890, 1: 83-87. Hereinafter cited as N. C. Col. Rec. North Carolina State Records. ed. Walter Clark and William L. Saunders. (Raleigh: State of North Carolina, 1886-1905). Hereinafter cited as N. C. State Rec. 308. N. C. Col. Rec., 1: 87. 309. In 1672 Cooper was named Earl of Shaftesbury. 310. Poore, Constitutions, 2: 1388. 311. N. C. Col. Rec. 1: 31. 312. Ibid., 2: 1395-96. 313. N. C. Col. Rec., 1: 112; Poore, Constitutions, 2: 1401-02. 314. William S. Powell, ed. Ye Countie of Albemarle in Carolina. Raleigh: North Carolina Department of Archives and History, 1958, 23-24. 315. N. C. Col. Rec. 1: 239, 361, 389. 316. Fundamental Constitutions of North Carolina of 1669, in Poore, Constitutions, 2: 1396. 317. H. T. Lefler and A. R. Newsome. North Carolina. Chapel Hill: University of North Carolina Press, 1954, 600. 318. E. M. Wheeler, "Development and Organization of the North Carolina Militia," North Carolina Historical Review, 41 : 307-43. 319. John Archdale, "A New Description of that Fertile and Pleasant Province of Carolina," in A. S. Salley, Jr., ed. Narratives of Early Carolina, 1650-1708. New York: Scribner's, 1911, 277--313. 320. John Oldmixon, "History of the British Empire in America: Carolina," in Ibid., 313-74. 321. N. C. Col. Rec., 1: 541. 322. Walter Clark, "Indian Massacre and the Tuscarora War," North Carolina Booklet, 2 : 9. 323. Spotswood Letters, 1: 123. 324. "Journal of John Barnwell," Virginia Magazine of History and Biography, 5 : 391-402; also in South Carolina Historical and Genealogical Magazine, 9 : 28-54. 325. N. C. Col. Rec., 1: 877. 326. Ibid., 1: 871-75. 327. Ibid., 1: 877. 328. Ibid., 1: 886. 329. Ibid., 1: 886. 330. N. C. State Rec., 13: 29-31. 331. Ibid., 13: 23-31. 332. Ibid., 13: 30. 333. "A Short Discourse on the Present State of the Colonies in America with Respect to the Interest of Great Britain," in N. C. Col. Rec., 2: 632-33. 334. Ibid., 4: 78. 335. N. C. State Rec., 13: 244-47. 336. N. C. State Rec., 13: 330. 337. N. C. State Rec., 15: 334-37. 338. Md. Arch. 50: 534. 339. N. C. State Rec., 22: 370-72. 340. Koontz, Virginia Frontier, 169. 341. Loudoun to Cumberland, November 1756, in Pargellis, Military Affairs, 267. 342. "Some Hints for the Operations in North America for 1757," in ibid., 314. 343. N. C. Col. Rec., 4: 220-21. 344. N. C. Col. Rec., 4: 119. 345. Laws of the State of North Carolina. 2 vols. Raleigh: State of North Carolina, 1821, 1: 135. 346. N. C. State Rec., 13: 518-22. 347. Laws of North Carolina, 1: 135. 348. N. C.
State Rec., 23: 787-88, 941. 349. North Carolina Statutes, 1715-1775, 434-35. 350. Luther L. Gobbel, "The Militia in North Carolina in Colonial and Revolutionary Times," Historical Papers of the Trinity College Historical Society, 12 : 42. 351. Laws of North Carolina, 1: 125. 352. N. C. State Rec., 23: 601. 353. Wheeler, "Carolina Militia," 317-18; N. C. Col. Rec., 5: xli. 354. N. C. State Rec., 23: 597. 355. Wheeler, "Carolina Militia," 318. 356. N. C. Col. Rec., 10: 302; 4 Amer. Arch. 4: 556. 357. North Carolina Constitution of 1776, in Poore, Constitutions, 2: 1410. 358. In 1868 townships were created in the counties and these served, among their many functions, as permanent militia districts. Clarence W. Griffin, History of Old Tryon and Rutherford Counties. Asheville, NC: Miller, 1937, 139, 141-43. 359. 4 Amer. Arch. 5: 1330. 360. Poore, Constitutions, 2: 1409. 361. 4 Amer. Arch. 5: 1337-38. 362. 4 Amer. Arch. 5: 1326. 363. Robert Gardner. Small Arms Makers. New York: Crown, 1963, 141-41, 212. 364. Eric Robson, "The Expedition to the Southern Colonies, 1775-1776," English Historical Review, 116 : 535-60. 365. N. C. State Rec., 10: xiii. 366. Hugh F. Rankin, "The Moore's Creek Bridge Campaign," North Carolina Historical Review, 30 : 23-60. 367. N. C. State Rec., 10: xiii. 368. North Carolina Constitution of 1778, in Poore, Constitutions, 2: 1623-27. 369. Marquis Charles Cornwallis, eldest son of the First Earl Cornwallis, inherited his father's title in 1762. He was a graduate of Eton, an officer in the Seven Years War, and an active Whig in the House of Lords, where he opposed the Declaratory Act of 1766. He was second in command in America to Sir Henry Clinton and served with distinction. He subdued New Jersey in 1776 and defeated the patriots at Brandywine, occupying Philadelphia in 1777. He urged aggressive action in the southern states early in the war, but his plan received no support until 1780. After the Revolution, in 1786, he was transferred to India where he laid the foundations for the British administrative system. He checked the uprising of Tippu Sultan, reformed the land and revenue systems and introduced a humane legal code and reformed court system. In 1792 he was made a marquess, returned to England in 1793, and was made a member of the cabinet in 1795. He worked to pass the Act of Union, unifying the Irish and English parliaments. After George III objected to emancipation of Roman Catholics, he resigned from the cabinet in protest. Appointed Governor-general of India in 1805, he died on 5 October of that year. Frank and Mary Wickwire. Cornwallis: The American Adventure. 2 vols. Boston: Houghton-Mifflin, 1970-80; Mary and F. B. Wickwire. Cornwallis and the War of Independence. London: Faber and Faber, 1971. 370. Ward, War of the Revolution, 2: 722-30. 371. Smith, Loyalists and Redcoats, 145-47. 372. N. C. Rec., 14: 614-15, 647, 655, 774, 786; 19: 958. 373. Henry, Patrick Henry, 2: 65. The reference to Deckard rifles is interesting. Jacob Dickert (1740-1822) was born in Germany, emigrated to America in 1748, and settled in Lancaster County, Pennsylvania, after living briefly in Berks County, Pa. He operated a large gunshop in Lancaster, where he was an important figure in the development of the uniquely American product, the Pennsylvania long rifle, also commonly called the "Kentucky rifle." Stacy B. C. Wood, Jr. and James B. Whisker. Arms Makers of Lancaster County, Pennsylvania. Bedford, PA: Old Bedford Village Press, 1991, 14-15.
We find another, later reference to Dickert's products by name in an advertisement of merchant Robert Barr for "Dechard rifle guns." Kentucky Gazette, 1 September 1787. 374. Quoted in Henry, Patrick Henry, 2: 64. 375. North Callahan. Royal Raiders: The Tories of the American Revolution. Indianapolis: Bobbs-Merrill, 1963, ch. 10. 376. Lyman C. Draper. King's Mountain. Cincinnati: Thompson, 1881, 314. 377. Nathanael Greene was born in Rhode Island, served as a deputy in the Rhode Island Assembly (1770-72, 1775), and was appointed a brigadier-general in May 1775 to lead three Rhode Island regiments. After serving at the siege of Boston and as commander of the American occupation army, he was promoted on 9 August 1776 to major-general. He supported George Washington at Trenton in December 1776 and at Germantown and spent the winter of 1777-78 at Valley Forge. He served as quartermaster-general and was present at the battles of Monmouth and Newport. In 1780 he chaired the court martial which condemned Major André in the Benedict Arnold plot. After relieving Horatio Gates, he led the southern army to a string of effective delaying actions and victories and many credit the ultimate defeat of Lord Cornwallis' army to his leadership. He died on 19 June 1786 near Savannah, Georgia. Papers of Greene. 378. Horatio Gates received much credit for the American victory over General John Burgoyne's army at Saratoga, although he spent most of his time at the critical juncture in the battle debating the merits of the American Revolution with a captured British officer while Benedict Arnold led the men to victory. He was born in England, served in the Seven Years War, retired on half-pay, and in 1772 purchased an estate in Virginia. In 1775 Congress appointed him adjutant-general and in 1776 promoted him to major-general. In 1777 he was president of the board of war. The Conway Cabal, led in Congress by Thomas Conway, sought to replace George Washington with Gates, but failed. In 1780, following his disastrous loss to Lord Cornwallis at the Battle of Camden, Congress replaced him and Gates retired to his plantation. Activated in 1782 at Newburgh, New York, he retired again in 1783. He moved to Manhattan where he died on 10 April 1806. Max M. Mintz, Generals of Saratoga: John Burgoyne and Horatio Gates. New Haven: Yale University Press, 1990; Paul D. Nelson. Horatio Gates. Baton Rouge, La.: Louisiana State University Press, 1976. 379. Nathanael Greene to Thomas Jefferson, 10 February 1781, Calendar of [Virginia] State Papers, 1: 504. 380. Don Higginbotham. Daniel Morgan: Revolutionary Rifleman. New York, 1961. Morgan was later part of Washington's force that put down the Whiskey Rebellion. He also served in the U. S. House of Representatives, 1797-99. 381. See Banastre Tarleton. A History of the Campaigns of 1780 and 1781 in the Southern Provinces of North America. London: Cadell, 1787. 382. Hugh F. Rankin, "Cowpens: Prelude to Yorktown," North Carolina Historical Review, 31 : 336-69. 383. Ward, War of the Revolution, 2: 755-62. 384. Robert C. Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly, 3d series, 14 : 164-65; Hugh F. Rankin, "Cowpens: Prelude to Yorktown," North Carolina Historical Review, 31 : 336-69. 385. Tarleton did raid into Virginia and on 4 June 1781 nearly captured Thomas Jefferson, then governor of Virginia, and some members of the state legislature. 386. Ward, War of the Revolution, 2: 783-96. 387. Hugh F. Rankin. Francis Marion: The Swamp Fox.
New York: Crowell, 1973. 388. William G. Simms. The Life of Francis Marion. New York: Appleton, 1845, 126ff. 389. Robert O. Demond. The Loyalists in North Carolina During the Revolution. Durham: North Carolina State University Press, 1940. 390. Rankin, Francis Marion. 391. Paul H. Smith. Loyalists and Redcoats. Chapel Hill: University of North Carolina Press, 1964, 152-53. 392. Francis Vinton Greene. General Greene. New York: Scribner's, 1914. 393. Robert C. Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly, 3d series, 14 : 160. 394. George W. Kyte, "Strategic Blunder: Lord Cornwallis Abandons the Carolinas, 1781," The Historian, 22 : 129-44; William B. Willcox, "The British Road to Yorktown: A Study in Divided Command," American Historical Review, 52 : 1-35. See also Willcox's "British Strategy in America," Journal of Modern History, 19 : 97-121. 395. John Tate Lanning, ed. The St. Augustine Expedition of 1740. Columbia: State of South Carolina, 1954, 4; A. S. Salley, Jr., ed. Journal of the Grand Council of South Carolina, August 25, 1671, to June 24, 1680. Columbia: State of South Carolina, 1907, 21. 396. Cacique is Spanish for Amerindian chief and was a term applied to land barons in the Carolinas who owned 24,000 or more acres of land. Along with landgraves and lords of the manor, caciques constituted the medieval-style landed seignory in these colonies. 397. Osgood, American Colonies, 2: 373. 398. David Cole, "A Brief Outline of the South Carolina Colonial Militia System," Proceedings, South Carolina Historical Association, 24 : 14-23. 399. Poore, Constitutions, 2: 1388. 400. Ibid., 2: 1395-96. 401. The Statutes at Large of South Carolina. Thomas Cooper and David McCord, eds. Columbia, S.C.: State of South Carolina, 1836-41, 1: 48-49. 402. Journal of the Grand Council of South Carolina, 1671-1680. A. S. Salley, ed. Columbia, S.C.: State of South Carolina, 1907, 10-11, 42. 403. Calendar of State Papers: Colonial America and West Indies. 11: 540. Hereinafter cited as C.S.P. 404. Cole, "Brief Outline," 16. 405. Edward McCrady. The History of South Carolina under the Proprietary Government. New York, 1897, 477. 406. Act . . . for the Defence of the Government, No. 30 of 15 October 1686, Statutes at Large, 1: 15-18. 407. Act 33 of 22 January 1686; Act 52, 1690, Statutes at Large, 2: 20-21, 42-43. 408. Act 162 of 8 October 1698, Statutes at Large, 1: 7-12. 409. A. S. Salley, Jr., ed. Records in the British Public Record Office Relating to South Carolina, 1685 to 1690. Atlanta: State of South Carolina, 1929, 87. 410. Lanning, St. Augustine Expedition, 9; Statutes at Large, 2: 15. 411. The primary source of information on militia slave patrols comes from H. M. Henry, Police Control of the Slave in South Carolina. Lynchburg, Va.: Emory, 1914. Professor Henry gave the date of 1686 as the year of the first deployment of militia slave patrols, but this is strongly disputed in Cole, "Brief Outline," 21. 412. South Carolina Statutes at Large, 7: 346. 413. Act 49 of 1690. 414. Laws of Governor Archdale, 1-8, in Statutes at Large of South Carolina. 415. An Act for . . . Maintaining of a Watch on Sullivan's Island, No. 51 of 22 December 1690, Statutes at Large, 2: 40-42. 416. An Act for Settling a Watch in Charlestown and for Preventing Fires, 1698, in Kavenagh, Colonial America, 3: 2389-90. 417. Thomas Nairn. A Letter from South Carolina, Giving an Account of the Soil, Air, Products, Trade, Government, Laws, Religion, People, Military Strength . . .
of that Province. London, 1718, 28-29. 418. Journals of the Commons House of Assembly of the Province of South Carolina. Hereinafter J. C. H. A., 3: 35. 419. Statutes at Large of South Carolina, 1: 29. 420. Instructions to Francis Nicholson, Royal Governor of South Carolina, 30 August 1720, in Kavenagh, Colonial America, 3: 1975. 421. South Carolina Statutes at Large, 7: 33. 422. David J. McCord. The Statutes at Large of South Carolina. 10 vols. Columbia, S. C.: State of South Carolina, 1836-41, 8: 617-24. 423. Statutes at Large, 2: 33. 424. South Carolina Statutes at Large, 7: 347-49. 425. South Carolina Statutes at Large, 3: 108-11. 426. Colonial Records of South Carolina: Journal of the Commons House of Assembly. edited by J. H. Easterby and others. Columbia, S.C.: State of South Carolina, 1951--. 11 vols. to date. 1: 228, dated 20 August 1702. 427. South Carolina Statutes at Large, 7: 33. 428. South Carolina Statutes at Large, 7: 347-49. 429. South Carolina Statutes at Large, 7: 349-51. 430. An Act for the Encouragement and Killing and Destroying Beasts of Prey, No. 128 of 16 March 1696; No. 211 of 8 May 1703, South Carolina Statutes at Large, 2: 108-10, 215-16. 431. "The settlers who held Charleston against the allied forces of France and Spain were partners in the glory of Stanhope and Marlborough, heirs to the glory of Drake and Raleigh." John H. Doyle, English Colonies in America. 5 volumes. New York: Holt, 1882, 1: 369. 432. Statutes at Large, 8: 625-31. 433. Orders of Lords Proprietors to Governors of the Carolinas in N. C. Col. Rec., 1: 877, 886. 434. Acts 237 of 1704 and 418 and 419 of 1719, Statutes at Large, 2: 347-49; 3: 108-11. 435. Joseph P. Barnwell, "Second Tuscarora Expedition," South Carolina Historical and Genealogical Magazine, 10 : 33-48. 436. J. C. H. A., 11: 5-27. 437. J. C. H. A., 7: 456. 438. J. C. H. A., 3: 552. 439. Trott, Laws of South Carolina, 480. 440. Statutes at Large of South Carolina, 7: 353. 441. Trott, Laws of South Carolina, 217-18. 442. C. S. P., 15: 1407, 1412; 16: 5. 443. An Act to Impower . . . Council to Carry on and Prosecute the War Against our Indian Enemies and their Confederates, Act No. 351 of 10 May 1715, South Carolina Statutes at Large, 2: 624-26. 444. Verner W. Crane. The Southern Frontier, 1670-1732. University of North Carolina Press, 1929, 178. 445. British Public Records Office, Records Relating to South Carolina. London: H.M. Stationery Office, 1889--, 8: 67. 446. Cole, "Brief Outline," 19. 447. N. C. Col. Rec., 2: 178. 448. London Transcripts in Public Records of South Carolina, 7: 7. 449. Statutes at Large, 8: 631. 450. Cooper, South Carolina Statutes, 3: 108-10. 451. Act 408 of 12 February 1719, South Carolina Statutes at Large, 2: 100-02. 452. David A. Cole, "The Organization and Administration of the South Carolina Militia, 1670-1783," Ph. D. dissertation, University of South Carolina, 1953; David Cole. "A Brief Outline of the South Carolina Militia System," Proceedings, South Carolina Historical Association, 24 : 14-23; Michael Stauffer. South Carolina's Antebellum Militia. Columbia, S. C.: South Carolina Department of Archives and History, 1991; Jean Martin Flynn. The Militia in Antebellum South Carolina Society. Spartanburg, S. C.: Reprint Company, 1991; Journal of the Grand Council of South Carolina, 1671-1680. edited by A. S. Salley. Columbia, S. C.: State of South Carolina, 1907; Benjamin Elliott. The Militia System of South Carolina.
Charleston: Miller, 1835; Fitzhugh McMaster. Soldiers and Uniforms: South Carolina Military Affairs, 1670-1775. Columbia, S. C.: University of South Carolina Press, 1972. 453. Quoted in Warren B. Smith. White Servitude in Colonial South Carolina. Columbia: University of South Carolina Press, 1961, 29. 454. J. C. H. A., 5: 153. 455. J. C. H. A., 5: 158, 457. 456. South Carolina Statutes at Large, 3: 39. 457. South Carolina Statutes at Large, 2: 324. 458. South Carolina Statutes at Large, 2: 636-37. 459. South Carolina Statutes at Large, 2: 347-53; 3: 33. 460. Statutes of South Carolina, 3: 109-10. 461. Colonial Records of South Carolina, James H. Easterby, ed. Columbia: State of South Carolina, 1951-, 7: 225, 233-34, 238-39; 8: 66; 9: 67-68. 462. Colonial Records of S. C., 7: 233-39. 463. Shy, Toward Lexington, 11; Calendar of State Papers: America and West Indies, 29 January 1720, no. 531. 464. South Carolina Statutes at Large, 9: 254-55. 465. Bills for slave patrols were considered through 1740. S. C. Col. Rec. 1: 202, 334, 351-53, 392, 398, 424, 427, 507-08, 509, 511-12, 515, 552, 562. Two bills were considered relative to the slave patrols. Bill number 22 was passed on 25 March 1738 and bill number 64 was enacted on 3 April 1739. 466. George Edward Frakes. Laboratory for Liberty: The South Carolina Legislative System, 1719-1776. Lexington, Ky.: University of Kentucky Press, 1970, 43-46; William James Rivers. A Chapter in the Early History of South Carolina. Charleston, 1874, 477. 467. South Carolina Statutes at Large, 8: 631-41. 468. N. C. Col. Rec. 2: 256. 469. South Carolina Statutes at Large, 3: 272-73. 470. J. C. H. A., 1: 233. 471. J. C. H. A., 7: 376. 472. J. C. H. A., 14: 166. 473. Lawrence Lee. The Lower Cape Fear in Colonial Days. Chapel Hill: University of North Carolina Press, 96-99. 474. Charleston Gazette, 25 April 1728. 475. S. C. Col. Rec. 3: 83. 476. S. C. Col. Rec. 3: 83. 477. A Journal of the Proceedings in Georgia, Beginning October 20, 1737. William Stephens' diary. London: Meadows, 1742, 2: 128ff. 478. Edward McCrady, History of South Carolina Under the Proprietary Government, 1670-1719. New York: Macmillan, 1897, 151. 479. Colonial Records of South Carolina, Series 1, 2: 25. 480. Colonial Records of S. C., 14: 243. 481. Colonial Records of S. C., 18: 89. 482. South Carolina Statutes at Large, 3: 330. 483. Act 574 of 9 April 1734, South Carolina Statutes at Large, 3: 395-99. 484. South Carolina Gazette, 15 June 1734; South Carolina Statutes at Large, 8: 641. 485. South Carolina Gazette, 22 June 1734. 486. South Carolina Gazette, 19 April 1735. 487. Boston News Letter, 13 January 1737. 488. An Act for Regulating the Guard at Johnson's Fort and for Keeping Good Order in the Several Forts and Garrisons, Act 621 of 5 March 1737, South Carolina Statutes at Large, 2: 465-67. 489. Colonial Records of S. C., 1: 429. 490. Proceedings in Georgia, 2: 128. 491. The account of the Stoenoe [or Stono] Revolution follows Peter H. Wood, "Black Resistance: The Stono Uprising and Its Consequences," in James K. Martin. Interpreting Colonial America. 2d ed. New York: Harper and Row, 1978, 162-75. 492. South Carolina Statutes at Large, 3: 568-73. 493. Colonial Records of S. C., 1: 674. 494. South Carolina Statutes at Large, 8: 641-44. 495. South Carolina Gazette, 8 January 1741. 496. Humphrey Bland. Treatise of Military Discipline. London: Millar, 1727. Bland's book, in the unabridged London edition, was not advertised in the South Carolina Gazette until 12 February 1756. 497.
Colonial Records of South Carolina: Journal of the Commons House of Assembly. edited by J. H. Easterby and others. Columbia, S.C.: State of South Carolina, 1951-. 11 volumes to date, 2: 227-28, 237-38, 240-47, 250-52, 257, 302-06, 397, 402. 498. Journals of the House of the Assembly, 2: 179, 190. 499. Journals of the House of the Assembly, 2: 164-65, 172-73. 500. Acts of the South Carolina Legislature, 1733-1739, unpaged manuscript. 501. Colonial Records of South Carolina, 2: 175-78, 195, 309. 502. Benjamin Quarles, "Colonial Militia and Negro Manpower," Mississippi Valley Historical Review, 45 [1958-59]: 643-52. 503. Act of 11 December 1740, S. C. Col. Rec., 2: 420. 504. Journal of the House of the Assembly, 2: 265, 273-75, 278-79, 288-90, 294-95, 302, 309-10. 505. Journal of the House of the Assembly, 2: 357. 506. William Bull II served as lieutenant-governor from 1760 to 1761 and again from 1764 to 1768. 507. Journal of the House of the Assembly, 2: 364-67, 369, 381. 508. Ibid., 2: 161; 3: 78-247. 509. Colonial Records of S. C., 1: 321, 333; Acts of the Legislature, nos. 48 and 50. 510. Colonial Records of S. C., 3: 572. 511. Act for the Immediate Relief of the Colony of Georgia, Act 695 of 10 July 1742, South Carolina Statutes at Large, 3: 595-97. 512. Act of May 7, 1743, South Carolina Statutes at Large, 7: 417. 513. South Carolina Statutes at Large, 2: 755. 514. Pennsylvania Gazette, 5 April 1744. 515. William Roy Smith. South Carolina as a Royal Province, 89; Frakes, op. cit., 78. 516. Colonial Records of S. C., 22: 115; 27: 369-70. 517. South Carolina Gazette, 28 September 1747; South Carolina Statutes at Large, 9: 645-63. 518. Oliver Morton Dickerson. American Colonial Government, 1696-1765, 361-62; Frakes, op. cit., 82. 519. Edmund Atkin. Indians of the Southern Frontier, xxvii-xxviii, 4; Frakes, op. cit., 87-90. 520. Peckham, The Colonial Wars, 1689-1762, 201-04; Frakes, 94-97. 521. McCrady, Royal Government, 623, 635-52. 522. William A. Schaper, "Sectionalism and Representation in South Carolina," Report of the American Historical Society, 1 : 333. 523. An Act for the Better Ordering and Governing Negroes and other Slaves in this Province, Act 790 of 17 May 1751, South Carolina Statutes at Large, 3: 420. 524. Precisely what constituted lunacy was hard to define and the legislature made little provision for defining it beyond including such anti-social behavior as acts of gross insubordination, theft, arson, running away, conspiracy and poisoning masters or other slaves. Poorer masters could be compensated for the loss of slaves incarcerated by slave patrols or law enforcement officers, and the colony was charged with the costs of deporting, executing or confining lunatic slaves. There was no thought of rehabilitation or counseling. The act also covered at length the prevention of poisoning of masters and the teaching of slaves the art of administering poison. 525. Public Records of S. C., 27: 192, 369-70. 526. "Some Hints for the Operations in North America for 1757," in Pargellis, Military Affairs, 314. 527. Loudoun to Cumberland, 17 October 1757, in Pargellis, Military Affairs, 407. 528. Daniel Pepper to Governor Lyttleton, 30 November 1756, William McDowell, ed. Colonial Records of South Carolina: Documents Relating to Indian Affairs, 1754-1765. Columbia, S. C.: Department of Archives and History, 1970, 295-97. 529. Cole, "Brief Outline," 18-19. 530. South Carolina Statutes at Large, 8: 664-66. 531. South Carolina Statutes at Large, 4: 128. 532.
Colonial Records of S. C., 32: 388, 395. 533. Cole, "Brief Outline," 18-19. 534. Letter of a gentleman from Charles-Town, South Carolina, to his friend in London, 10 May 1775, London Gazetteer and New Daily Advertiser, 5 July 1775. Split-shirts was a term that was interchangeable with Shirtmen, backwoods militia usually armed with rifles and expert in their use. 535. 4 Amer. Arch. 5: 578. 536. 4 Amer. Arch. 5: 581. 537. Frances R. Kepner, ed. "A British View of the Siege of Charleston, 1776," Journal of Southern History, 11 : 93-103. 538. Willie Jones, president of the North Carolina Council, to Virginia Governor Patrick Henry, in Henry, Patrick Henry, 3: 30-31. 539. South Carolina Constitution of 1776, in Poore, Constitutions, 2: 1616-19. 540. South Carolina Statutes at Large, 8: 666-82. 541. N. C. Col. Rec. 14: xi. Lincoln was later exchanged and served as Secretary of War. He also commanded the Massachusetts militia force that suppressed Shays' Rebellion in 1787. See also Ella P. Levett, "Loyalism in Charleston, 1761-1784," Proceedings of the South Carolina Historical Association, : 3-17. 542. George W. Kyte, "The British Invasion of South Carolina in 1780," The Historian, 14 : 149-72. 543. William T. Bulger. "The British Expedition to Charleston, 1779-1780," Ph. D. dissertation, University of Michigan, 1957. 544. Henry, Patrick Henry, 2: 7, 21-23. 545. Howard Lee Landers. The Battle of Camden, South Carolina. Washington, 1929. 546. One of the principal apologists for Gates, and harshest critics of the militia, is Samuel White Patterson. Horatio Gates, Defender of American Liberties. New York, 1941. See especially pages 320-21 in which Patterson blames the loss at Camden wholly on the cowardice of the militia. 547. William Moultrie. Memoirs of the American Revolution. 2 vols. New York, 1802, 2: 245. 548. Henry Lee. The Campaign of 1781 in the Carolinas. Chicago: Quadrangle, 1824; Henry Lee. Memoirs of the War in the Southern Department of the United States. New York: University Publishing, 1870. 549. Daniel Morgan quoted in James Graham. Life of Daniel Morgan of the Virginia Line of the Army of the United States. New York: Derby & Jackson, 1856, 370. 550. Greene to Sumter, 18 March 1781. See also Greene to John Mathews, 16 March 1781, in Greene Papers, Clements Library. 551. Edward Stevens to Thomas Jefferson, dated 8 February 1781, Jefferson Papers, 4: 561-64. 552. Henry Lee's Reply to Patrick Henry, June 1788, in Bernard Bailyn, ed. The Debate on the Constitution. 2 vols. New York: Library of America, 1993, 2: 638. 553. N. C. State Rec., 15: 451-52, 543. 554. Leslie H. Fishel. The Negro American: A Documentary History. Chicago: Scott, Foresman, 1967, 49-52; David D. Wallace. Life of Henry Laurens, with a Sketch of the Life of Lieutenant Colonel John Laurens. New York, 1915, 259-450. 555. Osgood, American Colonies, 2: 373. 556. Colonial Records of South Carolina, 2: 175-78, 195, 309. 557. Poore, Constitutions, 1: 371-77. 558. Allen D. Candler, ed. Colonial Records of the State of Georgia. 26 vols. (Atlanta: State of Georgia, 1904-16), 19: part 1, 324-29. 559. Quarles, "Colonial Militia and Negro Manpower," 651. 560. Journal of the Proceedings in Georgia, 2: 128ff. 561. David Cole, "A Brief Outline of the South Carolina Colonial Militia System," Proceedings, South Carolina Historical Association, 24 : 14-23. 562. Calendar of State Papers: Colonial, 1719-20, No. 531, 29 January 1720; to the Board of Trade, Public Records Office 30/47, Egremont mss, 25 May 1738, 14: 55-56. 563.
Colonial Records of S. C., 2: 353, 357, 364-67, 369, 381; 3: 78-247. 564. Instructions to John Reynolds, 6 August 1754, Kavenagh, Colonial America, 3: 2053. 565. Robert Gardner. Small Arms Makers. New York: Crown, 1963, 178. 566. Kenneth Coleman, The American Revolution in Georgia. Athens: University of Georgia Press, 1958; Wilbur W. Abbott, The Royal Governors of Georgia, 1754-1775. Chapel Hill: University of North Carolina Press, 1959. 567. Smith, Loyalists and Redcoats, 192-96; Kenneth Coleman. The American Revolution in Georgia, 1763-1789. Athens: University of Georgia Press, 1958, 51-53. 568. Revolutionary Records of Georgia, 1: 273. 569. Ibid., 1: 85. 570. Collections of Georgia Historical Society, 8:20-21; Coleman, Revolution in Georgia, 65-66. 571. George White. Statistics of the State of Georgia. Savannah: Williams, 1849, 63-64. 572. Revolutionary Records of Georgia 1: 97. 573. Ibid., 1: 82-83. 574. Ibid., 1: 141. 575. Ibid., 2: 206, 221. 576. Ibid., 2: 291. 577. Ibid., 2: 291. 578. Ibid., 2: 103. 579. Ibid., 2: 254. 580. Ibid., 2: 277. 581. Ibid., 2: 293. 582. Ibid., 2: 103. 583. Collections of the New York Historical Society, : 246. 584. Georgia Constitution of 1777 in Poore, Constitutions, 1: 381-82. 585. Ibid., 1: 184. 586. Ibid., 1: 100. 587. Ibid., 1: 136-37. 588. Ibid., 2: 317. 589. Ibid., 2: 312. 590. Ibid., 1: 306. 591. Ibid., 2: 104-05. 592. Revolutionary Records of Georgia, 2: 87. 25 August 1778, "all vacancies of officers in the Militia of the state shall be forthwith be filled up by new elections and that from time to time as fast as elections happen a report [is] to be made out to the Governor." 593. Ibid., 1: 97. 594. Ibid., 2: 154. 595. Charles Stedman, The History of the Origin, Progress and Termination of the American War. 2 vols. London: printed for the author, 1794, 2: 103-20; David Ramsay, The History of the American Revolution. 2 vols. Philadelphia: Aitkin, 1789, 2: 420-31; Kenneth Coleman, The American Revolution in Georgia, 1763-1789. Athens: University of Georgia Press, 1958. 596. Charles Olmstead, "The Battles of Kettle Creek and Brier Creek," Georgia Historical Quarterly, 10 : 85-125.
http://constitution.org/jw/acm_5-m.htm
13
210
Math Focal Points: Grade 8 - Introduction

With the goal of highlighting “the mathematical content that a student needs to understand deeply and thoroughly for future mathematics learning,” the National Council of Teachers of Mathematics has developed Curriculum Focal Points for Prekindergarten Through Grade 8 Mathematics. A “focal point” is an area of emphasis within a complete curriculum, a “cluster of related knowledge, skills, and concepts.” This is the fourth and last in a Middle School Portal series of publications that highlight the focal points by grade level. Others in the series are Math Focal Points: Grade 5, Math Focal Points: Grade 6, and Math Focal Points: Grade 7. This publication offers resources that directly support the teaching of the three areas highlighted for eighth grade: linear functions and equations; geometry of plane figures and solids; and analyzing data sets. (For a complete statement of the NCTM Curriculum Focal Points for grade 8, please see below.) NCTM recommends that students in grade 8 analyze linear functions, translating among their verbal, tabular, graphical, and algebraic representations. They should also solve linear equations and systems of linear equations in two variables as they apply them to analyze mathematical situations and solve problems. In our section titled Linear Functions and Equations, we offer tutorials, games, carefully crafted lessons, and online simulations that provide varied approaches to these algebraic concepts. You will also find opportunity for the practice needed for understanding. Eighth-graders are expected to use fundamental facts of distance and angle to analyze two- and three-dimensional space and figures. NCTM recommends that they develop their reasoning about such concepts as parallel lines, similar triangles, and the Pythagorean theorem, both explaining the concepts and applying them to solve problems. In the section titled Geometry: Plane Figures and Solids, we feature visual, interactive experiences in which your students can work with concepts of angle, parallel lines, similar triangles, the Pythagorean theorem, and solids. You will find games as well as lessons and challenging problems. In grade 8, the emphasis is on understanding descriptive statistics; in particular, mean, median, and range. Students organize, compare, and display data as a way to answer significant questions. In the Analyzing Data Sets section, you will find tutorials, lesson ideas, problems, and applets for teaching these topics, and even full projects that involve worldwide data collection and analysis. In Background Information for Teachers, we identify professional resources to support you in teaching the materials targeted in the focal points for grade 8. In NCTM Standards, we relate the curriculum focal points to Principles and Standards for School Mathematics. NCTM Curriculum Focal Points for Grade 8: Algebra: Analyzing and representing linear functions and solving linear equations and systems of linear equations. Students use linear functions, linear equations, and systems of linear equations to represent, analyze, and solve a variety of problems. They recognize a proportion (y/x = k, or y = kx) as a special case of a linear equation of the form y = mx + b, understanding that the constant of proportionality (k) is the slope and the resulting graph is a line through the origin. 
Students understand that the slope (m) of a line is a constant rate of change, so if the input, or x-coordinate, changes by a specific amount, a, the output, or y-coordinate, changes by the amount ma. Students translate among verbal, tabular, graphical, and algebraic representations of functions (recognizing that tabular and graphical representations are usually only partial representations), and they describe how such aspects of a function as slope and y-intercept appear in different representations. Students solve systems of two linear equations in two variables and relate the systems to pairs of lines that intersect, are parallel, or are the same line, in the plane. Students use linear equations, systems of linear equations, linear functions, and their understanding of the slope of a line to analyze situations and solve problems. Geometry and Measurement: Analyzing two- and three-dimensional space and figures by using distance and angle. Students use fundamental facts about distance and angles to describe and analyze figures and situations in two- and three-dimensional space and to solve problems, including those with multiple steps. They prove that particular configurations of lines give rise to similar triangles because of the congruent angles created when a transversal cuts parallel lines. Students apply this reasoning about similar triangles to solve a variety of problems, including those that ask them to find heights and distances. They use facts about the angles that are created when a transversal cuts parallel lines to explain why the sum of the measures of the angles in a triangle is 180 degrees, and they apply this fact about triangles to find unknown measures of angles. Students explain why the Pythagorean theorem is valid by using a variety of methods—for example, by decomposing a square in two different ways. They apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and polyhedra. Data Analysis and Number and Operations and Algebra: Analyzing and summarizing data sets. Students use descriptive statistics, including mean, median, and range, to summarize and compare data sets, and they organize and display data to pose and answer questions. They compare the information provided by the mean and the median and investigate the different effects that changes in data values have on these measures of center. They understand that a measure of center alone does not thoroughly describe a data set because very different data sets can share the same measure of center. Students select the mean or the median as the appropriate measure of center for a given purpose. Background Information for Teachers If looking to refresh your mathematical content knowledge, or simply to find a new approach to teaching the material targeted in the Grade 8 Focal Points, you will find these professional resources valuable. Learning Math: Patterns, functions, and algebra In this online course designed for elementary and middle school teachers, each of ten sessions centers on a topic, such as understanding linearity and proportional reasoning or exploring algebraic structure. The teacher-friendly design includes video, problem-solving activities, and case studies that show you how to apply what you have learned in your own classroom. Linear functions and slope In one session from the online workshop described above, teachers gather to explore linear relationships--as expressed in patterns, tables, equations, and graphs. 
Video segments, interactive practice, problem sets, and discussion questions guide participants as they consider such concepts as slope and function. Similarity Explore scale drawing, similar triangles, and trigonometry in terms of ratios and proportion in this series of lessons developed for teachers. Besides explanations and real-world problems, the unit includes video segments that show teachers investigating problems of similarity. To understand the ratios that underlie trigonometry, you can use an interactive activity provided online. Indirect Measurement and Trigonometry For practical experience in the use of trigonometry, look at these examples of measuring impossible distances and inaccessible heights. These lessons show proportional reasoning in action! Pythagorean Theorem A collection of 76 proofs of the theorem! From the diverse approaches used by Euclid, Da Vinci, President Garfield, and many others, these proofs are clearly and colorfully illustrated, often accompanied by an interactive Java illustration to further clarify the brief explanations. Incredible as it sounds, this page is far from boring. Variation about the mean Just what do we mean by “the mean”? This workshop session, developed for K-8 teachers, explores this statistic in depth. Participants work together to investigate the mean as the "balancing point" of a data set and come to understand how to measure variation from the mean. Gallery of Data Visualization: The Best and Worst of Statistical Graphics This site offers graphical images that represent data from a range of sources (historical events, spread of disease, distribution of resources). The author contrasts the best and worst graphics by showing how some images communicate data clearly and truthfully, while others misrepresent, lie, or totally fail to "say something." If you are looking for innovative representations of data or examples of misrepresentation, you will find this resource helpful. Linear Functions and Equations Students use linear functions, linear equations, and systems of linear equations to represent, analyze, and solve a variety of problems. They recognize a proportion (y/x = k, or y = kx) as a special case of a linear equation of the form y = mx + b, understanding that the constant of proportionality (k) is the slope and the resulting graph is a line through the origin. Students understand that the slope (m) of a line is a constant rate of change, so if the input, or x-coordinate, changes by a specific amount, a, the output, or y-coordinate, changes by the amount ma. Students translate among verbal, tabular, graphical, and algebraic representations of functions (recognizing that tabular and graphical representations are usually only partial representations), and they describe how such aspects of a function as slope and y-intercept appear in different representations. Students solve systems of two linear equations in two variables and relate the systems to pairs of lines that intersect, are parallel, or are the same line, in the plane. Students use linear equations, systems of linear equations, linear functions, and their understanding of the slope of a line to analyze situations and solve problems (NCTM, 2006, p. 20). These resources offer a variety of ways to learn the material targeted in this Focal Point: tutorials, games, carefully crafted lessons, and online simulations. Your middle school students will also find plenty of opportunity for practice in real-world as well as imaginative scenarios. 
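To make the rate-of-change statement above concrete, here is a small worked case (the numbers are invented for illustration). Take m = 3 and b = 2, so y = 3x + 2. At x = 4, y = 3(4) + 2 = 14; at x = 4 + a, y = 3(4 + a) + 2 = 14 + 3a. The output changes by exactly ma = 3a no matter which starting x is chosen, which is precisely what it means for the slope to be a constant rate of change.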
Lines and Slope At this site, students learn to draw a line and find its slope. Joan, a cartoon chameleon, is used throughout the tutorial to demonstrate the idea of slope visually. Background information on solving equations and graphing points is laid out clearly, followed by a step-by-step explanation of how to calculate slope using the formula. Finally, the slope-intercept form (y = mx + b) is carefully set out. Walk the Plank Students place one end of a wooden board on a bathroom scale and the other end on a textbook, then "walk the plank." They record the weight measurement as their distance from the scale changes and encounter unexpected results: a linear relationship between the weight and distance. Possibly most important, the investigation leads to a real-world example of negative slope. An activity sheet, discussion questions, and extensions of the lesson are included. Writing Equations of Lines This lesson uses interactive graphs to help students deepen their understanding of slope and extend the definition of slope to writing the equation of lines. Online worksheets with immediate feedback are provided to help students learn to read, graph, and write equations using the slope intercept formula. Linear Function Machine The functions produced by this machine are special because they all graph as straight lines and can be expressed in the form y = mx + b. In this activity, students input numbers into the machine and try to determine the slope and y-intercept of the line. Algebra: Linear Relationships Seven activities focus on generalizing from patterns to linear functions. Designed for use by mentors in after-school programs or other informal settings, these instructional materials have students work with number patterns, the function machine, graphs, and variables in realistic situations. Excellent handouts included. Explorelearning.com The following three resources come from this subscription site; a free 30-day trial is available. Experiment with the online simulations, particularly selected for their use in teaching equations of a line. Subscriptions include inquiry-based lessons, assessment, and reporting tools. - Slope calculation - Examine the graph of two points in the plane. Find the slope of the line that passes through the two points. Drag the points and investigate the changes to the slope and to the coordinates of the points. - Point-slope form of a line - Compare the point-slope form of a linear equation to its graph. Vary the coefficients and explore how the graph changes in response. - Slope-intercept form - Compare the slope-intercept form of a linear equation to its graph. Find the slope of the line using a right triangle on the graph. Vary the coefficients and explore how the graph changes in response. Slope slider What difference does it make to the graph of a function if you change the slope or the y-intercept? Students can see the changes in the equation itself and in its graph as they vary both slope and y-intercept. Excellent visual! The activity could be used for class or small group work, depending on computer access. Grapher : algebra (grades 6-8) Using this online manipulative, students can graph one to three functions on the same window, trace the function paths to see coordinates, and zoom in on a region of the graph. Function parameters can be varied as can the domain and range of the display. Tabs allow the student to incorporate fractions, powers, and roots into their functions. 
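A quick worked slope calculation to accompany the tools above (the points are invented for illustration): given the points (1, 3) and (4, 9), the slope is m = (9 - 3)/(4 - 1) = 2, and the intercept is b = 3 - 2(1) = 1, so the line is y = 2x + 1. Checking with the second point, 2(4) + 1 = 9, is a useful habit for students.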
Planet hop In this interactive game, students find the coordinates of four planets shown on the grid or locate the planets when given the coordinates. Finally, they must find the slope and y-intercept of the line connecting the planets in order to write its equation. Players select one of three levels of difficulty. Tips for students are available as well as a full explanation of the key instructional ideas underlying the game. Constant dimensions This complete lesson plan requires students to measure the length and width of a rectangle using both standard and nonstandard units of measure, such as pennies and beads. As students graph the ordered pairs, they discover that the ratio of length to width of a rectangle is constant, in spite of the units. This hands-on experience leads to the definition of a linear function and to the rule that relates the dimensions of this rectangle. Barbie bungee Looking for a real-world example of a linear function? In this lesson, students model a bungee jump using a doll and rubber bands. They measure the distance the doll falls and find that it is directly proportional to the number of rubber bands. Since the mathematical scenario describes a direct proportion, it can be used to examine linear functions. Exploring linear data This lesson connects statistics and linear functions. Students construct scatterplots, examine trends, and consider a line of best fit as they graph real-world data. They also investigate the concept of slope as they model linear data in a variety of settings that range from car repair costs to sports to medicine. Handouts for four activities, spread out over three class periods, are provided. Supply and Demand Your company wants to sell a cartoon-character doll. At what price should you sell the doll in order to satisfy demand and maintain your supply? The lesson builds from graphing data to writing linear equations to creating and solving a system of equations in a real-world setting. Discussion points, handouts, and solutions are given. Printing Books Presented with the pricing schedules from three printing companies, students must determine the least expensive way to have their algebra books printed. They compile data in tables, spreadsheets, and a graph showing three equations. Throughout the lesson, students explore the relationships among lines, slopes, and y-intercepts in a real-world setting. Purplemath - Your Algebra Resource Algebra modules provide free tutorials in every topic of algebra, from beginning to advanced. Lessons concentrate on "practicalities rather than the technicalities" and include worked examples as well as explanations. Of particular interest are the modules on Systems of Linear Equations and Systems-of-Equations Word Problems. A site worth visiting! Geometry: Plane Figures and Solids Students use fundamental facts about distance and angles to describe and analyze figures and situations in two- and three-dimensional space and to solve problems, including those with multiple steps. They prove that particular configurations of lines give rise to similar triangles because of the congruent angles created when a transversal cuts parallel lines. Students apply this reasoning about similar triangles to solve a variety of problems, including those that ask them to find heights and distances. 
They use facts about the angles that are created when a transversal cuts parallel lines to explain why the sum of the measures of the angles in a triangle is 180 degrees, and they apply this fact about triangles to find unknown measures of angles. Students explain why the Pythagorean theorem is valid by using a variety of methods—for example, by decomposing a square in two different ways. They apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and polyhedra (NCTM, 2006, p. 20). These activities offer your eighth graders visual, interactive experiences with geometry. Through games as well as lessons and problems, they work with concepts of angle, parallel lines, the Pythagorean theorem, and solids. Angles This Java applet enables students to investigate acute, obtuse, and right angles. The student decides to work with one or two transversals and a pair of parallel lines. Angle measure is given for one angle. The student answers a short series of questions about the size of other angles, identifying relationships such as vertical and adjacent angles and alternate interior and alternate exterior angles. Triangle Geometry: Angles This site directly addresses students as it leads them to explore angles and their measurement. Most important, it offers applets to introduce the Pythagorean theorem by collecting data from right triangles online and provides an animated picture proof of the theorem. Manipula math with Java : the sum of outer angles of a polygon This interactive applet allows users to see a visual demonstration of how the exterior angles of any polygon sum to 360 degrees. Students can draw a polygon of any number of sides and have the applet show the exterior angles. They then decrease the scale of the image, gradually shrinking the polygon, while the display of the exterior angles remains and shows how the angles merge together to cover the whole 360 degrees surrounding the polygon. Parallel Lines and Ratio Three parallel lines are intersected by two straight lines. The classic problem is: If we know the ratio of the segments created by one of the straight lines, what can we know about the ratio of the segments along the other line? An applet allows students to clearly see the geometric reasoning involved. Area triangles This applet shows triangle ABC, with a line through B parallel to base AC. Students can change the shape of the triangle by moving B along the parallel line or by changing the length of base AC. What happens to the length of the base, the height, and the area of the triangle as students make these moves? Why? Understanding the Pythagorean Relationship Using Interactive Figures The activity in this example presents a visual and dynamic demonstration of this relationship. The interactive figure gives students experience with transformations that preserve area but not shape. The final goal is to determine how the interactive figure demonstrates the Pythagorean theorem. Distance Formula Explore the distance formula as an application of the Pythagorean theorem. Learn to see any two points as the endpoints of the hypotenuse of a right triangle. Drag those points and examine changes to the triangle and the distance calculation. Measuring by Shadows A student asks: How can I measure a tree using its shadow and mine? This letter from Dr. Math carefully explains the mathematics underlying this standard classroom exercise. 
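The shadow method in the letter above comes down to one proportion from similar triangles. A worked instance (the measurements are invented for illustration): if a 5-foot student casts a 4-foot shadow while the tree casts a 24-foot shadow, then height over shadow is the same for both, so h/24 = 5/4 and h = 30 feet. The sun's rays are nearly parallel, so the two right triangles are similar, which is what justifies the proportion.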
Finding the Height of a Lamp Pole Without using trigonometry, how can you find the height of a lamp pole or other tall object? Two methods, both depending on similar triangles, are outlined and illustrated. A rich problem. Polygon Capture In this lesson, students classify polygons according to more than one property at a time. In the context of a game, students move from a simple description of shapes to an analysis of how properties are related. Sorting Polygons In this companion to the above game, students identify and classify polygons according to various attributes. They then sort the polygons in Venn diagrams according to these attributes. Fire hydrant : what shape is at the very top of a fire hydrant? This activity begins an exploration of geometric shapes by asking students why the five-sided (pentagonal) water control valve of a fire hydrant cannot be opened by a common household wrench. The activity explains how geometric shape contributes to the usefulness of many objects. A hint calls students' attention to the shape of a normal household wrench, which has two parallel sides. Answers to questions and links to resources are included. Diagonals to Quadrilaterals Instead of considering the diagonals within a quadrilateral, this lesson provides a unique opportunity: Students start with the diagonals and deduce the type of quadrilateral that surrounds them. Using an applet, students explore characteristics of diagonals and the quadrilaterals that are associated with them. Geometric Solids and their Properties A five-part lesson plan has students investigate several polyhedra through an applet. Students can revolve each shape, color each face, and mark each edge or vertex. They can even see the figure without the faces colored in — a skeletal view of the "bones" forming the shape. The lesson leads to Euler’s formula connecting the number of edges, vertices, and faces, and ends with creating nets to form polyhedra. An excellent introduction to three-dimensional figures! Slicing solids (grades 6-8) So what happens when a plane intersects a Platonic solid? This virtual manipulative opens two windows on the same screen: one showing exactly where the intersection occurred and the other showing the cross-section of the solid created in the collision. Students decide which solid to view, and where the plane will slice it. Studying Polyhedra What is a polyhedron? This lesson defines the word. Students explore online the five regular polyhedra, called the Platonic solids, to find how many faces and vertices each has, and what polygons make up the faces. An excellent applet! From this page, click on Polyhedra in the Classroom. Here you have classroom activities to pursue with a computer. Developed by a teacher, the lessons use interactive applets and other activities to investigate polyhedra. Analyzing Data Sets Students use descriptive statistics, including mean, median, and range, to summarize and compare data sets, and they organize and display data to pose and answer questions. They compare the information provided by the mean and the median and investigate the different effects that changes in data values have on these measures of center. They understand that a measure of center alone does not thoroughly describe a data set because very different data sets can share the same measure of center. Students select the mean or the median as the appropriate measure of center for a given purpose (NCTM, 2006, p. 20).
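A tiny data set (the values are invented for illustration) shows why a measure of center alone can mislead: for the data 3, 4, 4, 5, 34, the median is 4 but the mean is (3 + 4 + 4 + 5 + 34)/5 = 10. A single outlier drags the mean far from most of the data while leaving the median untouched, exactly the comparison this focal point asks students to investigate.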
As reflected in this set of resources, the emphasis here is on understanding descriptive statistics; in particular, measures of center. You will find tutorials, lesson ideas, problems, and applets for teaching these topics, and even full projects that can involve your middle school students in worldwide data collection. Describing Data Using Statistics Investigate the mean, median, mode, and range of a data set through its graph. Manipulate the data and watch how these statistics change (or, in some cases, how they don't change). Understanding Averages Written for the student, this tutorial on mean, median, and mode includes fact sheets on the most basic concepts, plus practice sheets and a quiz. Key ideas are clearly defined at the student level through graphics as well as text. Plop It! Users click to easily and quickly build dot plots of data and view how the mean, median, and mode change as numbers are added to the plot. An efficient tool for viewing these statistics visually. Working hours : how much time do teens spend on the job? This activity challenges students to interpret a bar graph, showing only percentages, to determine the mean number of hours teenagers work per week. A more complicated and interesting problem than it may seem at first glance! A hint suggests that students assume that 100 students participated in the survey; a full solution sets out the math in detail. Related questions ask students to calculate averages for additional data sets. Stem-and-Leaf Plotter Can your students find the mean, median, and mode from a stem-and-leaf plot? They can use this applet to explore the measures of center in relation to the stem-and-leaf presentation of data. Students use the online plotter to enter as much data as they choose; then they determine measures of center and have the program check and correct their values. Ideas for class practice and discussion are provided in a lesson outline. Train race In this interactive game, students compute the mean, median, and range of the running times of four trains, then select the one train that will get to the destination on time. Players extend their basic understanding of these statistics as they try to find the most reliable train for the trip. Students can select one of three levels of difficulty. There are tips for students as well as a full explanation of the key instructional ideas underlying the game. Comparing Properties of the Mean and the Median Through the Use of Technology This interactive tool allows students to compare measures of central tendency. As students change one or more of the seven data points, the effects on the mean and median are immediately displayed. Questions challenge students to explore further the use of these measures of center; for example, What happens if you pull some of the data values way off to one extreme or the other extreme? The Global Sun Temperature Project This web site allows students from around the world to work together to determine how average daily temperatures and hours of sunlight change with distance from the equator. Students can participate in the project each spring, April-June. Students learn to collect, organize, and interpret data. You will find project information, lesson plans, and implementation assistance at the site. Down the Drain: How Much Water Do You Use? This Internet-based collaborative project will allow students to share information about water usage with other students from around the country and the world. 
Based on data collected by their household members and their classmates, students will determine the average amount of water used by one person in a day. Students must develop a hypothesis, conduct an experiment, and present their results. Data analysis : as real world as it gets Resources that firmly place data analysis in context! The lessons and interdisciplinary projects were selected to promote student interest by focusing on real-world situations and developing skills for using the power of mathematics to form important conclusions relevant to life. Students learn that working with data offers insights into society’s problems and issues. SMARTR: Virtual Learning Experiences for Students Visit our student site SMARTR to find related math-focused virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle school aged students. Students from around the country participated in every stage of SMARTR’s development and each of the learning experiences includes multimedia content including videos, simulations, games and virtual activities. The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page). You may be asking yourself, "What do the curriculum focal points have to do with the Principles and Standards for School Mathematics (PSSM)?" NCTM answers that identifying areas of emphasis at each grade level is the next step in implementing those principles and standards. Curriculum Focal Points for Prekindergarten Through Grade 8: A Quest for Coherence "provides one possible response to the question of how to organize curriculum standards within a coherent, focused curriculum, by showing how to build on important mathematical content and connections identified for each grade level, pre-K–8" (NCTM, 2006, p. 12). The curriculum focal points draw on the content standards described in PSSM, at times clustering several topics in one focal point. Also, the process standards are pivotal to well-grounded instruction, for "it is essential that these focal points be addressed in contexts that promote problem solving, reasoning, communication, making connections, and designing and analyzing representations" (p.20). This Middle School Portal publication offers resources intended to support you in teaching the key mathematical areas identified for grade 8: linear functions and simple systems of linear equations, parallel lines and angles in polygons, the Pythagorean theorem, polyhedra, and descriptive statistics. The selected resources are grounded in the Algebra, Geometry, and Data Analysis standards, and particularly in the process standards of Problem Solving and Representation. A variety of formats (tutorials, lesson plans, games, problems, and projects) are provided for your use in teaching these focal points. We believe you will find here resources that engage your eighth graders in probing the deeper and increasingly abstract concepts of middle school mathematics. 
A full description of the Curriculum Focal Points for Grade 8 is available at http://www.nctm.org/standards/focalpoints.aspx?id=340&ekmensel=c580fa7b_10_52_340_10 Curriculum Focal Points for Prekindergarten Through Grade 8: A Quest for Coherence may be viewed in its entirety at http://www.nctm.org/standards/content.aspx?id=270 Reprinted with permission from Curriculum Focal Points for Prekindergarten through Grade 8 Mathematics: A Quest for Coherence, copyright 2006 by the National Council of Teachers of Mathematics. All rights reserved. Author and Copyright Terry Herrera taught math several years at middle and high school levels, then earned a Ph.D. in mathematics education. She is a resource specialist for the Middle School Portal 2: Math & Science Pathways project. Please email any comments to [email protected]. Connect with colleagues at our social network for middle school math and science teachers at http://msteacher2.org. Copyright May 2008 - The Ohio State University. This material is based upon work supported by the National Science Foundation under Grant No. 0424671 and since September 1, 2009 Grant No. 0840824. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Newton's 1st Law of Motion

Newton’s 1st Law of Motion, also known as the law of inertia, can be summarized as follows: “An object at rest will remain at rest, and an object in motion will remain in motion, at constant velocity and in a straight line, unless acted upon by a net force.” This means that unless there is a net (unbalanced) force on an object, the object will continue in its current state of motion with a constant velocity. If this velocity is zero (the object is at rest), the object will continue to remain at rest. If this velocity is not zero, the object will continue to move in a straight line at the same speed. However, if a net (unbalanced) force does act on an object, that object’s velocity will be changed (it will accelerate). This sounds like a simple concept, but it can be quite confusing because it is difficult to observe this in everyday life. People are usually fine with understanding the first part of the law: “an object at rest will remain at rest unless acted upon by a net force.” This is easily observable. The donut sitting on your breakfast table this morning didn’t spontaneously accelerate up into the sky. Nor did the family cat, Whiskers, lounging sleepily on the couch cushion the previous evening, all of a sudden accelerate sideways off the couch for no apparent reason. The second part of the law poses a considerably bigger challenge to the conceptual understanding of this principle. The behavior it describes, that “an object in motion will continue in its current state of motion with constant velocity unless acted upon by a net force,” isn’t easy to observe here on Earth, making this law rather tricky. Almost all objects observed in everyday life that are in motion are being acted upon by a net force: friction. Try this example: take your physics book and give it a good push along the floor. As expected, the book moves for some distance, but rather rapidly slides to a halt. An outside force, friction, has acted upon it. Therefore, from typical observations, it would be easy to think that an object must have a force continually applied upon it to remain in motion. However, this isn’t so. If you took the same book out into the far reaches of space, away from any gravitational or frictional forces, and pushed it away, it would continue moving in a straight line at a constant velocity forever and ever, as there are no external forces to change its motion. When the net force on an object is 0, the object is in static equilibrium. You’ll revisit static equilibrium when discussing Newton’s 2nd Law. The tendency of an object to resist a change in velocity is known as the object’s inertia. For example, a train has significantly more inertia than a skateboard. It is much harder to change the train’s velocity than it is the skateboard’s. The measure of an object’s inertia is its mass. For the purposes of this course, inertia and mass mean the same thing: they are synonymous.

Question: A 0.50-kilogram cart is rolling at a speed of 0.40 meter per second. If the speed of the cart is doubled, what happens to the inertia of the cart?
Answer: The inertia (mass) of the cart remains unchanged.
Question: Which object has the greatest inertia?
- a falling leaf
- a softball in flight
- a seated high school student
- a rising helium-filled toy balloon
Answer: (3) a seated high school student has the greatest inertia (mass).
Question: Which object has the greatest inertia? 
- a 5.00-kg mass moving at 10.0 m/s
- a 10.0-kg mass moving at 1.0 m/s
- a 15.0-kg mass moving at 10.0 m/s
- a 20.0-kg mass moving at 1.0 m/s
Answer: (4) a 20.0-kg mass has the greatest inertia.

If you recall from the kinematics unit, a change in velocity is known as an acceleration. Therefore, the second part of this law could be re-written to state that an object acted upon by a net force will be accelerated. But, what exactly is a force? A force is a vector quantity describing the push or a pull on an object. Forces are measured in Newtons (N), named after Sir Isaac Newton, of course. A Newton is not a base unit, but is instead a derived unit, equivalent to 1 kg·m/s². Interestingly, the gravitational force on a medium-sized apple is approximately 1 Newton. You can break forces down into two basic types: contact forces and field forces. Contact forces occur when objects touch each other. Examples of contact forces include pushing a crate (applied force), pulling a wagon (tension force), a frictional force slowing down your sled, or even the force of air accelerating a spitwad through a straw. Field forces, also known as non-contact forces, occur at a distance. Examples of field forces include the gravitational force, the magnetic force, and the electrical force between two charged objects. So, what then is a net force? A net force is just the vector sum of all the forces acting on an object. Imagine you and your sister are fighting over the last Christmas gift. You are pulling one end of the gift toward you with a force of 5 N. Your sister is pulling the other end toward her (in the opposite direction) with a force of 5 N. The net force on the gift, then, would be 0 N, so the gift would not accelerate. As it turns out, though, you have a passion for Christmas gifts, and now increase your pulling force to 6 N. The net force on the gift now is 1 N in your direction, so the gift would begin to accelerate toward you (yippee!). It can be difficult to keep track of all the forces acting on an object.
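A short worked case with forces at right angles (the numbers are invented for illustration): if a 3 N force pulls an object east while a 4 N force pulls it north, the net force is their vector sum, with magnitude √(3² + 4²) = √25 = 5 N, directed between east and north. Drawing each force as an arrow and adding the arrows tip-to-tail is a reliable way to keep track of them all.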
Principles of Arc Welding, also known as "Stick Welding"

Electric arc welding is based on the fact that, as electricity passes through a gaseous gap from one electric conductor to another, a very intense and concentrated heat is produced. The temperature of the spark or arc jumping between two conductors is approximately 6,500 to 7,000 degrees Fahrenheit (although the plasma created by the arc reaches temperatures in excess of 20,000 °F). When the arc welding electrode is brought into contact with the workpiece, a high-temperature arc is established between them. The arc heat is controlled by current and by arc length (electrode-to-workpiece spacing). If they are correct, the arc heat melts the electrode tip and the workpiece beneath it to the desired depth. If two pieces to be joined together form the workpiece, they are fused together in the puddle. Fused filler metal from the electrode reinforces the joint, forming a penetrating raised deposit of welded metal (bead) as the puddle solidifies. Fluxing materials in the heavily covered electrode give the arc stability, help to float out impurities from within the molten puddle, and develop inert gases which keep the outer surfaces of the molten weld from oxidizing. The flux hardens as the weld cools, forming a crust/slag on the surface, providing a shield against contamination of the bead. As the bead forms, the electrode (and the arc) are continuously moved ahead. The molten puddle moves with it, and filler metal continues to transfer to it. With all factors fully controlled, a smooth, continuous bead is deposited to make a well-fused joint. These basic control factors are speed of electrode travel, amount of current (amperage), and length of arc (voltage). Arc welding requires high electrode current to produce an arc of great heat. These two controllable factors (current and heat) can produce hazards when not controlled. The by-products of arc welding (fumes, ultraviolet and infrared rays, sparks, and spatter) are also potential hazards when not controlled.

Arc Welding Positions

The American Welding Society classifies different arc welding positions with a numbering system that goes like this: 1F = flat fillet weld, 2F = horizontal fillet weld, 3F = vertical fillet weld, 4F = overhead fillet weld. To keep things simple I will just refer to the different positions as flat, horizontal, vertical, and overhead. OK?

Arc Welding in the Flat Position

There are many different types of flat welds; these are possibly the easiest welds for a beginning welder to practice on, before later advancing to the more difficult positions. One of the advantages is easier control of the welding puddle. Gravity is not much of a concern, because no matter what type of joint you are welding, it’s always FLAT, and all gravity does is help the weld lay down flat. On material to be welded which is 1/4 inch or less in thickness, V-grooving the edges usually is not necessary, depending on the strength and circumstances relating to the job. When welding material of greater thickness it is necessary to grind or cut a V-joint in the two pieces of metal, which will increase the overall strength of the weld. The two pieces should be placed touching each other, with both surfaces free from paint or other contaminants which will affect the strength of the weld. After striking an arc, with proper setting of amperage control and electrode size, the arc should emit a sound (such as BACON FRYING) which indicates proper arc length and speed of electrode movement. 
In the case of a long weld where more than one electrode is used, the arc should be restarted about 1/4 inch ahead of the crater, then brought back over the crater to fill it in properly and keep a uniform contour, thus assuring proper preheating as at the start of the weld.

Lap Joint

This type of seam is common although it is not the best weld for strength. To arc weld this joint successfully, remember that the piece which is not having its edge welded will require the greater portion of the heat. To distribute the heat, a weaving motion must be used, with most of the motion taking place over the bottom piece. The electrode must be held in a position to point it at a slight angle to the joint. To keep the arc length constant, the welder must raise the electrode slightly as the arc travels over the edge of the upper piece. The finished bead should be crowned, straight, even in width, smooth, and clean. It should show good fusion between the bead and the parent metal.

T-Joint

The T-joint is formed by placing one of the base metal pieces in the center, or near the center, of the other piece of base metal and at a right angle to it, to form a T shape. This joint may be welded on one side or on both sides, depending on the thickness and strength needed for the job.

Horizontal ARC Welding

When arc welding a butt, lap, edge, or outside corner joint in the horizontal position, the electrode should be pointed upward at about 20 deg. to counteract the sag of the molten metal in the crater. The electrode is also inclined about 20 deg. in the direction of travel of the weld. The arc gap (low voltage) should be shortened so the molten filler metal will travel across the horizontally positioned arc. Be sure to eliminate undercutting at the edge of the bead, which is the result of excess current for the size of the electrode used or poor electrode motion. When welding a T or inside corner joint with one piece of the base metal in a near vertical position, and one in a near horizontal position, the arc welding electrode is inclined 20 deg. in the direction of travel. The arc welding electrode is also positioned at about a 45 deg. angle to the horizontal piece of metal. The motion is usually some type of weave, with the forward motion taking place on the vertical piece.

Vertical Up and Down Arc Welding

These welds, even though more difficult than flat welds, must be of the same strength and quality. When welding a vertical seam, the molten metal tends to flow down the seam. This flow can be minimized by pointing the electrode approximately 20 deg. upward. A short arc must be maintained, and the motion must be such that the force of the arc will oppose the sagging. This can be done by using the previously welded area to help support the crater. When welding vertical down, hold the rod at an angle of 20 deg. to the workpiece. As you pass the electrode down the seam, crossing from side to side, use the pressure the arc creates to actually push the molten crater upward. This will allow time for each side to fuse together before each pass, resulting in a good strong weld. Vertical up is done in much the same way; the difference is that your pass from side to side will be in the up direction, allowing the weld crater to gently flow across the previously welded material, thus supporting the weld.

Overhead ARC Welding

To master the final and most difficult arc welding position (overhead), proceed through the practice beads, butt welds, and fillet welds. Electrode positions and welding techniques are shown below. 
The weaving motion of the electrode is lift and return, to freeze the puddle and prevent slag drip. Always be very careful when welding in this position; keep your shirt collar buttoned and use a welding apron to prevent any welding slag from dropping into your clothes. Two main things that beginners struggle with are amperage and arc length. Make sure to use a close arc length (tight enough to actually “feel” the rod scrubbing against the metal) and use enough amperage so the rod doesn’t stick when your arc is that tight.

Stick Welding Electrode Identification System: Mild Steel Electrodes

Mild steel electrodes are identified by a system that uses a series of numbers to indicate the minimum tensile strength of a good weld, the positions in which the electrode can be used, the type of flux coating, and the types of welding currents.

E6012: The letter E prefixes the number and represents the electrode. The E is used as a prefix for any filler metal that uses electricity to perform a weld.

E6012: The first two or three numbers indicate the minimum tensile strength of a good weld, for example E60XX, E70XX, E110XX. The tensile strength is given in pounds per square inch (psi). The actual strength is obtained by adding three zeros to the right of the number given. For example, E60XX is 60,000 psi and E110XX is 110,000 psi.

E6012: The next-to-last number refers to the welding position. The number 1, 2, 3, or 4 to the right of the tensile strength designation gives the position: 1 = all positions; 2 = horizontal or flat; 3 = flat only; 4 = all positions but vertical down.

E6012: The last number indicates the major type of flux covering and the type of welding current: 0 = DCRP; 1 = AC/DCRP; 2 = AC/DCSP; 3 = AC/DC; 4 = AC/DC; 5 = DCRP; 6 = AC/DCRP; 8 = AC/DCRP. (DC = direct current; DC/SP = direct current, straight polarity; DC/RP = direct current, reverse polarity; AC = alternating current.)

Understanding Stick Welding Electrode Data

Tensile strength: The load in pounds that would be required to break a section of good weld that has a cross-sectional area of one square inch.
Yield point, psi: The point in low- and medium-carbon steels at which the metal begins to stretch when stress is applied, after which it will not return to its original length.
Elongation: The percentage a two-inch piece of metal will stretch before it breaks.
Charpy V-notch, ft. lb.: The impact load required to break a test piece of welded metal. This test may be performed on metal below room temperature, at which point it is more brittle.

Stick Welding Alloy Elements and Their Effects on Steel

Carbon (C): As the percentage of carbon increases, the tensile strength increases, the hardness increases, and ductility is reduced.
Sulphur (S): It is usually a contaminant, and the percentage should be kept as low as possible, below 0.04%. As the percentage of sulphur increases, it can cause hot shortness and porosity.
Phosphorus (P): It is usually a contaminant and the percentage should be kept as low as possible. As the percentage of phosphorus increases, it can cause brittleness, reduced shock resistance, and increased cracking. 
Manganese (Mn): As the percentage of manganese increases, the tensile strength, hardness, resistance to abrasion, and porosity all increase; hot shortness is reduced.
Silicon (Si): As the percentage of silicon increases, tensile strength increases, and cracking may increase.
Chromium (Cr): As the percentage of chromium increases, tensile strength, hardness, and corrosion resistance increase, with some decrease in ductility.
Nickel (Ni): As the percentage of nickel increases, tensile strength, toughness, and corrosion resistance increase.
Molybdenum (Mo): As the percentage of molybdenum increases, tensile strength at elevated temperatures and corrosion resistance increase.

Stick Electrode Selection: Choosing the Right Stick Rod

Type of current: Can the welding power source supply AC only, DC only, or both AC and DC?
Power range: What is the amperage range on the welder and its duty cycle? Different types of electrodes require different amperage settings even for the same size welding electrode.
Type of metal: Some welding electrodes may be used to join more than one similar type of metal. Other electrodes may be used to join two different types of metal.
Metal thickness: The penetration characteristics of each welding electrode may differ. Selecting the electrode that will properly weld on a specific thickness of material is very important.
Weld position: Some electrodes will weld in all positions. Other electrodes may be restricted to the flat and horizontal or vertical positions, or to the flat position only. Some electrodes will weld with sufficient penetration on metal that is rusty, oily, dirty, or galvanized.
Number of passes: The amount of reinforcement needed may require more than one pass. Some welding electrodes build up faster and others will penetrate deeper. The welding slag may be removed more easily from some welds than others.
Temperature: Welded metals react differently to temperature extremes. Some welds become very brittle and crack easily in low-temperature service. (When welding in cold temperatures, you should pre-heat the metal, and after the weld is made it should be covered or shielded from the cold to allow it to cool slowly.)
Mechanical properties: Mechanical properties such as tensile strength, yield strength, hardness, toughness, ductility, and impact strength can be modified by the selection of specific welding electrodes.
Distortion: Welding electrodes that will operate on low-amperage settings will have less heat input and cause less distortion. Welding electrodes that have a high rate of deposition (fills joints rapidly) and can travel faster will cause less distortion.
Welding clean-up: The hardness or softness of a weld greatly affects any grinding, drilling, or machining. Also, slag and spatter removal affect the time and amount of clean-up required.

Most Common Steel Arc or Stick Welding Electrodes

E6011: The E6011 electrodes are designed to be used with AC or DC reverse polarity currents and have an organic-based flux. They have a forceful arc that results in deep penetration and good metal transfer in the vertical and overhead positions. The electrode is usually used with a whipping or stepping motion. This motion helps remove unwanted surface materials such as paint, oil, dirt, and galvanizing. The E6011 has added arc stabilizers which allow it to be used with AC. Using this welding electrode on AC current only slightly reduces its penetration qualities. The weld puddle may be slightly concave from the forceful action of the rapidly expanding gases. 
This forceful action also results in more spatter and sparks during welding.

E6013: The E6013 electrodes are designed to be used with AC or DC of either polarity. They have a rutile-based flux (titanium dioxide). The electrode has a very stable arc that is not very forceful, resulting in a shallow penetration characteristic. This limited penetration characteristic helps with poor-fitting joints or when welding thin materials. Thick sections can be welded, but the joint must be grooved to allow for multiple weld passes. A thick layer of slag is deposited on the weld, but it is easily removed, and may even remove itself after cooling. These electrodes are commonly used for sheet metal fabrication and general repair work. These electrodes are not designed for welding downhill and will trap slag easily. (They work well for learning to weld, but after that move on to real electrodes.)

E7018: The E7018 electrodes are designed to be used with AC or DC reverse polarity currents. They have a low-hydrogen-based flux with iron powder added. The E7018 electrodes have moderate penetration and buildup. The slag layer is heavy and hard but can be removed easily with a chipping hammer. The weld metal is protected from the atmosphere by the slag layer and not by rapidly expanding gases. The E7018 is very susceptible to moisture, which may lead to weld porosity.

E7024: The E7024 electrodes are designed to be used with AC or DC of either polarity. They have a rutile-based flux with iron powder added. This welding electrode has a deep penetration and fast fill characteristic. The flux contains about 50% iron powder, which gives the flux its high rate of deposition. The heavy flux coating helps control the arc and can support the electrode so that a drag technique can be used. The drag technique allows this electrode to be used by welders with less skill. The slag layer is heavy and hard but can be easily removed. Because of the large fluid puddle, this electrode is used in the flat and horizontal positions only.

Amperage Guide for Stick / Arc Welding: Help in Choosing Heat Settings

I hope this table helps you select the correct amp settings for arc welding. Voltage is typically only adjustable on really old stick and arc welding machines; amperage is the main thing to set for arc welding on newer welding machines.
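The identification scheme above is mechanical enough to decode automatically. The following C fragment is purely illustrative (the function and its layout are my own sketch, not part of any welding reference); it simply applies the E-code rules described earlier:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Decode an AWS mild-steel stick electrode code such as "E6013" or
 * "E11018", following the numbering scheme described above. */
static void decode_electrode(const char *code)
{
    size_t len = strlen(code);
    if (len < 5 || code[0] != 'E') {
        printf("%s: not a mild-steel E-code\n", code);
        return;
    }
    int strength  = atoi(code + 1) / 100; /* E6013 -> 60 (x 1,000 psi)   */
    char position = code[len - 2];        /* 1 = all, 2 = horiz./flat,
                                             3 = flat only, 4 = all but
                                             vertical down               */
    char flux     = code[len - 1];        /* flux covering / current     */
    printf("%s: %d,000 psi min tensile, position code %c, flux/current code %c\n",
           code, strength, position, flux);
}

int main(void)
{
    decode_electrode("E6013");   /* 60,000 psi, all positions, AC/DCSP        */
    decode_electrode("E7018");   /* 70,000 psi, all positions, low-hydrogen   */
    return 0;
}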
Imagine you are in charge of humanity's search-for-extraterrestrials program. One day, after scanning the skies, you find a signal. The signal consists of a series of pulses, and after a little bit of work, you discover that they form an image 197×199 in size. The image contains what looks like simple arithmetic. However, no normal number system seems to work:

101110 + 100100 = 100010

Welcome to the world of "weird binary". So what exactly is going on? It turns out that even though a number system can be based on two different symbols, it doesn't have to be the traditional binary system we all know and love. These new binary number systems have differing properties, and come with their own strengths and weaknesses.

So what defines a binary number system? Firstly we require two symbols. Let us use "0" and "1" for simplicity. We also require a place-based number system, which we will assume operates in the normal right-to-left increasing powers of a single base. Note that we could imagine a left-to-right system, but that just corresponds to using the inverse of a right-to-left base, so doesn't add anything new. Once we have this, we can define a simple multiplication table; anything multiplied by zero is zero, and 1×1 = 1. Similarly, zero is the identity for addition. Thus only 1+1 = 2 needs to be defined, since "2" doesn't fit in our binary system. By choosing different representations of the number 2, we can define different binary number systems. Thus by enumerating these choices we can see which weird binary systems exist.

Before we proceed, it is nice to define a format for such numbers. Unfortunately, the complexity of the addition operation will be quite large. This is due to the carries being unlike those of normal binary numbers. Thus, to simplify the carry calculation, we will provide an integer per bit. Such a layout would look like the C sketch given below. It has the form of the weird binary encoded in the value of the number 2, and this specification is stored within the number structure itself. To create weird binary numbers, we can use induction. We know the value of one, and can then add it multiple times to obtain any natural number we require. These C routines assume that the symbolic representation for the number two lies solely to the left of the radix point. This means that carries only propagate to the left. Allowing carries to propagate to the left and right simultaneously makes it rather difficult to construct an addition routine. (In effect it is possible to make a linear cellular automaton this way, and such things are subject to the halting problem.) Of course, some cases are solvable. For example, the case where 1+1 = 10.1 yields b + 1/b = 2, and thus b = 1. This base-one solution also appears in other situations; see the 1+1 = 11 case below.
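A sketch of such a layout in C, with illustrative names and a fixed digit count (both are assumptions for this sketch, chosen for clarity rather than generality):

#include <string.h>

/* One int per binary place, so a digit can temporarily exceed 1 while
 * carries are being resolved.  The symbolic representation of "two"
 * travels with the number and defines which weird system we are in.  */
#define NDIGITS 64

struct wnum {
    int digit[NDIGITS];  /* digit[i] multiplies base^i                  */
    int two[NDIGITS];    /* pattern for 1 + 1; e.g. two[1] = two[2] = 1
                            encodes the negabinary rule 1 + 1 = 110     */
};

/* Induction base: the number one, tagged with the system's rule. */
static struct wnum make_one(const int two[NDIGITS])
{
    struct wnum n;
    memset(&n, 0, sizeof n);
    n.digit[0] = 1;
    memcpy(n.two, two, sizeof n.two);
    return n;
}

Adding one repeatedly then builds up the natural numbers, as in the induction described above; the stored pattern for two is what drives the carry step.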
1+1=0

The first possibility results in a rather boring number system. In this system, addition works like the xor operator. Multiplication still mixes number places together, but no carries take place, so each number position operates as if it were alone during the "addition" step. This system is somewhat useful in cryptography, and Intel has added the PCLMULQDQ instruction to perform this operation.

1+1=1

This number system is even less interesting, with addition acting like a logical or. Once you have a 1, you can never remove it. This means that negative numbers cannot exist in this system, since you can never add two non-zero numbers to get zero.

1+1=10

This is the traditional binary number system used in computers. By using two's-complement arithmetic, we can represent negative numbers. All normal mathematical operations work as you would expect in this system (unlike in the previous two systems). However, this isn't the end of the story, as several more interesting binary systems exist.

Any other binary system that defines two as ending with a 1 is problematic. In such systems, you cannot form minus one. (In other words, the equation 1 + y = 0 has no solution.) This severely restricts the usefulness of such systems. However, there is one system of note.

1+1=11

This describes the "stick counting" system of natural numbers. The more sticks you have, the larger the number; the total number of sticks exactly corresponds to the number you have. Another way of looking at this system is to say that it is "base 1". Unfortunately, working in base 1 is extremely inefficient, as the exponential savings in symbol compression don't happen: 999 requires only three symbols to write in decimal, but would require 999 symbols in base 1.

Since all the interesting systems define two as ending with a zero symbol, we can now evaluate what negative numbers are. This can be done by first calculating what minus one is, and then multiplying it by the correct natural number.

1+1=100

Using the above, we have b² = 2, where b is the base. Thus, the bases that satisfy this system are ±√2. This system can exactly store values proportional to the square root of two. This comes at a cost: numbers written in this form are twice as large as in normal binary. Normal binary can approximate the square root of two by including digits below the radix point, and by using enough digits the error can be made as small as needed. This means the system is only really useful if exact calculations with such values are required.

1+1=110

This system is defined by b² + b = 2, and thus b = 1 or b = -2. The b = 1 solution is spurious, leaving the base -2, or "negabinary", number system. This number system is fairly well known; see Knuth's Art of Computer Programming, Seminumerical Algorithms, for a discussion of its properties. Since this case is very similar to normal binary (it is only the odd positions that vary), there exist fast algorithms to convert from binary to negabinary and back (see the sketch below). There are also fast ways to add, multiply, and divide these numbers. Such numbers are more uniform than normal binary: no two's-complement trick is required to represent negative numbers, so there is no longer a difference between signed and unsigned multiplication. A table of the representation of such numbers is:

-16: 110000    -15: 110001    -14: 110110    -13: 110111
-12: 110100    -11: 110101    -10: 1010       -9: 1011
 -8: 1000       -7: 1001       -6: 1110       -5: 1111
 -4: 1100       -3: 1101       -2: 10         -1: 11
  0: 0           1: 1           2: 110         3: 111
  4: 100         5: 101         6: 11010       7: 11011
  8: 11000       9: 11001      10: 11110      11: 11111
 12: 11100      13: 11101      14: 10010      15: 10011
 16: 10000
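The fast conversion mentioned above can be done with the well-known mask trick (this snippet is our illustration, not code from the original article); the results can be checked against the table:

#include <stdint.h>

/* Binary <-> negabinary using the 0xAAAAAAAA mask trick.  Odd bit
 * positions carry weight +2^i in binary but -2^i in negabinary;
 * adding and then xoring the mask fixes up the difference. */
static uint32_t to_negabinary(int32_t x)
{
    return ((uint32_t)x + 0xAAAAAAAAu) ^ 0xAAAAAAAAu;
}

static int32_t from_negabinary(uint32_t n)
{
    return (int32_t)((n ^ 0xAAAAAAAAu) - 0xAAAAAAAAu);
}

/* e.g. to_negabinary(6) == 0x1A == 11010 and to_negabinary(-2) == 10,
 * matching the table above. */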
1+1=1000

This system is somewhat similar to the case where 1+1=100. It is a "fat binary" system, where there are multiple representations for the same numbers. In this case, the base is the cube root of two instead of the square root. The result is that numbers take three times as much space as normal. Such a system is only useful if you need exact arithmetic with such numbers.

1+1=1010

This case has b³ + b = 2, with the three solutions b = 1 and b = (-1 ± i√7)/2. The base-one case is again spurious, leaving two bases that are complex conjugates of each other. These bases allow one to calculate complex arithmetic using a single binary string. (The previous case also allowed complex solutions, but symmetry prevented them from being usefully different from the real case.) Integers in this representation look like:

-16: 101110000     -15: 101110001     -14: 101111010     -13: 101111011
-12: 101010100     -11: 101010101     -10: 101011110      -9: 101011111
 -8: 101011000      -7: 101011001      -6: 100010         -5: 100011
 -4: 111100         -3: 111101         -2: 110            -1: 111
  0: 0               1: 1               2: 1010            3: 1011
  4: 11100100        5: 11100101        6: 11101110        7: 11101111
  8: 11101000        9: 11101001       10: 11100110010    11: 11100110011
 12: 11001100       13: 11001101       14: 11100010110    15: 11100010111
 16: 11100010000

Notice how the length of the numbers doesn't increase monotonically as they move away from zero. This is because the boundary of the numbers of a given symbol length forms a fractal in the complex plane.

1+1=1100

This case has b³ + b² = 2, resulting in b = 1 or b = -1 ± i. Again, the base-one solution is spurious. The resulting systems are quite well known, and are often presumed to be the only complex binary ones; as the previous case shows, this isn't so. See the book "Hacker's Delight" for a description, and for the inverse problem of solving for the number two in these systems. Integers in this representation are:

-16: 1110100000000     -15: 1110100000001     -14: 1110100001100     -13: 1110100001101
-12: 11010000          -11: 11010001          -10: 11011100           -9: 11011101
 -8: 11000000           -7: 11000001           -6: 11001100           -5: 11001101
 -4: 10000              -3: 10001              -2: 11100              -1: 11101
  0: 0                   1: 1                   2: 1100                3: 1101
  4: 111010000           5: 111010001           6: 111011100           7: 111011101
  8: 111000000           9: 111000001          10: 111001100          11: 111001101
 12: 100010000          13: 100010001          14: 100011100          15: 100011101
 16: 100000000

The symbol lengths for numbers in this system increase more rapidly than in the previous case. This is because the previous system in effect de-weights imaginary parts by a factor of the square root of seven. That weighting means the previous system cannot exactly represent numbers such as the imaginary unit, i. In exchange for the larger verbosity, the current system doesn't suffer this problem. Again, the symbol lengths do not increase monotonically away from zero: the numbers of a given length form a "dragon fractal".

1+1=1110

This final case, where the number two is represented by four symbols, isn't particularly interesting. The cubic equation for the base, b³ + b² + b = 2, produces horribly messy solutions. Only when you would like exact arithmetic with such a base would this system be better than the others already discussed.

1+1=10000

This is yet another fat-binary case, where the base is the fourth root of two. Other than that, it is uninteresting.

1+1=10010

This yields a quartic equation, b⁴ + b = 2, which unfortunately produces messy solutions just like the 1+1=1110 case. It isn't useful.

1+1=10100

This gives the equation b⁴ + b² = 2, with solutions b = ±1 and b = ±i√2. Ignoring the non-imaginary solutions yields the interesting case of a pure-imaginary base. (Which of the two we choose doesn't matter, due to symmetry under complex conjugation.) This number system is also mentioned in Knuth's Art of Computer Programming, where he states that its disadvantage relative to the -1 + i system is that the imaginary unit is represented by an infinitely long non-repeating string. However, if we use floating point, the small error introduced is usually ignorable.
This is especially true because this system has no trouble exactly representing integers:

-16: 10100000000     -15: 10100000001     -14: 10100010100     -13: 10100010101
-12: 10100010000     -11: 10100010001     -10: 1000100          -9: 1000101
 -8: 1000000          -7: 1000001          -6: 1010100          -5: 1010101
 -4: 1010000          -3: 1010001          -2: 100              -1: 101
  0: 0                 1: 1                 2: 10100             3: 10101
  4: 10000             5: 10001             6: 101000100         7: 101000101
  8: 101000000         9: 101000001        10: 101010100        11: 101010101
 12: 101010000        13: 101010001       14: 100000100        15: 100000101
 16: 100000000

This complex number system is technically also fractal, but its system of nested rectangles isn't particularly complicated.

1+1=10110 ... 1+1=100010

These systems are again like the 1110 and 10010 cases, except with quartic and quintic equations needing to be solved. The resulting solutions contain several nested roots, and as a result do not make very interesting bases.

1+1=101110

This case has b⁵ + b³ + b² + b = 2, which amongst its solutions has b = (1 ± i√7)/2. This case is thus very similar to that of 1+1=1010, and has the integer table:

-16: 110110000        -15: 110110001        -14: 101011110        -13: 101011111
-12: 101010100        -11: 101010101        -10: 101010010         -9: 101010011
 -8: 101101000         -7: 101101001         -6: 101110110         -5: 101110111
 -4: 1100              -3: 1101              -2: 1010              -1: 1011
  0: 0                  1: 1                  2: 101110             3: 101111
  4: 100100             5: 100101             6: 100010             7: 100011
  8: 1111000            9: 1111001           10: 10111000110       11: 10111000111
 12: 101100011100      13: 101100011101      14: 101100011010      15: 101100011011
 16: 101100010000

The boundary of the numbers of a given symbol length is again a fractal. This is the base used by the aliens described in the introduction: their crazy mathematical statement is simply showing that 2 + 4 = 6.

Creating complex numbers in weird binary

As was shown in the beginning, creating natural numbers is easy: induction can be used to create any number once we have the definition of the number two. Integers can be created once we can evaluate what minus one is, which again only depends on the definition of the number two. Unfortunately, complex numbers aren't so simple. There, we need to know which of the possibly many solutions for the base we are using in order to obtain a value for the imaginary unit, i. It turns out that the last number system described contains the identity i√7 = 1 + b + b², when b = (1 + i√7)/2. Thus we can simply divide by the square root of seven, and use induction again to evaluate any pure-imaginary integer. There is one other wrinkle: because the base includes a factor of 1/2, this system also contains half-integer complex numbers. These can be created by adding or subtracting the base, reducing the problem to the integer case. For other weird binary bases, such as b = -1 + i, the procedure is somewhat different, especially since in that case i can be represented exactly.

The reverse process, converting a weird binary number back to an ordinary value, is relatively simple: we just add up the relevant powers of the base.
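A minimal version of that reverse conversion evaluates the digit string at the (possibly complex) base with Horner's rule. The name and digit layout follow the earlier sketch and are our assumptions:

#include <complex.h>
#include <stddef.h>

/* Value of a weird binary digit string: digit[i] multiplies base^i,
 * so evaluate the polynomial by Horner's rule from the top digit down. */
static double complex wb_value(const int *digit, size_t size,
                               double complex base)
{
    double complex v = 0;
    for (size_t i = size; i-- > 0; )
        v = v * base + digit[i];
    return v;
}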
Are there any other interesting bases for weird binary? It turns out that there aren't. For a base to be interesting, its complex squared norm must be equal to two. A pure imaginary base with this property is b = ±i√2, discussed above. Similarly, there is the pure real case b = ±√2, also described. This leaves the complex cases. Normalizing, we have:

b = [±1 ± i√(2n² - 1)]/n

If we evaluate b², we have:

b² = (2/n²) × [1 - n² ± i√(2n² - 1)]

We need 2/n², in lowest terms, to be a multiple of 1/n. If this isn't the case, then we can never build a terminating expression that evaluates to the number two: the denominators will increase without limit as we increase the powers of the base, and no cancellation will occur. To prevent that, n needs to be 1 or 2. If n is one, then we get b = ±1 ± i, and if n is two, we obtain the other purely complex solutions, b = (±1 ± i√7)/2.

So binary number systems can be quite complicated. However, they unfortunately cannot represent quaternions or octonions, since the roots of polynomials are already closed under the complex numbers. But still, as can be seen, there are several complex binary number systems, some more well known than others.
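As a final sanity check, the alien arithmetic can be verified numerically. This is an illustrative program of ours, not from the original article; it repeats the Horner evaluator sketched earlier:

#include <complex.h>
#include <math.h>
#include <stdio.h>

static double complex wb_value(const int *digit, size_t size,
                               double complex base)
{
    double complex v = 0;
    for (size_t i = size; i-- > 0; )
        v = v * base + digit[i];
    return v;
}

int main(void)
{
    /* b = (1 + i*sqrt(7))/2, the alien base (the 1+1=101110 system). */
    double complex b = 0.5 + (sqrt(7.0) / 2.0) * I;

    /* Digit strings, least-significant digit first. */
    int two[]  = {0, 1, 1, 1, 0, 1};   /* 101110 */
    int four[] = {0, 0, 1, 0, 0, 1};   /* 100100 */
    int six[]  = {0, 1, 0, 0, 0, 1};   /* 100010 */

    printf("%.1f %.1f %.1f\n",
           creal(wb_value(two, 6, b)),
           creal(wb_value(four, 6, b)),
           creal(wb_value(six, 6, b)));  /* prints 2.0 4.0 6.0 */
    return 0;
}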
http://locklessinc.com/articles/weird_binary/
Sets and Functions Help (page 2)

Introduction to Sets and Functions

A set is a collection of objects. We denote a set with a capital roman letter, such as S or T or U. If S is a set and s is an object in that set, then we write s ∈ S and we say that s is an element of S. If S and T are sets, then the collection of elements common to the two sets is called the intersection of S and T and is written S ∩ T. The set of elements that are in S or in T or in both is called the union of S and T and is written S ∪ T.

A function from a set S to a set T is a rule that assigns to each element of S a unique element of T. We write f : S → T.

Let S be the set of all people who are alive at noon on October 10, 2004 and T the set of all real numbers. Let f be the rule that assigns to each person his or her weight in pounds at precisely noon on October 10, 2004. Discuss whether f : S → T is a function.

Indeed f is a function, since it assigns to each element of S a unique element of T. Notice that each person has just one weight at noon on October 10, 2004: that is part of the definition of "function." However, two different people may have the same weight; that is allowed.

Let S be the set of all people and T be the set of all people. Let f be the rule that assigns to each person his or her brother. Is f a function?

In this case f is not a function, for many people have no brother (so the rule makes no sense for them) and many people have several brothers (so the rule is ambiguous for them).

Let S be the set of all people and T be the set of all strings of letters not exceeding 1500 characters (including blank spaces). Let f be the rule that assigns to each person his or her legal name. (Some people have rather long names; according to the Guinness Book of World Records, the longest has 1063 letters.) Determine whether f : S → T is a function.

This f is a function because every person has one and only one legal name. Notice that several people may have the same name (such as "Jack Armstrong"), but that is allowed in the definition of function.

You Try It: Let f be the rule that assigns to each real number its cube root. Is this a function?

In calculus, the set S (called the domain of the function) and the set T (called the range of the function) will usually be sets of numbers; in fact they will often consist of one or more intervals in ℝ. The rule f will usually be given by one or several formulas. Many times the domain and range will not be given explicitly. These ideas will be illustrated in the examples below.

You Try It: Consider the rule that assigns to each real number its absolute value. Is this a function? Why or why not? If it is a function, then what are its domain and range?

Examples of Functions of a Real Variable

Let S = ℝ, T = ℝ, and let f(x) = x². This is mathematical shorthand for the rule "assign to each x ∈ S its square." Determine whether f : ℝ → ℝ is a function.

We see that f is a function, since it assigns to each element of S a unique element of T, namely its square.

Math Note: Notice that, in the definition of function, there is some imprecision in the definition of T. For instance, in Example 1.24 we could have let T = [0, ∞) or T = (−6, ∞) with no significant change in the function. In the example of the "name" function (Example 1.23), we could have let T be all strings of letters not exceeding 5000 characters in length, or even all strings without regard to length. Likewise, in any of the examples we could make the set S smaller and the function would still make sense.
It is frequently convenient not to describe S and T explicitly.

Let f(x) = √(1 − x²). Determine a domain and range for f which make f a function.

Notice that f makes sense for x ∈ [−1, 1] (we may not take the square root of a negative number, so we cannot allow x > 1 or x < −1). If we understand f to have domain [−1, 1] and range ℝ, then f : [−1, 1] → ℝ is a function.

Math Note: When a function is given by a formula, as in Example 1.25, with no statement about the domain, then the domain is understood to be the set of all x for which the formula makes sense.

You Try It: Let f be given by a formula. What are the domain and range of this function?

Let f be a rule given by two different formulas on two parts of the real line. Determine whether f is a function.

Notice that f unambiguously assigns to each real number another real number. The rule is given in two pieces, but it is still a valid rule. Therefore it is a function with domain equal to ℝ and range equal to ℝ. It is also perfectly correct to take the range to be (−4, ∞), for example, since f only takes values in this set.

Math Note: One point that you should learn from this example is that a function may be specified by different formulas on different parts of the domain.

You Try It: Does the expression define a function? Why or why not?

Let f(x) = ±√x. Discuss whether f is a function.

This f can only make sense for x ≥ 0. But even then f is not a function, since it is ambiguous: for instance, it assigns to x = 1 both the numbers 1 and −1.
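A quick check of the two complete "You Try It" problems above (our solutions, following the pattern of the worked examples): the cube-root rule f(x) = x^(1/3) is a function, since every real number, positive, negative, or zero, has exactly one real cube root; its domain and range are both ℝ. The absolute-value rule g(x) = |x| is also a function, since each real number has exactly one absolute value; its domain is ℝ and its range is [0, ∞).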
http://www.education.com/study-help/article/calculus-help-sets-functions/?page=2
ECOLOGY OF THE WOLF

Populations of wolves that are unexploited by man are rare and generally so remote that a long-term study is prohibitively expensive or impractical. National parks provide unique environments in which to observe these important predators. The insular nature of Isle Royale lends special significance to wolf-population studies, for there is relatively little opportunity for transfer between the island and the mainland.

While the wolf population has exhibited great year-to-year stability in total numbers, there have been important variations in its social organization. Since the population is rather small, significant changes in its structure can be linked circumstantially to single events such as the death of an alpha male (Jordan et al. 1967) or the possible ingress of a new pack (Wolfe and Allen 1973).

The basic, long-term pattern of a single, large pack and several smaller social units has recently changed. In the early 1970s the island supported two large packs that each utilized about half the area. This provided the potential for an increase in total population, and wolf numbers reached a midwinter high of 44 in 1976. The recent development of a second major pack on the island appears to have resulted from a significant increase in moose vulnerability and a higher population of beaver, an important summer prey species. This addition has caused a higher level of predation on moose in winter, especially when deep-snow conditions increased the vulnerability of calves.

Observations of wolves in winter were made either from light aircraft or from the ground using a telescope at long range. Summer observations were limited to a period of several days at one rendezvous site. Alpha wolves were the only animals consistently identified from year to year. In addition to providing the pack with leadership, they were most active in scent-marking their environment (during winter observations) and were the most active breeders. Alpha wolves restricted the courtship activities of subordinate members of the pack, and mate preferences were demonstrated, both of which contributed to a reduction in courtship behavior and presumably of mating among subordinate wolves.

The recent establishment of a second wolf pack on Isle Royale was a significant departure from the pattern observed in the 1960s, when the population remained remarkably constant. The history of this wolf population has illustrated the effectiveness of natural mechanisms which adjusted wolf numbers to their food base.

A Brief History

Yearly variations in the Isle Royale wolf population are detailed by Mech (1966), Jordan et al. (1967), and Wolfe and Allen (1973). These provide the basis for the following review. During the initial 11 winters of the project (1959-69), the wolf population varied between 17 and 28 (Table 5). The highest population occurred while the large pack was still in operation in 1965; the lowest was in 1969, after 2 years of social instability.

TABLE 5. Estimated number of wolves on Isle Royale in midwinter, 1959-76.

From 1959 through 1966, the population contained only one large pack. For the first 3 years, this pack traveled over the entire island in midwinter; in subsequent years, its movement usually was restricted to the southwestern two-thirds of the island. The large pack numbered 15-17 wolves from 1959 through 1963. It reached an all-time high in 1964, when 22 wolves were seen. From 1964 to 1966, the pack remained large, at 15-20 wolves.
In 1965, a pack of five appeared, believed to have separated from the large pack. This group may have persisted as a pack of four in 1966, although there was some speculation that it left the island in late winter 1965. In 1966, the large pack initially numbered 15, but three wolves dissociated from the pack shortly after the winter study began. The alpha male, recognizable from 1964 to 1966, developed a limp and was apparently killed by other wolves in March 1966. For the remainder of the 1966 study, the largest group numbered eight wolves. The strong leadership of the alpha male was thought to be instrumental in the maintenance of the large pack, and its fragmentation was linked circumstantially to his demise.

When the 1967 winter study began, two packs (six and seven wolves) were found in the central and southwestern parts of the island. These two packs may have been remnants of the large pack, since their travels overlapped considerably. Another pack of four occupied the northeastern end of the island. In February, a pack of seven, including four black wolves, was seen in Amygdaloid Channel, apparently having crossed the frozen channel from Ontario. There was evidence of violence among wolves; a wolf with a bloody head was seen running toward Canada when the "Black Pack" was first seen, and a few days later an injured wolf was observed near the lodge buildings at Rock Harbor. The packs of six and four were not relocated after the Black Pack was seen.

Between 2 and 7 February 1968, two black wolves were observed in a pack of six at the west end of the island. Another pack, numbering seven (the Big Pack), was first seen on 12 February and included one black wolf. While these packs could have been the same, Wolfe and Allen (1973) considered them distinct and suggested that the pack with two black wolves left the island via an existing ice bridge. The single black wolf in the Big Pack was probably one of the four black wolves first seen in 1967. How this wolf became integrated into a resident pack is unknown; Wolfe and Allen (1973) speculated that it could have been associated previously in some way with wolves in this pack, thus implying an additional interchange of wolves between the island and the mainland.

The Big Pack included three wolves that were recognizable from 1968 through 1970: the alpha male and female and the black wolf, a male. The alpha pair was observed mating in 1968. Mutual courtship was observed in this pair in 1969 and 1970, indicating probable mating. In all 3 years, the black male was often seen in the company of the alpha pair and seemed to enjoy special status. Consequently, he was designated the second-ranked, or beta, male. A photograph of the alpha female, taken in 1968 by D. L. Allen, revealed an unusual conformation in her left front leg. By 1972, she had developed a severe limp in this leg and was not seen the following year. The Big Pack persisted into the present study period and became known as the West Pack after the establishment of a second pack in 1972.

Annual Fluctuations, 1971-74

During the present study, the Isle Royale wolf population continued to increase, from a low of 17 in 1968 to a high of 31 in 1974. In summer 1971, a second pack became established. During winter studies from 1972 through 1974, each pack occupied approximately half of the island.

1971. The Big Pack (hereafter referred to as the West Pack) was recognized in 1971 by the presence of the black male and alpha female.
The black male was clearly the alpha male, replacing the large gray male that had been dominant from 1968 through 1970. Although ten members were seen twice, the pack usually numbered seven to nine wolves. On the basis of limited behavioral information, two pups were believed present in the pack. The pack ranged over the southwestern third of the island, venturing northeast as far as the middle of Siskiwit Lake. In addition to the main pack, three duos were observed: one traveled among the peninsulas of the northeast end of the island, a second ranged from Moskey Basin through Chippewa Harbor to Wood Lake, and a third inhabited the shoreline of Siskiwit Bay, traveling between Houghton Point and Malone Bay. Four single wolves also were recognized, with their respective activities centered at the southwest end, the northeast end, the north shore west of Todd Harbor, and Malone Bay. While the maximum number of wolves seen on a single day was 16, the presence of three duos and four singles was well established, and the population may be summarized as follows:

1972. Two packs (West and East) accounted for most of the island's wolves from January to March 1972. Each pack commonly numbered eight wolves in late January, but consistently numbered seven and ten, respectively, after mid-February. The ranges of these packs did not overlap; each occupied about half of the island. In addition, a trio of wolves operated in the Malone Bay-Siskiwit Bay area, with tracks suggesting that they ranged along the shore of the island as far as Chippewa Harbor.

The East Pack had its origins within the wolf population present the previous winter, since there was no ice bridge to Canada in the interval between the winter studies of 1971 and 1972. Besides the alpha male and female, there were six wolves that were uniform in size and body markings, virtually indistinguishable during observations or in photographs (Fig. 30). All six had the physical appearance of pups, presumably a litter from the alpha pair. This conclusion was supported by the fact that the alpha male was never observed chasing any of these six wolves away from the alpha female during the mating season.

In 1972 the alpha wolves in the West Pack were the same individuals as in 1971: the black male and the small, gray female (Fig. 31). Although no other wolves in the pack were identifiable from 1971, the presence of pups was not confirmed. In spite of a limp, the alpha female was able to retain her dominant status, though occasionally she had trouble keeping up with the other wolves in the pack. I saw this female for the last time in May 1972, when she walked, still limping, along the shore of an inland lake with the black male; her summer coat was quite reddish. In September 1972, the black male, a smaller, reddish wolf (probably the alpha female), and three gray wolves were seen lying on an open ridge (Coley Thede, pers. comm.). The black alpha male and female apparently died between September 1972 and January 1973. The black male was then at least 6.5 years old, since he was first observed in 1967. The alpha female was also at least 6.5 years old when she died, because she mated in 1968 and had to be at least 22 months old at that time. She was probably older, since it is rather unlikely that she could have reached the position of alpha female by her second year.

On 24 February, a total of 20 wolves was observed (packs of seven and ten plus the trio).
It was obvious from tracks on fresh snow that at least two single wolves were also present: one in the vicinity of Chippewa Harbor and one on the north shore near Little Todd Harbor. On 3 March, a wolf was seen following, and apparently trying to remain hidden from, the pack. Accordingly, the population totaled:

1973. The East and West packs, numbering 8 and 13 wolves, were again well defined. Spatial arrangements between the packs were similar, although their travels overlapped along the north shore of the island, where they visited each other's kills. On 24 and 25 February, a total of 23 wolves was seen. The final population estimate was:

The "Todd duo" was seen only three times. Judging from tracks, most of their activity was centered in the Todd Harbor area, although once they traveled from Little Todd Harbor to Lake Whittlesey. The loner, positively identified as a male, was seen only once, but tracks indicated that he ranged along the north shore from a point opposite Lake Desor to the northeast end of Amygdaloid Island, a distance of 40 km.

The leadership of the West Pack had changed completely since the previous year: a new alpha pair had replaced the black male and his limping mate. From their appearance and, especially, their behavior, four wolves in this pack were classed as pups. One of these disappeared from the pack around 20 February and was not seen again.

The East Pack numbered 12 or 13 for the entire winter study. Observations were hampered by the wolves' extreme avoidance of the study plane, probably resulting from disturbance by other aircraft earlier in the winter. The alpha pair had not changed from the previous year (Fig. 32). The number of pups was estimated from the increase in maximum pack size from 1972 to 1973, certainly a minimum figure since it assumes no mortality in the intervening year.

1974. Both main packs increased in size from 1973, and a duo and at least one single were present. The 31 wolves observed on 17 February provided the following minimum count:

Again, each pack inhabited its respective end of the island, but movements of the East Pack into West Pack territory increased the amount of overlap. Two wolves again were active in the Todd Harbor area, quite possibly the duo of 1973. In late January, the West Pack numbered 11 or 12. The increase in pack size from 1973 indicated the presence of at least four pups. Early in February, the pack broke into several smaller groups, and the alpha male, recognizable from 1973, was the dominant wolf in one group of four. His mate, the alpha female, was also in this group, though there was some uncertainty that this was the same female as during the previous year. A single wolf was tolerated by this group near a kill, and it was probably one of the original pack members. Several days later, another group of four was seen leaving a kill in the interior of the island. Two other wolves, soon joined by a single, were observed in the Washington Harbor area, and this trio stayed together for the remainder of the study period. Since no other wolves were seen in the West Pack's range, it appeared that the West Pack had broken into units of 4, 3, 3, and 1. The 4 wolves, one group of 3, and a single wolf reunited in March, forming a pack of 8.

On Isle Royale, wolf movements in winter vary from year to year, depending on snow conditions and the presence of shoreline ice. Extensive travel within a pack's range is necessary to locate vulnerable prey; such travel is lowest in years when vulnerable prey are abundant.
Natural topography determines the ease of travel in different areas of the island, with the principal avenues for wolf movements consisting of chains of lakes, shorelines, old beachlines of Lake Superior, and bedrock ridges. The shorelines of the island stand out as principal hunting areas for wolves. Moose often seek conifer cover in winter, and since most of the conifer cover on the island is located in predominantly spruce-fir forests near lake level (Linn 1957), moose densities in midwinter tend to be highest along lakeshores. This creates an optimum hunting arrangement for wolves. Of a total of 325 wolf-killed moose located in winters from 1959 through 1974, 45% were within 200 m of either Lake Superior or Siskiwit Lake, a large interior lake.

The distribution of wolf-killed moose from 16 winter periods further suggests that some areas of the island produce more favorable hunting conditions than others. Kill density is obviously high in the area of North Gap (mouth of Washington Harbor), Malone Bay, Chippewa Harbor-Lake Mason, and Blake Point. All of these locations receive a high level of hunting effort in winter, either because they are land masses lying between frozen lakes or bays, or because many travel routes intersect in those areas. Blake Point was hunted by a large pack in 1972 for the first time in several years, and perhaps a high proportion of vulnerable moose had been allowed to accumulate there. Other areas, notably the 1936 burn, have produced few kills in recent years, probably because of a gradual reduction in use of old burns by moose and unusually deep snow in several recent winters that restricted moose to more dense forest types.

Travel routes of the East and West packs during winters 1972-74 are shown in Figs. 33-35, along with locations of old and fresh kills. With the exception of the West Pack in 1974 (which fragmented and was impossible to track adequately), the routes shown represent continuous movements during the period of study. Variations in extent of travel and actual routes used are explained below in relation to snow and ice conditions.

EFFECT OF SHORELINE ICE

Travel around the perimeter of the island was extensive in 1972 and 1974, but quite reduced in 1973 (Figs. 33-35). In both 1972 and 1974, shelf ice was continuous around the island for most of the study period, and shorelines were used commonly by wolves (Fig. 36). In contrast, little shelf ice formed in 1973, and wolves had to travel onshore. Similarly, when there was no shelf ice in 1969, wolves made extensive use of the interior even though snow was exceptionally deep (Wolfe and Allen 1973). Occasionally, wolves venture onto ice that is very thin, especially if it is covered with snow. One morning in February 1974, the East Pack rested within 50 m of the edge of the shelf ice near Houghton Point. Tracks of one wolf led directly to the edge and then back to a resting place close to the other wolves. In the afternoon, the thin ice where the wolf had walked broke off and floated away.

EFFECT OF SNOW CONDITIONS

Relative to moose, wolves have a lighter foot loading (weight-load-on-track) and consequently receive greater support from snow of a given density. Weight-load-on-track for five wolves in the Soviet Union ranged from 89 to 114 g/cm² (Nasimovich 1955). In contrast, a cow and calf necropsied on Isle Royale in 1973 had foot loadings of 488 g/cm² and 381 g/cm², respectively.
Measurements by others range from 420 g/cm² to over 1000 g/cm², depending on the sex and age of the moose (Nasimovich 1955; Kelsall 1969; Kelsall and Telfer 1971). Moose rarely receive consistent support from crusts on the surface or within the snow profile (Kelsall and Prescott 1971), and wolves often have a considerable advantage when crusts are strong enough to support them. For example, in 1972, wolves on Isle Royale appeared to be supported by a crust located 20 cm below the surface of the snow, although moose calves broke through and moved with difficulty. Crusting conditions and frequent thaws (which increase snow density) during the entire 1973 winter study allowed wolves to travel with relative ease throughout the interior of the island (Fig. 34). Similar conditions prevailed during the first half of March 1974. At such times moose usually remained in areas of conifer cover, and their movements seemed greatly restricted.

Since wolves have relatively short legs, they are greatly handicapped by deep, soft snow. Nasimovich (1955) found that wolves sank to their chests in snow of density 0.21 or less, which describes essentially all fresh-snow conditions. Thus, wolves generally travel in single file through snow, and have been observed moving into this formation in response to as little as 20-25 cm of snow along lake edges (Fig. 37). Nasimovich also found that wolves had difficulty chasing ungulate prey when snow depths exceeded 41 cm, and with depths greater than 50-60 cm pursuit through untracked snow was almost impossible. In 1971, 41 cm of fresh snowfall on a 51-cm base precluded extensive travel by wolves in the interior of the island. Frequent fresh snow in 1972 kept depths in open areas above 75 cm, and, in spite of a crust within the snow profile, movements of wolves usually were limited to shorelines. The distribution of wolf-killed moose illustrates one effect of deep snow: when snow depth exceeded 75 cm, there was a significant increase in the number of kills located within 0.8 km of a shoreline, although part of this increase is related to changes in moose distribution.

Distance Traveled by Wolf Packs

Since most pack movements involve hunting, the amount of travel should roughly reflect hunting success and, indirectly, the relative abundance of vulnerable prey. Average distances traveled by Isle Royale packs between kills were quite variable in different years (Table 6), ranging from a low of 18.5 km/kill to a high of 54.1 km/kill.

TABLE 6. Travel estimates for wolf packs, 1971-74.

The highest travel per kill was shown by both East and West packs in 1973, a year when the average daily mileage was also highest for both groups. This suggests that moose vulnerability was lowest in 1973, a hypothesis supported by the fact that calves were killed least often in that year. Frequent snow crusts, which made travel for wolves relatively easy, contributed to the greater movements in 1973. Minimum movement between kills (18.5 km) was registered by the West Pack in 1971, when wolves had little trouble finding vulnerable prey, especially calves, along shorelines. Even shorter distances were reported by Kolenosky (1972), who found that a pack traveled only 14.7 km between kills of deer in Ontario during 1969, when deep snow rendered deer more vulnerable and probably reduced wolf movements.

Fundamentally, predator-prey interaction involves energy transfer from one trophic level to another, as from herbivore to carnivore.
The complexity of this energy transfer is directly related to the number of species in a particular food web. On Isle Royale, the wolf, the major carnivore, is entirely dependent on moose and beaver, which are primary consumers of vegetation. This is a relatively simple system compared to the food web described by Cowan (1947) for the Rocky Mountain national parks of Canada, where wolves depended heavily on elk but also killed deer, moose, bighorn sheep, caribou, and, at certain seasons, snowshoe hare and beaver. Isle Royale wolves prey on moose at all times of the year, while beaver are available only during the ice-free season. While quite variable from 1971 to 1974, the entire food base of the Isle Royale wolves was probably higher in the early 1970s than during the previous decade, owing to increased vulnerability of moose (at least in winter) and an increased beaver population. This is probably why the island was partitioned into two pack territories after 1971.

The Wolf as a Predator of Big Game

Food habits of wolves have been studied intensively, mainly because of human concern for the prey species, domestic or otherwise. Wolves are well adapted both physically and behaviorally for predation on large mammals, and an absence of large ungulate prey may adversely affect resident wolf populations, especially pups (see Pup Production). Food habits of wolves seem to be most variable in tundra areas, where wolves typically prey on a single ungulate species. Clark (1971) found wolves in central Baffin Island to be almost completely dependent on caribou, while Tener (1954) indicated snowshoe hares as the principal prey species for wolves on Ellesmere Island. Wolves denning in the northern Brooks Range in Alaska often ate small rodents, birds, fish, and insects, although Stephenson and Johnson (1972) believed that these wolves nonetheless depended primarily on ungulates. Pimlott et al. (1969) pointed out that wolves have never been shown to thrive for a significant period on prey smaller than beaver, and most biologists agree that wolves are characteristically dependent on large mammals.

Moose, white-tailed deer, and beaver are the principal prey species of wolves in mainland areas adjacent to Lake Superior (Thompson 1952; Stenlund 1955; Pimlott et al. 1969; Mech and Frenzel 1971). Prey size and numbers determine which species are most important for the wolf. Where deer are available, they are highly preferred (Pimlott et al. 1969; Mech and Frenzel 1971). Since beaver are small and, presumably, easily killed by wolves, predation on them is determined largely by availability.

Nonwinter Food Resources

Analysis of wolf scats collected in 1973 showed that Isle Royale wolves preyed on beaver to a much greater extent than a decade earlier (Fig. 38). A sample of 554 wolf scats was collected in 1973 from homesites and game trails used by both packs (Table 7). Most of these were from 1973, although a small proportion of the scats collected at the East Pack den were probably from 1972.

TABLE 7. Occurrence of prey remains in wolf scats, 1973.

Beaver and moose calves together constituted 90.0% of the food items in nonwinter scats. Remains of beaver (hair and occasionally claws) were found in 75.8% of the total scat sample and made up 50.5% of 831 prey occurrences. Moose hair occurred in 69.3% of the scats and comprised 49.5% of the total food items. Of the identifiable moose remains (in scats deposited before the change in calf pelage in early August), 85.7% were from calves.
Hare and bird remains were identified from only one scat and are unimportant as prey. While the 1973 blueberry crop was the best remembered by many long-time island residents, fruit was not found in any of the scats. Vegetation (mostly grass) was found in 6.1% of the scats, and unidentified seeds in 2.2%. These nonanimal items were not tallied in Table 7. Murie (1944) suggested that grass may act as a scouring agent against intestinal parasites, a hypothesis supported by his discovery of roundworms among blades of grass in some scats. An 18-inch section of tapeworm (Taenia sp.) was found in a fresh Isle Royale wolf scat containing grass, and Kuyt (1972) reported a similar finding.

INCREASED PREDATION ON BEAVER

The incidence of beaver remains in fresh scats from 1958 to 1960 was 13.1% (Mech 1966) (Fig. 39). In the following 3-year period, beaver occurrence was essentially the same, 15.6% (Shelton 1966). Although there were no systematic scat collections in subsequent years, field examination of scats found incidental to other work showed no obvious changes (Jordan et al. 1967; Wolfe and Allen 1973). However, the 1973 data clearly demonstrate a significant increase in predation on beaver (ts = 13.7, P < 0.001) since 1958-63 (Table 8).

TABLE 8. Beaver occurrence in summer wolf scats, and beaver population trends.

In the decade between the scat analyses, the beaver population doubled, with the estimated number of active colonies (determined from aerial count) increasing from 140 in 1962 to 300 in 1973 (Shelton, unpubl. data). During the same period, wolf predation on beaver tripled, with the percentage of beaver in wolf scats increasing from 14.4% to 50.5%.

Pimlott et al. (1969) found that the frequency of beaver in wolf scats in the Pakesley area of southern Ontario was 59.3%, compared to 7.1% in nearby Algonquin Park. A study by Hall (1971) showed that the beaver population in the Pakesley area was at least three times more dense than that of Algonquin Park. Clark (1971) pointed out that increased predation on beaver at higher densities could result from a shift in hunting effort to the more abundant beavers or simply from an increased frequency of encounters between wolf and beaver. Hall (1971) reported an increase in predation on beaver and a decrease in predation on deer in Pakesley during the 1960s, corresponding to changes in densities of these two prey species. He believed this indicated a shift in hunting effort. On Isle Royale, known wolf trails often parallel water courses and pond edges, but we do not know whether predation on beaver is limited to chance encounters or whether purposeful hunting is important. Field observation did not indicate depression of beaver numbers around wolf homesites. Some of these were adjacent to active beaver ponds, suggesting that wolves did not spend a lot of time stalking and hunting beaver.

The relative levels of predation on moose and beaver as indicated by scats were consistent throughout spring-fall 1973 (Table 9). Shelton (1966) found a slight increase in beaver occurrence in wolf scats in the fall, when beaver are actively cutting winter stores of food and consequently are more vulnerable. Although scats from the last East Pack rendezvous did not indicate any increase, most of the scats from this area probably dated from August and early September, before intensive cutting begins.
TABLE 9. Incidence of beaver and moose remains in wolf scats from various homesites and associated trails, 1973.

While it is difficult to estimate the current importance of beaver to Isle Royale wolves in terms of biomass or numbers of prey, a comparison with other studies provides a rough assessment. The highest reported occurrence of beaver in wolf scats came from studies in the Pakesley area of Ontario. Beaver remains were found in 62% of the scats examined in 1960, and beaver comprised 59% of the total food items (Pimlott et al. 1969). By 1964, the frequency of occurrence of beaver in wolf scats in Pakesley had increased to 77%, and beaver were regarded as the primary summer prey of wolves (Hall 1971; Kolenosky and Johnston 1967). The incidence of beaver in scats from Isle Royale wolves (76%) and the percentage of beaver in total food remains (51%) are second only to the reported data for the Pakesley area.

A high beaver population on the northeastern half of the island may have been an important factor allowing rapid growth of the East Pack (from 8-10 wolves in 1972 to 16 in 1974). Over a 3-year period, a minimum of 13 pups survived to midwinter in this pack. The general appearance in July of the 1973 pups, and the rapid growth of at least two of them between observations in July and August, suggest an abundant food supply. The dense beaver population probably has been an important factor ensuring high pup survival at a time when the production of moose calves, the other principal summer prey, was subnormal.

Winter Predation Patterns

Winter food habits, determined from direct observations and aerial tracking, showed that wolves on Isle Royale continue to subsist in winter almost entirely on moose. The snowshoe hare population was relatively low during this study, and while wolves occasionally flushed hares during observations, they never gave chase. No indications of wolf predation on hares in winter were found. Beaver were available only in rare instances when they ventured from beneath the ice to cut food. During mild weather between January and March 1973, we discovered two wolf-killed beaver. Likewise, during a thaw in March 1974, one or more beaver were killed on the Big Siskiwit River. Although wolves rarely find active beaver in winter, they show great interest in beaver lodges and dams encountered during their travels (Fig. 40). The East Pack even dismantled a lodge in February 1973, near Harvey Lake. The wolves had killed two moose within 100 m of the lodge, and their activities while in the area for several days centered on digging out the lodge.

In forested regions such as Isle Royale, wolves depend heavily on their sense of smell for prey detection. Of 30 observations of wolves detecting moose from 1972 to 1974, it was possible to determine the method of prey detection 17 times. In 10 cases in which wolves caught the scent of a moose, they either approached directly upwind or turned toward their prey after crossing downwind of the moose. Mech (1966) reported that wolves seemed to sense prey 2.4 km away, underscoring their olfactory sensitivity. Wolves visually detected moose six times, and once they followed a fresh moose track to the animal.

Mech (1966, 1970) provided an extensive discussion of the results of moose-wolf encounters observed in the first three winters of the project. The basic pattern he observed has not changed significantly.
Moose that stand their ground when wolves approach are not killed; all observed encounters on Isle Royale that ended in a kill occurred after the victim initially ran from wolves. For unknown reasons, vulnerable moose do not stand and face wolves when first approached. While chasing a moose, wolves apparently respond to vulnerability cues that are not obvious to aerial observers; sometimes they quit immediately, while at other times the chase might last for long distances (Fig. 41). The primary point of attack is the hindquarter region of the moose, where wolves can dash in and out and stand the best chance of avoiding the quick strikes of the hooves. When wolves inflict serious wounds, they are often content to wait until the moose weakens. In February 1972, however, the East Pack, after spending most of a night close to a wounded adult near Lake Richie, abandoned the animal around daybreak. The moose continued to stand in heavy cover that morning, but by afternoon was lying on its side. This was an 8-year-old cow in apparently good condition, with abundant marrow and visceral fat reserves, and pregnant with one fetus. She had deep wounds around the anal opening and had apparently lost a considerable amount of blood. In the next 5 weeks, the East Pack never returned to the carcass, but by early May the wolves had consumed it entirely.

Mech (1966) found that wolves have a low rate of hunting success, presumably because most of the moose they encounter are not vulnerable. Of 77 moose tested by wolves, 6 were killed. From 1972 to 1974, 38 moose were tested during observations, and only one was killed. A schematic representation of the results of moose-wolf encounters in the two periods is presented in Fig. 42. Observations of hunts in the recent period were too few to determine changes in hunting success.

EFFECT OF SNOW CONDITIONS

Crusts within the snow profile or on its surface provide support for wolves but interfere with moose movements. Since crusts are frequent on Isle Royale, deep snow often results in increased hunting success for wolves. This was apparent in 1969, when more kills were found than in any previous winter (Appendix L). An increased kill rate on Isle Royale was also evident in the "deep snow" winters of 1971 and 1972. In all three of these winters, the degree of carcass utilization was noticeably less, indicating higher hunting success (Wolfe and Allen 1973; Peterson and Allen 1974) (Fig. 43).

Increased calf vulnerability due to reduced mobility in deep snow is reflected in a high kill of moose calves when snow depths exceed 75 cm (Fig. 44). Most calves are killed near shorelines, which are traveled heavily by wolves when snow is deep and shelf ice is present. Calves may be so restricted that they are left in shoreline areas by their mothers, who have gone elsewhere to feed. In 1972, the West Pack encountered two adults and a calf on the south shore. Both adults ran along the shore, but the calf headed inland and was pulled down by wolves within 100 m. Either the calf's mother was behaving in a highly abnormal fashion or she was not present. In 1971, we saw two calves without a mother present; one of these was killed by a single wolf (Peterson and Allen 1974).

WINTER FOOD AVAILABILITY

Estimates of food consumption by wild wolves usually are derived by multiplying the average weight of prey by the number killed in a specific period (Mech 1970).
When calculated in this manner, food availability rather than actual consumption is estimated, since utilization of carcasses varies considerably with the size of the pack, the size of prey killed, and the ease with which additional prey may be taken (Fig. 45). In winters when moose are more vulnerable to wolves, the kill rate may go up while the corresponding degree of carcass utilization declines. The calculated availability of food for Isle Royale wolves is most useful for comparisons of hunting success.

Whole weights of several Isle Royale moose (Appendix E) provided the basis for estimates of the potential food contributed by each bull, cow, and calf (assumed average whole weights of 432, 364, and 159 kg, respectively). The primary inedible portions of a moose are the stomach contents and some of the hide and skeleton. The stomach of a 400-kg bull necropsied in February weighed 65 kg, or about 16% of its body weight. Inedible stomach and intestinal contents of adults were assumed to weigh 68 kg, and an additional 34 kg were subtracted for portions of hide and skeleton usually left uneaten. Stomach and intestinal contents of calves were assumed to weigh half those of an adult, or 34 kg. Although wolves sometimes eat the entire skeleton and hide of calves in winter, 11 kg were subtracted for parts usually left uneaten. Thus the potential food contributed by each bull, cow, and calf killed by wolves is a calculated 330, 261, and 114 kg, respectively. Adults of unknown sex were assumed to contribute 295 kg. Carcasses of moose collected for necropsy were consumed by the West Pack; weights of these carcasses were included in the calculations for the West Pack on the assumption that the wolves would otherwise have killed a moose themselves (Fig. 46).

From 1971 through 1973, the calculated availability of food for both East and West packs varied between 6.2 and 10.0 kg/wolf/day, while in 1974 the daily figures for the East and West packs dropped to 5.0 and 4.4 kg/wolf, respectively (Table 10). The drop in availability of food in 1974 stems largely from the increase in pack sizes from 1973 to 1974 and the fact that a high percentage of the kills were calves. Winter availability of food on an individual basis declined for each pack through the period of study, partially reflecting a decline in ease of prey capture from the winters of 1971 and 1972, when unusually deep snow contributed to a high kill rate.

TABLE 10. Estimates of food availability for West and East packs, 1971-74.

Food available to pack members from 1971 through 1973 on Isle Royale was greater than that indicated for the former large pack (Mech 1966). Using the moose weights given above, that pack had available 4.9, 3.8, and 5.1 kg/wolf/day in 1959, 1960, and 1961, respectively. Food available to Isle Royale wolves is well above the minimum amount required in winter. Mech (1970) estimated the daily food requirement for a wild wolf at about 1.7 kg, on the basis that active domestic dogs need about this amount. Growing wolf pups and captive adults can be maintained on this amount of food (Kuyt 1972; Mech 1970). Food availability for an Ontario wolf pack was estimated at 3.7 kg/wolf/day during one winter season (Kolenosky 1972). A Minnesota wolf pack increased after a winter with 5.8 kg/wolf/day of available food, remained the same size at 3.6 kg/wolf/day, and decreased at 3.4 and 3.0 kg/wolf/day (Mech, in press).
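To make the arithmetic behind such figures concrete (an illustrative example with round numbers of our choosing, not data from Table 10): a pack of 10 wolves that killed one cow, representing 261 kg of potential food, every 4 days would have 261 / (10 × 4) ≈ 6.5 kg/wolf/day available, in line with the 1971-73 values and well above the 1.7 kg/day maintenance estimate.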
The food economy of loners (single wolves) and small groups is difficult to study, because the extent of their movements and feeding patterns is usually unknown. Although Jordan et al. (1967) described some loners as "gaunt" and implied that most led a rather tenuous existence, this may not be the case in winters of abundant prey. For example, in 1971 a loner subsisted for several weeks on three moose carcasses in the Malone Bay area and apparently moved very little. Likewise, the Todd duo killed two moose and fed on two old kills in a 15-day period in February 1974, rarely moving out of the Todd Harbor area.

Long-term Changes in Food Resources

Two socially stable wolf packs had not been observed on Isle Royale prior to 1972; as discussed earlier, their appearance probably resulted from an increase in the food base of the wolf population. The beaver population increased in the 1960s, as did wolf predation on beaver during the nonwinter months. Since production of moose calves was noticeably lower in recent years than in the early 1960s, beaver assumed a position of significance by supplying food during the critical pup-rearing season. While the moose population also appeared to increase during the 1960s, this would not in itself provide an immediate increase in prey for wolves. The food supply for wolves depends on the density of vulnerable moose rather than absolute moose densities. Thus, a moose population in the early stages of a natural decline may provide wolves with a maximum number of available prey. This was apparently the case on Isle Royale in the early 1970s.

The establishment of the East Pack has probably brought about a greater utilization of prey within its territory, where previously only loners or packs of two or three wolves lived. For example, the number of moose killed on the northeast half of the island during the winter study period increased greatly from 1971 to 1972, after the appearance of the East Pack (Fig. 47). In its first three winters of operation, the pack killed nine moose on the Blake Point peninsula, about 8.5 km² in area, during a total of 18 weeks of aerial tracking. Ground search turned up eight additional kills on this peninsula. Moose densities in midwinter in this area commonly exceed 4-6/km².

Like physical characteristics, an animal's behavior has been shaped by rigorous selection pressures, resulting in behavior patterns that are closely adapted to a particular function in the ecosystem. A comparison of the red fox and wolf provides a simple illustration. While both are canids, foxes exhibit much less diversity in behavioral expression and communication than do wolves (Fox 1970). The fox, a semi-solitary creature, preys extensively on game smaller than itself and, at certain seasons, depends heavily on plant fruits and carrion. Therefore, in terms of food acquisition, there would be no advantage for young foxes to remain with their parents in a social group. The behavioral repertoire of foxes is less diverse, yet sufficient for their more solitary way of life. Cooperation among wolves in a pack, however, is essential to their ecological role as predators of large ungulates. Consequently, a complex dominance hierarchy and an elaborate array of behavioral expression have evolved among wolves, allowing them to live in close association as group-hunting carnivores.
The organization of wolf populations into packs does not fully explain wolves' diverse means of expression and communication, since other group-hunting canids, notably the bush dog (Speothos venaticus) of South America and the African wild dog (Lycaon pictus), do not exhibit a similarly high level of behavioral expression (Fox 1971; Kruuk 1972). While little is known of the ecology of the bush dog, the wild dog of Africa exists year-round in a cohesive pack, with social bonds apparently maintained by highly ritualized food-begging behavior (Kühme 1965). Individuals within a wolf pack are frequently separated, however, especially in summer, when most hunting is done individually or in small groups. This led Fox (1971) and Kruuk (1972) to suggest that the well-developed means of expression among wolves is not only important in coordinating group activities and maintaining order in the pack but also provides for more effective reintegration of individuals into the group after separation.

The territorial nature of wolf packs helps maintain pack integrity and seems to apportion space among resident packs according to the availability of food. Mechanisms of territory maintenance may include scent-marking, howling, and agonistic behavior during rare confrontations between packs.

In spite of extensive field studies of the wolf, many generalizations concerning behavior within and between packs are poorly documented in the wild, primarily because observations are hampered by the wolf's environment and mobility. Lengthy ground observations have been possible only at den sites in tundra regions (Murie 1944; Haber 1968; Clark 1971); aerial observations, a primary research tool, are limited in scope. Insight into the ecological significance of wolf behavior patterns and a proper appreciation of their variability can be gained only by intensive study of many packs in different ecological settings.

Social Hierarchy Within Packs

The basic social structure of wolf packs is well understood from studies of captive wolves (Schenkel 1947, 1967; Rabb et al. 1967). Behavioral interaction within a pack occurs in a framework of dominance relationships, or social hierarchy. A dominant (or alpha) male and female are the central members of a pack, and the other wolves constantly reaffirm their subordinate status through postures of submission directed toward the dominant individuals. Males and females have separate dominance rankings, and the subordinates have definite dominance relationships among themselves, although interaction is less frequent and relationships are less well defined. Aggression is channeled into ritualized behavior patterns within the dominance framework, reducing the amount of direct conflict within the pack and promoting social order and stability.

Alpha wolves provide leadership during travels of the pack, initiate many pack activities, and sometimes exert considerable social control over the activities of subordinate wolves, notably their sexual behavior. Restriction of courtship behavior among subordinates, together with well-developed mate preferences among adults, is thought to reduce the potential number of breeding pairs in a pack, often resulting in the birth of only a single litter. The whole pack participates in gathering food and caring for the young, and this contributes both to the survival of the young and to cohesion within the pack.

EXPRESSION OF DOMINANCE AND SUBORDINATION

Facial expression, tail position, and posture combine to indicate subtleties of mood and desire.
These indicators provided the basis for determining the social position of certain wolves in the packs on Isle Royale, especially the alpha wolves (Fig. 48). Tail position is easily seen from the air and is thus an obvious indicator of wolf status. The importance of the tail in communication probably lies in the fact that the hindquarters and anogenital region have a considerable function in olfactory and visual expression (Kleiman 1967; Schenkel 1947). Presentation of the anal region by a raised tail indicates a position of dominance, while a lowered tail covering the anal region (during interaction with other wolves) is a component of submissive behavior. Postural changes reinforce these expressions: a dominant wolf stands erect with tail raised, while an extremely subordinate animal may pull its tail between its legs and lower its rear end to the ground (Figs. 49, 50).

While the movements and positions of the ears, eyes, forehead, nose, and mouth of a wolf can be combined to produce subtle variations of expression (Schenkel 1947), most are not observed by humans except at close range. The ears of dominant wolves are held forward, while those of subordinate wolves are turned back or flattened against the head. Teeth are more exposed as the intensity of a threat increases. Wolves of high social standing often stare directly at another wolf as part of an expression of dominance or a mild threat, and subordinates respond by turning the head away and avoiding direct eye contact.

Inferior wolves constantly show submissive behavior toward dominant wolves. Schenkel (1967:324) defined submission as "an impulse and effort of the inferior towards friendly and harmonic social integration." He described two basic types of submission in wolves, "active" and "passive." Active submission often is seen as an element in greeting behavior and is the most obvious form of expression during the "group ceremony," described below. Passive submission is usually shown by an inferior wolf in response to a threat from a superior individual. An important element in passive submission is "inguinal presentation," in which the wolf lying on its side raises its hind leg, thus exposing its inguinal region to the dominant wolf. Passive submission, and inguinal presentation in particular, seem to inhibit aggression in dominant wolves and thus are considered appeasement or "cut-off" gestures (Fox 1971).

Many times on Isle Royale, active agonistic behavior, or even mild threats, from a dominant wolf caused subordinate individuals to fall into passive submission. The dominant wolf usually would reduce the level of its threat and either investigate the prone wolf or simply stand over it for a minute or more. Slight movement by the inferior wolf usually brought a quick snap from the dominant. The subordinate wolf usually lay still, often with hind leg raised, until the dominant wolf walked off. Once, in the West Pack, a subordinate wolf maintained a position of inguinal presentation after the black alpha male walked away, and even rolled over and raised the other hind leg when the alpha male wandered behind him. Any other movement by the inferior male brought immediate punishment from the alpha male.

Members of a pack often congregate in a "group ceremony," a greeting centered around the alpha animals. Subordinates crowd around the dominant wolves and show exuberant active submission and much body contact. Group ceremonies were observed 34 times among Isle Royale wolves from 1972 to 1974.
Most commonly, they occurred immediately after the pack arose from sleep or when one or several members returned to the pack after a brief absence. Frequently, active submission toward an alpha by one wolf brought the rest of the pack running over to join in the proceedings, and sometimes a group ceremony ensued when wolves clustered about an alpha inspecting an inferior wolf lying on the ground. Group ceremonies also were seen when a pack "regrouped" after an unsuccessful chase of a moose. Such ceremonies often terminated with threats directed toward an inferior wolf by an alpha, perhaps in response to overenthusiastic greeting behavior. Group ceremonies provide a means of reaffirming dominance relationships, probably reinforcing both the status of alpha wolves and existing social bonds. Additionally, they may provide reassurance for pack members at critical periods; for example, when the East Pack traveled outside of its normal territory in 1974, subordinate wolves constantly crowded about the alpha wolves in a group greeting.

Alpha wolves sometimes retain their dominant position for several years and may be instrumental in maintaining a stable pack. Jordan et al. (1967) recognized the alpha male in the large pack on Isle Royale from 1964 to 1966 and found that pack formation in 1966 coincided with his death. There has been relatively little turnover in the alpha positions in the West and East packs (Fig. 51). The small, gray female with the deformed left front leg held the alpha position for at least 5 years (1968-72) in the Big Pack (West Pack). The black male was associated with this female during all 5 years, apparently first as a subordinate (beta) male with special privileges allowing him to travel and rest near the alpha pair (Wolfe and Allen 1973), and finally as alpha male in 1971 and 1972. This was the only case from Isle Royale in which the previous history of an alpha animal was known.

None of the recognizable alpha wolves on Isle Royale has been seen after a known change in its dominant status, but whether their deaths preceded or followed the change in leadership is unknown. Jordan et al. (1967) found circumstantial evidence that the alpha male in the large pack in 1966 had been killed by his associates. During the present study, three alpha wolves disappeared; all three were last seen in summer. While the alpha male in 1966 apparently was killed after he developed a limp, the alpha female in the West Pack in 1972 managed to maintain her dominant status in spite of a limp that occasionally prevented her from retaining her customary position at the front of the pack.

Winter observations on Isle Royale indicated that alpha wolves usually led the pack during its travels. Of 61 cases in which it was possible to determine whether the alpha male or female led the pack, an alpha wolf was first in line 70% (n = 43) of the time. In 33 cases the alpha female was first, the alpha male led in 6 cases, and 4 times the two dominant wolves were side by side. In many instances the alpha male showed obvious sexual interest in the alpha female and consequently followed her.

Alpha wolves, usually at the front of the pack, normally choose the direction of travel and specific travel routes. This clearly was the case during an observation of the East Pack in 1974. The alpha female led the pack through the narrows between Wood Lake and Siskiwit Lake, then lagged behind to sniff an old moose track.
Other wolves then assumed the lead position until they reached the first peninsula, where they stopped and waited for the alpha female to move to the front. She immediately set the direction of travel, led the way briefly, then fell back into a position in the middle of the pack. The same procedure was followed at the next point of land. A clear example of decision-making on the part of an alpha animal was observed in 1974, when the East Pack encountered a scent post of the West Pack. After the pack had examined the scent mark, the alpha female reversed the direction of travel and led the pack back to more familiar range.

In the absence of alpha leadership, subordinate wolves may be indecisive. Once in 1972 we observed six probable pups in the East Pack by themselves when the alpha pair had dropped back several miles. Twice a wolf stopped and watched its back-trail. When the six wolves emerged on the shore of an inland lake, they vacillated for 15 minutes, sniffing snowed-in tracks and making false starts, until finally all started off in the same direction.

Alpha wolves appear to provide leadership at critical times, such as hunting, encountering novel stimuli, and perhaps contacting neighboring packs. The position of the alpha wolves was observed in only six encounters with moose, but an alpha wolf led the pack in four of these cases. Certainly the wolves at the front of a pack would be the first to detect and chase prey. In February 1973, when the West Pack was under observation at a moose carcass across the harbor from Windigo, the alpha male detected us. He trotted excitedly toward shore, then back to arouse other members of the pack. After a group ceremony centered around him, he led the pack into thick cover near shore.

Later in the same winter we watched the East Pack file along the ice on the north shore of Rock Harbor. When they reached open water, they moved onshore and slowly worked their way along the slippery, ice-coated shoreline, finally congregating on a small point. Rounded chunks of broken "pancake" ice ranging up to several feet in diameter had frozen loosely together adjacent to the point. One wolf reached out with a foot and pushed on an ice chunk, withdrawing its foot quickly. Next the alpha male, together with an unidentified wolf, walked out on a large chunk of ice and stood for a few seconds. Suddenly they bolted back to shore, apparently after the ice had shifted. The alpha male then led the pack away, continuing the course onshore.

No observations were made of confrontations between different packs of wolves on Isle Royale. However, we might expect that the alpha animals would take a leading role in such a situation, much as they did in two observed cases when an alpha wolf led the pack in chasing a fox. Although the alpha male and female did not lead the East Pack during its first foray out of its territory on 15 February 1974, they were obviously key figures. As the pack ventured across Siskiwit Bay, most of these wolves probably were encountering the area for the first time, since this pack had not been observed southwest of Malone Bay in the previous two winters. In addition to an unusual amount of scent-marking as the pack crossed to Houghton Point, the subordinate wolves were constantly clustered around the dominant pair in a sort of mobile group ceremony. Perhaps the intense, active submission directed toward the alpha pair resulted from uncertainty and excitement among subordinate wolves.
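The leadership tallies reported above are internally consistent; the short Python sketch below, with all counts taken from the text, simply reproduces the reported 70% figure.

```python
# Tallies, from the text, of which wolf led the pack in the 61 winter
# observations in which the leader could be determined.
alpha_female_first = 33
alpha_male_first = 6
alphas_abreast = 4          # the two dominant wolves side by side

alpha_led = alpha_female_first + alpha_male_first + alphas_abreast
print(alpha_led)                    # 43 cases
print(f"{alpha_led / 61:.0%}")      # 70%, as reported
```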
Courtship and Breeding

Largely because of complex social relationships, such as mate preferences and a dominance hierarchy, the breeding potential of a wolf pack rarely is realized. Studies of captive wolves have demonstrated that breeding within a pack usually is limited to a few animals (Rabb et al. 1967), and a similar situation has been observed in packs on Isle Royale. In studying wild wolves, it should be remembered that sexually immature pups may account for a sizable proportion of a pack, a partial explanation for limited breeding activity. Mate preferences are recognized clearly on Isle Royale, and the incidence of courtship among Isle Royale wolves indicates that mating is most likely to occur between the dominant male and female in a pack (Fig. 52, Table 11). It is also obvious that alpha wolves interfere with courtship attempts of subordinates. These topics will be presented in depth later in this section.

TABLE 11. Minimum number of breeding pairs present, 1971-74.

The primary function of courtship behavior is to establish and maintain a pair bond. Unlike the males of many vertebrates, male wolves play an integral role in feeding and raising the pups, and a close relationship between a male and female wolf remains important on a year-round basis. Although there are records of more than one litter born in a pack (Murie 1944; Haber 1968; Clark 1971), one litter per pack usually is the rule (Van Ballenberghe and Mech 1975) and probably a safe assumption when pack sizes are not large. Other adults in a pack help raise the pups, enhancing their chances of survival. On this basis we can predict that natural selection would favor offspring from dominant, breeding wolves that interfered with mating attempts of subordinates in a pack.

In species with well-developed threat behavior, such expressions are well hidden during courtship, since they would be detrimental to the formation of a close relationship between male and female (Eibl-Eibesfeldt 1970). Consequently, courting wolves display much greeting behavior, play soliciting, and submissive postures, all of which tend to decrease "social distance" (Fox 1971) (Fig. 53).

Courtship behavior was observed among Isle Royale wolves throughout the annual winter study periods, with the peak in sexual activity usually sometime in February. During 52 hours of aerial and ground observation in the winters from 1972 through 1974, courtship behavior was recorded 71 times. One "instance" of courtship behavior consisted of a well-defined behavior or sequence of behaviors, such as mounting or a mutual greeting between mates. Behavior patterns that were considered courtship in at least some contexts are described in Appendix F. Most of the courtship behavior recorded consisted of males mounting females or males examining the genital region of females (genital snuffling). Undoubtedly, these behaviors are somewhat overrepresented because they are so easily recognized. Greeting and play behavior also were commonly seen but were not recorded as courtship unless there were other indications of sexual interest.

Subtleties of behavioral expression are not seen readily from aircraft. When observing wolves whose sex, age, and relationships are unknown or poorly understood, some ambiguous behavior is difficult to classify. For example, it was not uncommon to see one wolf approach another with tail flagged, posture erect, and ears and eyes forward, and the second wolf walk off in a generally submissive posture, tail tucked between its legs.
This usually indicated a dominance display, but similar behavior was seen when a male tried to court an uncooperative female. Such ambiguities were resolved by carefully watching for subsequent interaction between the same individuals and their relationships to other wolves. Fortunately, most displays of dominance, submission, and courtship involved recognizable alpha wolves and were interpreted with little difficulty.

Mate preference can be a powerful limitation on the amount of breeding within a pack. Clearly, if there is no mutual courtship between a male and female wolf, a mating between the two is unlikely. The breeding potential of the Brookfield wolves was reduced considerably by such "one-sided" courtships (Rabb et al. 1967).

Pair bonds between mates may be very stable from year to year, although wolves will mate with other individuals if their preferred mate is not available (Rabb et al. 1967). Wolfe and Allen (1973) indicated a stable pair bond between the alpha male and female in the Big Pack (West Pack) from 1968 through 1970. This male had disappeared by 1971, but the same female mated in 1971, and presumably in 1972, with the black male that assumed the alpha position. The East Pack provided another example of a female accepting a new mate after the probable death of the alpha male. In 1974 the new alpha male courted the incumbent alpha female, who accepted his approaches with friendly greetings, indicating probable receptiveness. In all the packs observed from 1971 through 1974, the alpha pair either mated or showed mutual courtship and was considered a bonded pair.

Studies of the Brookfield wolves (Rabb et al. 1967) showed that both males and females sometimes courted members of the opposite sex that were unreceptive, and in these cases courtship action was ignored or rebuffed with threats (Figs. 54, 55). My own observations at Brookfield indicated clear differences between the behavior of females that simply were not ready for copulation and those that were totally rejecting a male. A female that temporarily was rejecting a male responded to his advances with mild threats, or simply pulled away, and elements of greeting and play behavior were still seen between partners. This was typically the situation between alpha males and females on Isle Royale, and also within a subordinate pair in the West Pack in 1972 that eventually mated. However, a female that was unreceptive to a particular male responded to his courtship attempts with obvious threats and showed little friendly behavior, except perhaps in the context of a group greeting ceremony. A subordinate male in the West Pack in 1973 frequently showed interest in a female, but she always replied with aggressive snapping, never exhibiting any friendly behavior toward the male. A mating between these wolves seemed unlikely.

Little is known of the development of mate preferences among wild wolves, but among Brookfield wolves there were strong indications that future mate preferences are crystallized during the juvenile period (prior to sexual maturity at 22 months). Also, it appears that a young wolf generally develops a preference for the alpha wolf of the opposite sex, or at least a dominant individual (Rabb et al. 1967; Woolpy 1968). It is significant that the alpha female in the West Pack in 1971 accepted the black male as her new mate, a wolf that had enjoyed a close relationship with the alpha pair for at least 3 years.
RESTRICTION OF SEXUAL BEHAVIOR AMONG SUBORDINATE WOLVES

During three breeding seasons on Isle Royale, 69% of the observed courtship behavior (n = 71) occurred between alpha wolves. Studies of captive wolves have shown that dominant wolves restrict and, in some cases, eliminate courtship behavior and mating among subordinates (Schenkel 1947; Rabb et al. 1967; Woolpy 1968). Since the reduction of mating among subordinate adults could contribute to population regulation, it is important to try to determine the effectiveness of such restrictions among wild wolves.

Rabb et al. (1967) noted an increase in agonistic behavior between dominants and subordinates during the breeding season of Brookfield wolves (Fig. 56). Observations in February on Isle Royale indicated frequent threats to subordinate wolves by the alpha male and female. In many cases a strong assertion of dominance seemed to be stimulated directly by courtship behavior among subordinates. This was further indicated by the lack of overt threats from the alpha male in the East Pack in 1972, when the pack was believed to consist primarily of an alpha pair and their offspring; the latter would have been sexually immature in their first winter. In 1973, however, when pups of the previous year could have been sexually mature, the alpha male twice chased other wolves away from the alpha female. Both instances occurred on 16 February, when frequent genital sniffing by the alpha male suggested that his mate was in heat.

The West Pack provided the best opportunity to record interference of alpha wolves in the sexual behavior of subordinates. The black alpha male in this pack was very possessive of his mate, the alpha female of long standing (Fig. 57). One of my first observations of this pack was from the ground at Windigo on 29 January 1972. As the wolves rounded Beaver Island, there was much playful sparring as the pack moved along the ice, and at one point a subordinate wolf mounted the alpha female. She eventually squirmed away and snapped at the other wolf, and this brought the black male on a run. He knocked the subordinate over with a body slam and then mounted the alpha female himself. This was typical of his behavior when other wolves approached his mate.

The most interesting interaction in 1972 spanned several days, beginning 24 February. A subordinate pair managed to stay in the West Pack and mate in the presence of the alpha pair, in spite of repeated punishment from both the alpha male and female. Identification of the subordinate pair was not always positive: the male was a thin-tailed wolf that looked like one other wolf in the pack, and the female was one of three full-tailed wolves in the pack. This complicated the interpretation of observations made at different times, but since there was never any indication of sexual interest in more than two subordinate wolves, it will be assumed in the following account from my field notes that the thin-tailed male and the full-tailed female were consistently the same individuals.

Significant in these observations was the very aggressive attitude of the alpha female toward the full-tailed female when she was courting the thin-tailed male. Having been chased from the pack, however, the subordinate female managed to reinstate herself and mate successfully in spite of her "punishment." The black male usually did not interfere with the subordinate male's courtship activities and showed brief aggression only when the subordinate pair actually tied.
In this case the discouraging influence of the alpha pair was not sufficient to prevent mating of subordinates, although the length of their copulatory tie was shorter than normal.

A subordinate pair was present in the West Pack in 1973, and the alpha pair actively interfered with their courtship activities; on 6 February they were observed near a carcass at Windigo. While the efforts of the alpha pair to discourage courtship in this subordinate pair were persistent, of greater importance was the subordinate female's apparently irreversible lack of interest in the advances of the male.

Before the West Pack fragmented in early February 1974, a presumably subordinate pair was observed mating while the pack rested nearby. The status of the alpha male from 1973 was not established before the mating took place, and immediately afterwards he behaved in a very subdued manner, walking at the rear of the pack with his tail down while the mating pair led the way. This suggested a change in his status, yet he was still the alpha wolf both in the group of four in which he was later found and in the pack of eight that reformed in early March.

From the above accounts, it is obvious that alpha wolves usually interfere with attempts at courtship among subordinate wolves, although I did not record a case in which they were actually able to prevent mating among subordinates. Such behavior on the part of the alpha wolves may, however, discourage pair-bond formation or initial sexual interest among subordinates. In the wild, of course, a subordinate pair could leave a pack and breed with no disturbance, but in such a case reintegration into the pack might be difficult.

Our understanding of the effect of the dominance hierarchy on the formation of breeding pairs within a pack is still inadequate. In several packs, both captive and wild, the alpha male did not father the pups or was relatively inactive sexually (Murie 1944; Rabb et al. 1967; Haber 1968). In the Brookfield pack, a male reduced his participation in courtship activities after assuming the alpha position. However, alpha wolves in both the East Pack and the West Pack on Isle Royale have exhibited the most courtship behavior. Individual personalities and attributes, and filial or allegiance bonds among wolves, can greatly alter relationships within a pack (Rabb et al. 1967) and ultimately will limit the degree to which we can generalize about mate preference and the restriction of breeding among subordinate wolves.

IMPLICATIONS OF SOCIALLY CONTROLLED MATING

Behavioral limitations on mating, including mate preferences, may hold the productivity of wolves considerably below the theoretical maximum, and often only one litter of pups is born, even in large packs. The food-gathering abilities of the adults in the pack then contribute to the growth of a relatively small number of pups, enhancing their chances of survival.

Since packs are basically family groups, there is obviously a high potential for inbreeding in stable packs. Woolpy (1968) studied the genetic implications of social organization in wolves. He contended that the deleterious effects commonly attributed to inbreeding probably are of little significance when genes are naturally "preselected" for combinations of adaptive value, as they are in wolves. As a demonstration of this principle he cited a study by Scott and Fuller (1965), who inbred beagles and basenjis with no deleterious effects after preselecting them for fertility, behavior, and body conformation.
We have already seen that the parents of wolf pups in the wild are likely to be dominant wolves, already preselected for traits of leadership and physical attributes (Fox and Andrews 1972). Of course, natural selection will rapidly eliminate inferior pups born in the wild.

Woolpy (1968) further concluded that the organization of wolf populations into discrete packs, or subpopulations, was of considerable evolutionary significance. According to his hypothesis, over a period of several years of strong leadership in which most pups are born to a single pair, the expression of available genotypes (gene combinations) within a pack will be greatly reduced. Simultaneously, because of inbreeding, viable recessive gene combinations will appear more frequently. In the long run, this could result in greater variability between wolf packs. Thus, several genetic "lines" of wolves are maintained, with genetic variability partitioned "to give maximum exposure (to recessive gene combinations) at all times and to allow them to compete with each other and thus . . . provide the potential to move the population to new adaptive phenotypes" (Woolpy 1968:32). In a sense, due to wolves' social organization, evolution of the species would be accelerated, resulting in rapid adaptation to different environments. Such adaptability is evidenced by the original widespread distribution of wolves in North America and the description of 23 original subspecies on this continent (Goldman 1944). A major challenge facing us today is to preserve a sufficiently large number of natural ecosystems to allow wolves and other species to achieve their own evolutionary potential.

In spite of extensive field studies of wolves in various parts of North America, the precise nature of spatial relationships between adjacent packs was unknown until recently. It was unclear whether wolf packs occupied exclusive, nonoverlapping territories or whether neighboring packs utilized common hunting grounds, simply avoiding each other through direct and indirect communication. Recent studies of radio-marked wolves in many packs in Minnesota helped crystallize a concept of "land tenure" among wolf packs (Van Ballenberghe 1972; Mech 1972, 1973). Individuals within a pack utilize a common territory or "defended area" (Noble 1939), and the home range of individual wolves, defined as the area in which they travel during normal activities (Burt 1943), coincides with the territory of the pack to which they belong.

The Minnesota studies revealed that packs usually occupy exclusive, nonoverlapping territories, with territory size and wolf density probably related to food supply. In northern Minnesota wolves began to travel outside their former territory in response to a shortage of their principal prey, white-tailed deer, indicating that territories may be enlarged in response to a decreased food supply (Mech, in press). Along the Minnesota shoreline of Lake Superior, where deer densities are very high, wolf densities reached one wolf per 14 km2, with pack territories among the smallest reported for wolves (Van Ballenberghe 1972). Five resident packs totaling 40 wolves occupied an area the size of Isle Royale. The inherent flexibility of territory size, demonstrated by these Minnesota studies, allows for great adaptability to local conditions. We would expect that a pack's territory would be no larger than necessary to obtain sufficient prey.
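The Minnesota density figures above are mutually consistent, as the minimal sketch below shows; the island area of roughly 544 km2 is an assumption supplied here for illustration, not a figure given in the text.

```python
# Density check for the Lake Superior shoreline packs described above:
# 5 packs, 40 wolves, in "an area the size of Isle Royale." The island
# area of about 544 km2 is an assumption; the text does not give it.
ISLE_ROYALE_KM2 = 544
wolves = 40

print(f"one wolf per {ISLE_ROYALE_KM2 / wolves:.0f} km2")  # ~14 km2 per wolf
```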
A shrinkage of territory in response to an expanded food base allows for the establishment of additional packs, as on Isle Royale. Significantly, during the winter prior to the appearance of the East Pack on Isle Royale, the West Pack utilized only half of the island. Simultaneously, the amount of food available to Isle Royale wolves was higher than at any other time during this study (Table 10).

In 1972-74 Isle Royale had two primary packs, each occupying about half of the island. Additional duos and trios usually occupied areas along the boundary between the two large territories. Loners followed the large packs and scavenged their kills, or existed independently. As pack sizes increased from 1972 to 1974, the amount of spatial overlap between the two packs increased; there was a simultaneous decline in the calculated amount of prey available to wolves in both packs. Pack territories probably are no larger than the minimum size necessary to provide sufficient food and are sensitive to changes in the density of vulnerable prey.

This flexibility in territory size is advantageous to wolves in maximizing their hunting efficiency. Restriction of pack activity to a certain area ensures an intimate knowledge of that area (prey locations and the easiest travel routes) and prevents wasteful overlap in the hunting efforts of two packs. A mechanism that spaces packs in relation to prey may be of greatest importance during the pup-rearing season, when pack activity is centered around the relatively immobile pups. Litters distributed so that there is plenty of food in the surrounding area would provide for rapid growth of pups.

The advantages accruing from a system of exclusive territories should apply equally well to other species of group-hunting carnivores. Indeed, the spotted hyena (Crocuta crocuta) exhibits a similar pattern of social organization and spacing, at least at high population densities (Kruuk 1972). At lower predator densities the "need" for exclusive territories would be lessened; Kruuk (1972) found that in the Serengeti, where prey are highly mobile in response to environmental factors, the hyena population was relatively low and territories were not as clearly defined as in regions of higher hyena densities. Eaton (1974) suggested that naturally low densities of cheetahs might explain the lack of exclusive territories in this species.

Territory size of the East Pack increased from 1972 to 1974, while there was no consistent trend in the range of the West Pack (Fig. 58; Tables 12, 13). Simultaneous with the increase in the territory of the East Pack, there was a consistent decline in the availability of food per wolf in both packs.

TABLE 12. Numbers, movements, range, and prey availability for the East Pack, 1971-74.

TABLE 13. Numbers, movements, range, and prey availability for the West Pack, 1972-74.

The amount of overlap in pack territories increased from 1972 to 1974. In 1972, the first winter for the East Pack, approximately 9% of the island was not utilized by either main pack. The following year the two packs overlapped on 6% of the island during winter. In 1974 the amount of overlap increased to 16% of the island, primarily because of the movements of the East Pack into traditional West Pack territory. It seems reasonable that the increased overlap resulted from the growth of the East Pack, with a concurrent increase in its food requirements.
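Translated into areas, the overlap percentages above make the trend concrete; the sketch again assumes an island area of roughly 544 km2, a figure not given in the text.

```python
# Converting the overlap percentages above to approximate areas, again
# assuming an island area of about 544 km2 (an assumption, not a figure
# from the text).
ISLAND_KM2 = 544

print(f"1972: ~{0.09 * ISLAND_KM2:.0f} km2 used by neither main pack")
print(f"1973: ~{0.06 * ISLAND_KM2:.0f} km2 of territorial overlap")
print(f"1974: ~{0.16 * ISLAND_KM2:.0f} km2 of territorial overlap")
```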
EXPANSION OF EAST PACK TERRITORY

When the East Pack moved into what had been regarded as West Pack territory in 1974, its behavior and movements were of great interest (Fig. 59). The following account was edited from field notes. Several important aspects of this account should be emphasized: (1) although the East Pack traveled into "foreign" territory, it moved into an area that had not been used recently by the West Pack, and it turned back when it encountered significant West Pack activity; (2) the East Pack outnumbered any of the fragments of the West Pack with which it had direct or indirect contact, and the alpha pair and two other wolves of the West Pack avoided areas crossed by the East Pack, even areas within their own territory; and (3) as the East Pack ventured into areas probably unfamiliar to it, there was an unusual amount of scent-marking by both dominant and subordinate members of the pack.

MAINTENANCE OF TERRITORY

Wolves must rely heavily upon indirect means of communication to delineate territorial boundaries. Potentially, any behavior that advertises the presence of a pack and causes another to avoid intrusion is of territorial significance. Scent-marking, howling, direct aggression, and avoidance may all serve to maintain territory.

Scent-marking. In addition to other functions discussed later, scent-marking serves to delineate territorial boundaries. The above account of the movement of the East Pack into West Pack territory is the only reported observation of a pack encountering another pack's scent-marks. One additional instance of avoidance behavior along a territorial boundary was deduced from tracks in February 1973 (Fig. 62). Peters and Mech (1975) detailed four cases, determined from tracks, in which a pack responded to foreign scent-marks by avoidance. In one instance a pack chasing a wounded deer ceased pursuit at its territorial boundary. Generally, the outermost kilometer of a pack's territory was scent-marked profusely compared to the center. They concluded that scent-marking was an extremely important part of territorial behavior that contributed to efficient spacing among wolves.

Howling. Joslin (1967) suggested that howling may be of territorial significance, since it is an effective form of long-distance communication and may convey enough information to permit identification of howling wolves. Since the interface between packs on Isle Royale is so short and territories are long and narrow, howling may be a less effective method of communication between packs than it would be elsewhere.

Responses of wolves to human imitations of wolf howls are variable. In one case in June 1973, Sheldon L. Smith (pers. comm.) was sitting on a ridge less than a mile from where human imitations of wolf howls were being broadcast. Although he did not hear the human howls, he heard at the same time seven or eight brief howls from several wolves that were passing nearby through thick vegetation. The wolves seemed to be responding to the human howls but were traveling in the opposite direction. In another case, in September 1972, we elicited a howling response from several wolves in East Pack territory. We approached the group and howled again, and soon a single wolf approached us. Its body and tail markings suggested that it was the alpha male of the East Pack. When the wolf saw us it turned and ran back to the rest of the group, and the pack disappeared.
Only the one wolf, quite possibly the dominant male, left the others to confront what he might have thought were foreign wolves. Joslin (1967) was often approached by one or more wolves when he howled within 200 yards of homesites. He interpreted this as active resistance toward intruders.

Direct aggression and avoidance. When adjacent packs make visual contact with each other, such as across the ice of a large lake, they must either confront or avoid each other. The response of any two packs probably would depend on their previous history of association and perhaps their numerical strength. The East Pack killed a strange wolf even though it was in unfamiliar surroundings. The outcome of such a confrontation might well have been different if the East Pack had met the entire West Pack instead of a fragment of not more than three wolves. Lack of numerical strength may have been the reason for the previously discussed avoidance of the East Pack by four West Pack wolves in Siskiwit Bay, even though the West Pack animals were in their own territory.

SMALL PACKS AND "LONERS"

In all 3 years that both East and West packs have existed, an additional small group of wolves has been present (Fig. 63). In 1972, a pack of two or three was seen in the Malone Bay area, ranging over to Houghton Point and possibly as far as Chippewa Harbor. In the next two winters, two wolves (the "Todd duo") traveled the north shore in the vicinity of Todd Harbor and were also seen near Intermediate Lake and Lake Whittlesey. The smaller of the pair was noticeably reddish on its lower flanks and belly. Their friendly greetings suggested a male and female pair.

It is significant that in the years when two large packs "divided" the island approximately in half, the small pack usually inhabited an area either between the two packs or overlapped by each. This supports the hypothesis of Mech and Frenzel (1971:33), who believed that wolves in Minnesota were organized into breeding packs occupying exclusive territories, with "loners and other nonbreeding population units" inhabiting nonexclusive areas among the pack territories. These small groups probably survive only by their ability to avoid the large packs.

The "loners" are more difficult to locate and observe than the larger groups. Their ecology and social status relative to other wolves in the population are little known. Jordan et al. (1967) described several stages of "dissociation" of single wolves from a pack. They believed that many loners were aged and socially subordinate wolves that were gradually excluded from the pack, although in individual cases it is usually not possible to determine sex, age, or previous social relationships. I have seen only one case in which a single wolf following a pack might have been a "dropout." After the West Pack declined from eight wolves to seven in mid-February 1972, a wolf was seen following it, often hesitating and apparently trying to remain hidden from the view of the pack. The pack had left the carcass of a moose near Windigo, traveled a short distance, and then lain down on the ice north of Beaver Island. The single wolf walked up a 50-m rise on the north side of Beaver Island, then sat down at the edge of a cliff overlooking the pack and watched them intently, hidden from view by trees.
The different reactions of a pair of wolves to single wolves were recorded in 1974 (field notes).

Communication Among Wolves

The highly social nature of wolves and the flexibility of their group structure and hunting habits probably account for the diversity in forms of vocal communication found in this species. Howling, the most widely known and most distinctive wolf vocalization, is of obvious significance in long-range communication. Individual wolves have distinctly different howls and seem to be capable of distinguishing differences in howls, so there is a high potential for exchange of information via howling (Theberge and Falls 1967). Other widely recognized sounds that are not often heard in the wild include the whimper, growl, and bark (Mech 1970).

Howling. Howls can be heard for several miles under certain conditions, and Joslin (1967) reported that howling could advertise the presence of wolves over a 130-km2 area. In addition to its possible territorial significance, howling helps to assemble individuals in a pack after they have been separated. On Isle Royale in 1973, howling also was of obvious importance in coordinating moves of a large pack between summer homesites.

Spontaneous howling of East Pack wolves was heard 62 times during approximately 383 hours spent near their rendezvous sites (homesites) in 1973. Most of the howling was heard at night (Fig. 64), when more adults were hunting and spatially separated. Such howling may help wolves coordinate hunting efforts. Pups and adults at or near a homesite often howled in response to howls of distant adults. Almost half (45%) of the howls heard near East Pack homesites included adults that howled some distance away. Increased howling at dawn and dusk may be associated with departures and arrivals of adults at the rendezvous areas. Carbyn (1974a) recorded dawn and dusk peaks in howling and general activity at wolf rendezvous sites in Jasper National Park in Alberta. Murie (1944) described how adults assembled at the den before departure for their nightly hunt. Howling at this time accompanied generally friendly behavior, with much greeting among the adults. Group howling and greeting ceremonies often occurred together among members of the captive pack at Brookfield Zoo (Fig. 65). Group howling also is common among coyotes and jackals (Canis aureus) (Kleiman 1967). A group howl was observed at a summer rendezvous of the East Pack in July 1973 (paraphrased from field notes).

Although wolves are capable of fine auditory discrimination, they may howl in response to sounds which, to human ears, are quite distinct from actual wolf howls. At Brookfield Zoo, howling often occurs in response to sirens. Human "howling" is often an adequate substitute for prerecorded wolf howls when attempting to stimulate howling among wolves. The call of the common loon (Gavia immer) closely resembles a wolf howl; twice in 1973, Isle Royale wolves at summer homesites began to howl immediately after hearing loons. Once, the pups were clearly the first to respond. On two occasions I heard loons calling shortly after wolves began to howl.

Other vocalizations. Only limited information was gathered on Isle Royale on other forms of vocal communication, mainly because they are inaudible at long distances. An adult whimpered when it arrived at a summer rendezvous after the rest of the pack had left. Whimpering, interspersed with occasional high-pitched yipping, was frequently heard from pups as they mobbed adults arriving at rendezvous sites.
Joslin (1966) believed that whimpering was a friendly greeting, sometimes conveying a submissive attitude. Whimpering was often part of low-intensity friendly greetings at Brookfield Zoo, especially between pairs during the mating season. Barking was heard only during group howls at rendezvous areas. Much of the pup vocalization during group howls consisted of high-pitched "yips," and adult barking sometimes accompanied these pup vocalizations, especially near the end of a howl, much as Joslin (1966) described. He considered barking to be either of a threatening or alarm nature. The "alarm bark" was short and often seemed to cut off a howling session. Joslin occasionally elicited a threatening bark by howling in close proximity to wolves at a rendezvous; in such cases, the barking was more continuous and interspersed with growling.

Humans, with a poor sense of smell, are ill-equipped to appreciate the importance of olfactory communication. Scent-marking helps maintain territories, contributes to pair-bond formation, provides information on social and sexual status and individual identity, and helps orient wolves in their environment (cf. Peters and Mech 1975). In canids, elimination (urination, defecation) and the rubbing of certain body areas may have scent-marking significance (Kleiman 1966). Scent-marking differs from simple elimination by its directional and repetitive nature; that is, the same object may be scent-marked repeatedly. Kleiman also suggested that this form of scent-marking developed from autonomic responses to strange or frightening situations. Initially, scent-marking could have reassured an animal entering a strange environment and may have since acquired additional signal value in territoriality and courtship (Fig. 66).

Wolves have at least two specialized scent glands (Mech 1970; Fox 1971). The anal gland is located on each side of the anal sphincter; presumably scent deposition takes place with each passage of feces. A tail (precaudal) gland of unknown marking function occurs on the dorsal surface of the tail near the base, under a dark patch of hair (the "dorsal spot"). Urine is also of considerable scent-marking importance among wolves. In a field study based on tracking wolves in snow, Peters and Mech (1975) distinguished four types of scent-marks: (1) raised-leg urination (RLU); (2) squat urination (SQU); (3) defecation (scats); and (4) scratching. They found that the RLU was the most frequent and significant type of scent-mark.

Scent-marking by Isle Royale wolves was observed only in winter, usually from the air. Scent-marking was often difficult to distinguish from normal elimination, which seemed to be most common when packs were resting near kills or just beginning to travel. In these cases I ignored defecation and urination unless clearly directed at an object.

Frequency of scent-marking. Obvious differences in the frequency of scent-marking occurred among Isle Royale wolves (Table 14). In all cases the packs were traveling. When the East Pack first entered "foreign" territory, we observed 10 scent-marks in a half-hour of observation, compared to 2 in an equivalent length of time as the same pack reentered its own territory several days later.

TABLE 14. Scent-marking frequency in traveling wolves.

The highest level of scent-marking occurred when three wolves (the McGinty duo + 1) left a kill in full view of four wolves of the West Pack (including the alpha pair), who had bedded down 1 km away after following the tracks of the three for many miles.
The West Pack wolves were not watching the trio, one of which glanced twice in the direction of the sleeping wolves as we circled. This wolf made six of the seven scent-marks observed. In this case the frequent urinations might have resulted from autonomic responses to fear or apprehension and might not have been actual scent-marking.

Peters and Mech (1975) found that when packs traveled within a kilometer of the edge of their territories, the frequency of RLUs was twice as high as when packs traveled in the center of their territories. Thus, an accumulation of marks characterized territorial boundaries. The strongest stimulus to scent-mark was the mark of a neighboring pack; in one case, when a pack discovered fresh tracks of a neighboring pack on its territorial boundary, these researchers found 30 RLUs, 10 scratches, 2 SQUs, and 1 scat. During normal travel in winter, wolf packs left a sign every 240 m on the average, including an RLU every 450 m. At their normal travel rate of about 8 km/hr (Mech 1966), that implies an olfactory mark about every 2 minutes, with an RLU about every 3 minutes. Isle Royale wolves demonstrated comparable marking frequencies (Table 14).
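The time intervals just quoted follow directly from the Minnesota spacing and travel-rate figures; the minimal sketch below reproduces the arithmetic (all figures from the text; the text's "2 minutes" and "3 minutes" are rounded).

```python
# Converting the Minnesota scent-mark spacing into time intervals at the
# normal winter travel rate of about 8 km/hr.
speed_m_per_min = 8000 / 60     # 8 km/hr in meters per minute (~133 m/min)

mark_spacing_m = 240            # any sign, on average
rlu_spacing_m = 450             # raised-leg urinations only

print(f"a mark about every {mark_spacing_m / speed_m_per_min:.1f} min")  # ~1.8
print(f"an RLU about every {rlu_spacing_m / speed_m_per_min:.1f} min")   # ~3.4
```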
Indicator of sexual and social status. The alpha male and female in all packs observed from 1972 through 1974 accounted for most of the recorded instances of scent-marking (27 of 39 cases). In all cases there was active courtship between the alpha wolves, and scent-marking clearly played a role in these activities at times. On four occasions the alpha female was seen urine-marking an object, and the alpha male, usually right behind her, sniffed the location and then urinated on it. Twice the alpha female marked a scent post of the alpha male. In these cases, scent-marking should be considered part of the mechanism of pair-bond formation, as Schenkel (1947) suggested. Twice an alpha male mounted the alpha female immediately after inspecting her fresh urine-mark. The frequency of genital inspection of females during the mating season indicates the importance of olfactory cues in sexual behavior. Experimental work with domestic dogs reviewed by Johnson (1973) documented that urine from estrous females was more attractive to males than that of anestrous females and stimulated mounting among males. Peters and Mech (1975) found that the frequency of RLUs and SQUs increased before and during the wolf breeding season. During the breeding season they often found an SQU and an RLU together in the snow, indicating a female-male combination as described above. We might surmise that scent-marking in the wild would be an important means of establishing initial contact between potential mates in widely dispersed populations. Peters and Mech (1975) pointed out that a lone wolf would be able to determine where potential mates lived and whether they were already paired off, since mating pairs often mark the same points. Also, territorial boundaries were marked with such clarity and frequency that a newly formed pair could easily tell whether they were in an occupied territory, along a territorial boundary, or in unoccupied space.

Limited data suggest that scent-marking is related to social status. Alpha wolves often marked when exhibiting no sexual behavior, and in some cases subordinate wolves did not mark when they might have been expected to do so. For example, in 1972 the alpha female in the East Pack squatted and urinated on a ridge of ice as the pack was traveling. The spot was inspected by a subordinate wolf, who continued on its way without marking. The alpha male, however, lifted his leg and urinated on the spot after sniffing it. In 1973, four subordinate wolves of the West Pack followed the alpha pair into the woods next to shore. The alpha male led the way, marking a tree on the shoreline. The alpha female followed suit, but the other four wolves sniffed the scent post and left without adding their own scent. Mech and Frenzel (1971) recorded an instance in which a wolf believed to be the alpha male was more active than the others when the pack encountered scent posts. Peters and Mech (1975) reported that in two captive packs only high-ranking wolves raised their legs when urine-marking; twice when they tracked wild pups, they found several SQUs but no RLUs. While males tend to lift their legs more often than females when scent-marking (Kleiman 1966), several alpha females did so on Isle Royale. I did not observe subordinate wolves lift a hind leg while scent-marking, at least while alpha wolves were present.

Orientation and information exchange. Scent-marking in canids may also serve for orientation and information exchange. Humans naturally regard visual signals as the most important means of orientation, but for wolves a keen sense of smell would be more valuable. Scent-marks are frequent along pack travel routes and are especially prominent around kills or other centers of activity. About half of the scent-marks recorded by Peters and Mech (1975) were at trail junctions. Peters (1973) found that the fatty acid content of anal gland secretions differed between males and females and that all individuals differed slightly from one another. Peters and Mech (1975) showed that wolves tend to re-mark fresh scent-marks more often than old marks and inferred that wolves also could discriminate between marks of different ages. Thus, by simply sniffing a scent-mark, a wolf probably can tell whether the marking wolf was a stranger, whether it was a male or female, its reproductive status, and how long ago the mark was made.

Territorial marking. Young (1944) reported that wolves became greatly excited and scent-marked frequently when encountering the introduced scent of a strange wolf. Schenkel (1947) believed that scent-marking was of territorial significance; avoidance behavior in response to another pack's scent posts along a territorial boundary occasionally has been documented (see Maintenance of Territory).

Rolling and scratching. Kleiman (1966) believed that rolling has some scent-marking function (perhaps self-marking); Fox (1971) suggested that it may promote interaction with other pack members by encouraging social investigation of wolves carrying interesting odors. Wolves that are separated from the pack could transport odors on their fur and perhaps transmit information to other pack members. I have seen Isle Royale wolves roll at various places: a moose bed in the snow, the kill site of a fox, the dug-out remains of an adult moose killed 7 months earlier, and the snow next to a fresh moose kill. Also, when the East Pack reached Little Siskiwit Island on its trek into new territory, two wolves rolled on the ice after sniffing it; perhaps this was an old scent post.

Wolves often scratch with their feet after urinating and defecating (Fig. 67). Usually, only high-ranking wolves exhibit such marking (Peters and Mech 1975). The function of this behavior is not clear, but Mech (1970) noted that it would increase the visual signal value of a scent-mark.
Schenkel (1947) thought that it might be a behavioral rudiment that perhaps has lost its original function. This is reasonable in light of comparative work with other carnivores. Kruuk (1972) found that male hyenas scrape only as a sexual display in courtship, probably to distribute the scent of their interdigital glands. While interdigital glands have not been described in wolves and coyotes, Fox (1971) stated that such glands are found in red foxes and perhaps existed in primitive canids.

The shy and elusive nature of wild wolves makes summer ecological studies difficult. Significant observations in the wild are possible at dens or other centers of activity on the tundra of Alaska and Canada, but in forested areas rarely more than a fleeting glimpse of wolves is possible from the ground (Fig. 68). Radio-tracking has provided detailed knowledge of wolf movements in summer in Ontario and Minnesota (Kolenosky and Johnston 1967; Van Ballenberghe 1972; Mech and Frenzel 1971).

During this study, a den used by the East Pack was found in early July 1973. The wolves had abandoned it but were located at a rendezvous area 1 km away. Subsequent movements of the pack were followed to three additional rendezvous sites until pack movements became extensive in late September. One rendezvous site of the West Pack was found in September, about a month after it had been vacated. Direct observations of wolves were possible only at one rendezvous of the East Pack in 1973.

Wolves occasionally dig out dens weeks in advance of the birth of pups, which probably takes place in late April on Isle Royale. While wolves usually dig underground dens in sandy soil, they have also used hollow logs, rock cavities, old fox dens, and beaver lodges (Mech 1970). Dens commonly are close to water, perhaps because nursing females have a high water requirement.

The whelping den used by the East Pack in 1973 was an abandoned beaver lodge whose entrance had been exposed when a dam broke. Another abandoned lodge 10 m away and a nearby hole in a sandy bank also appeared to have been used (Fig. 69). All holes were within 20 m of water. Many old scats under leaves and other debris indicated that this den had been used before, possibly during the first 3 years of the East Pack's existence. Tracks of a wolf were followed to this den in March 1974, and some scratching was found, but the site was not used in 1974. Both lodges had a central chamber, barely large enough for an adult wolf, and many interconnecting tunnels that only pups could use. The only obvious alteration by wolves was the enlargement of at least one entrance and tunnel to the central chamber. Scattered around the den area were bones from at least six beaver, one muskrat, and one adult and one calf moose. In 1975, the East Pack again denned in a beaver lodge.

The West Pack denned in a hollow white pine (Pinus strobus) trunk that had fallen to the ground (Fig. 70). The log was 9 m long, with the main opening 45 cm high and 55 cm wide. Pups frequently had used smaller openings created by decay of the wood. Three other possible whelping dens also were found: two abandoned beaver lodges and a hollow log. Wolves visited all six dens during both summer and winter. Because there are very few opportunities for wolves to dig dens on Isle Royale, they take full advantage of existing structures.

In temperate regions pups usually are moved from the den site in late June or early July, after they have been weaned (Mech 1970).
Thereafter, the activities of the pack center around "rendezvous sites" (Murie 1944:40) or "loafing areas" (Young 1944:103), where the pups remain while the adults make hunting forays. A succession of rendezvous sites is used by a pack until the pups are able to accompany the adults on all their travels. Rendezvous sites, like whelping dens, usually are near water and often are adjacent to bogs (Joslin 1967). In 1973, five rendezvous sites were found on Isle Royale, four of the East Pack and one of the West Pack (Fig. 71). All five were located by abandoned beaver ponds, with water still available nearby. Size varied from 0.4 ha to a drainage 1 km long. Most had a prominent open area where the vegetation had been matted, and holes often had been dug in nearby banks. A small den was found beneath the roots of a cedar tree at one area, and a beaver lodge had been excavated and used at another area. Both dens and rendezvous sites are frequently reused, with former rendezvous sites possibly serving as den sites at a later date, and vice versa. Over a 3-year period, a pack studied by Carbyn (1974a) used the same den and the same first rendezvous site each year. Rendezvous sites generally are used for shorter periods than den sites. Joslin (1967) found that packs moved on average every 17 days, possibly influenced by frequent human howling nearby. Baffin Island packs moved to different summer dens (analogous to rendezvous sites) about every 30 days (Clark 1971; Van Ballenberghe 1972). On Isle Royale, rendezvous areas were occupied from 11 to at least 48 days (Table 15). Wolves have been seen at rendezvous sites as late as October on Isle Royale and in northeastern Minnesota.

TABLE 15. Successive rendezvous areas of the East Pack, 1973.

OBSERVATIONS AT A MIDSUMMER RENDEZVOUS

In July 1973, we observed the East Pack at its second rendezvous site from about 200 m. The pups usually were the only wolves in sight. Adults spent much of the day in the cooler forest surrounding a central open area. Most activity was observed before 11:00 a.m. or after 5:00 p.m. Seven pups, probably the total in the pack, were seen at this rendezvous, along with at least seven different adults. Most of the adults in the pack probably visited the rendezvous periodically. Adults were observed arriving at the rendezvous nine times, always before 10:00 a.m. or after 5:00 p.m. Only once did two wolves enter together, indicating that most of the hunting effort by adults in summer is done by individuals or small groups. Pups often sensed the imminent arrival of adults and ran out as a group to meet them. Such an arrival was an occasion of great excitement for the pups, and they greeted the adults by yipping and jumping at their heads. Excited licking of the mouth acts as a stimulus causing regurgitation of food for the pups. On three occasions the arriving adults regurgitated food for the pups immediately, while being greeted. The pups ate such regurgitated food within a minute. Adults rarely remained in open areas for any length of time. About 11 kg of food per day would be required to feed seven pups (based on Kuyt 1972); providing this amount is undoubtedly a demanding task. Pup activity alternated with long periods of sleep, but even then pups frequently looked up or stood and readjusted their position. Their ears were in constant motion because of insects, mostly mosquitoes. When resting, pups often sought each other's company, even flopping down directly on another sleeping pup.
Rest was interrupted by jaw-wrestling, scruff-holding, and occasional nibbling of the legs and tails of nearby pups. Many of the pup activities were group-oriented, such as play-fighting and competition for bones or sticks, appropriately termed "trophies" by Crisler (1958). Pups probably did most of the digging found at rendezvous sites. Many items were chewed extensively: moose bones, antlers (especially those in velvet), sticks, and at one site, an aluminum canteen. One evening, five pups gathered around a rotten birch log. They attacked the log in much the same way that they would later treat a moose carcass; each pup lay on its belly, chewing on its portion of the log and snapping at any encroaching sibling. They ripped enthusiastically at loose pieces of rotten wood and occasionally wandered off with a chunk for more peaceful chewing. Six of the pups were of uniform appearance and impossible to tell apart. The seventh, called "7-up," was much lighter in color than the others. Its activities often set it apart. When first distinguished, this pup was the scapegoat during vigorous play-fighting of four pups. With tail firmly planted between its legs, "7-up" continually was the object of chases and alternately was submissive and defensive. Another time, this pup was chewing on a calf-leg bone when two others walked up and stood over "7-up" with a dominant attitude; one finally grabbed the bone. After a spirited defense, "7-up" ended up on its back, entirely submissive. On at least two occasions, "7-up" was the only pup at the rendezvous; once it appeared that no other wolves were present. During this time a cow moose walked slowly into the open area while the pup was out of sight. She stopped and sniffed the ground thoroughly. Undoubtedly, the scent of wolf pervaded the area, and she seemed hesitant, her movements very slow. Every few steps she stopped and looked about, frequently sniffing the ground in matted places. Finally she walked down the drainage and disappeared. Almost immediately, "7-up," with nose to the ground, scampered into the opening and followed the moose briefly. It is quite possible that the pup had had the moose under surveillance but was reluctant to show itself while the moose was nearby. Crisler (1958) and Fentress (1967) reported that their captive wolves were initially afraid of large animals, even traditional prey. Considerable experience is probably necessary before pups become effective predators of ungulates as large as moose. Moose commonly exhibit no fear of wolves. They were seen several times browsing on the edge of a rendezvous. Once I watched a bull, apparently unconcerned, browsing within 100 m of some pups and adults that were howling just out of sight.

MOVEMENTS BETWEEN RENDEZVOUS SITES

Wolves move to different rendezvous sites for reasons that are seldom known. The accumulation of feces and debris eventually may render dens less desirable (Young 1944; Rutter and Pimlott 1968); perhaps the same applies to rendezvous sites. In some cases wolves might move the pups to a fresh kill. At two of the five rendezvous sites examined in 1973, a moose kill was found in the center of the activity area (Fig. 72). At the rendezvous that the pack reused in 1974, a fresh kill was found in almost the same location as a kill that had been made the previous year. We watched the East Pack abandon its second rendezvous of 1973.
Howling helped to coordinate the move to a new site, as shown by the following field notes: Earlier in the summer, five pups were observed en route from the first to the second rendezvous. In this case adults were howling periodically at both locations, and the five pups went to the next site by themselves. The following day, a sixth pup was still present at the first site. Two nights after initial occupancy of the second rendezvous, adults present at the first site were heard howling in response to pups and adults at the new site, indicating that several days may be necessary for complete relocation.

Although pup mortality is widely regarded as an important factor controlling wolf populations, information on pup production and survival on Isle Royale is very limited (Table 16). Sometimes the minimum number of pups in packs can be estimated during winter, but this is not a valid year-to-year index.

TABLE 16. Pup production on Isle Royale, 1970-73.

Pup condition may provide some indication of the extent of mortality. A dead, emaciated pup was found in 1964 (Jordan et al. 1967), suggesting that inadequate food supply early in life might be a critical factor on Isle Royale. A decrease in food supply seems to be an important reason for poor pup condition and low survival in Minnesota (Mech 1973; Van Ballenberghe and Mech 1975; Seal et al. 1975). Kuyt's (1972) data suggested lower pup survival in areas where tundra wolves relied heavily on small mammals when caribou were absent. A visual comparison of pups on Isle Royale and in Minnesota suggests that the pups on the island were faring well. I first saw the East Pack pups in late July. Subsequently I saw four pups, weighing between 8 and 13 kg, live-trapped in northern Minnesota in late September. By comparison, the Isle Royale pups seen 2 months earlier weighed about 9-12 kg. This is within the range of weights of captive pups of the same age (Kuyt 1972), and is higher than the weights of pups caught in Minnesota, where there was a food shortage (Van Ballenberghe and Mech 1975). Two pups of the East Pack were seen about a month later. Growth in the intervening period was obvious; their weight was estimated at 16 kg. They appeared full-bodied, with well-developed coats and guard hairs. These two pups were larger and appeared heavier than the four pups caught in Minnesota a full month later. These observations of Isle Royale pups suggest that the midsummer food supply, at least in 1973, was sufficient for normal growth and development. However, there can be great differences in pup weights even within a single litter (Van Ballenberghe and Mech 1975). There was some evidence of retarded winter pelage development among some pups in the East Pack in February 1974 (Fig. 73). Nonetheless, winter observations of this pack since 1972 indicate rapid numerical growth, suggesting high pup survival from 1971 through 1973 (three successive litters).

In winter, wolves encounter scavengers for which moose carcasses are a principal source of food. Besides the red fox, many birds also utilize wolf-killed moose, primarily the raven, gray jay, black-capped chickadee, and an occasional eagle. Only the fox and raven will be considered here. While wolves were seen chasing foxes six times in winter 1972-74, none was caught. Foxes can often run on light snow crusts where wolves break through, and they invariably outrun wolves when chased overland in snow. In the only chase seen on ice, the fox had such a long head start that it reached the shore with no trouble.
In 1972, the East Pack was observed just leaving a fox it had killed on the open ice of Malone Bay. The area was matted with wolf tracks, and much hair had been pulled from the fox, though it was not eaten. The fox's ability to outrun wolves in most snow conditions may be an important reason for its continued coexistence with wolves on Isle Royale. Coyotes, however, disappeared from the island around 1957, less than a decade after the arrival of the wolf. Foxes have thrived recently on Isle Royale, and perhaps even increased after the disappearance of coyotes. While foxes have been observed on Isle Royale since the mid-1920s, long-time island residents report that foxes were uncommon, at least relative to coyotes, before wolves became established. Moreover, less competition for food resources exists between wolves and foxes than between wolves and coyotes. Johnson (1969) reported that snowshoe hares were the most important year-round food for Isle Royale foxes, and that at certain seasons they made extensive use of insects and fruit. Coyotes relied heavily on moose carcasses. Wolves apparently eliminated coyotes on Isle Royale (Mech 1966; Krefting 1969; Wolfe and Allen 1973), probably through direct killing and competition for food. Wolves occasionally were indifferent to the presence of foxes. In 1973, the West Pack bedded down on the ice after feeding on a moose carcass. Soon a fox approached, cautiously staying out of sight of the wolves when possible. At the carcass, the fox chased away several ravens and woke the wolves in the process, but they merely raised their heads for a brief look. During winter periods when foxes were unable to catch snowshoe hares because of deep snow, they relied heavily on carcasses of wolf-killed moose (Fig. 74). Foxes have difficulty penetrating the thick hide of a moose; they depend on wolves not only to kill the moose but also to open it up. In winters when utilization of kills by wolves is less than usual, moose carcasses may attract a large number of foxes, as many as 10 at one time in 1972 (Appendix G).

Ravens on Isle Royale in winter are almost entirely dependent on food indirectly provided by wolves (Fig. 75). Ravens often accompany the large packs in their travels, sitting in trees when the wolves stop to rest. Fresh kills draw ravens from miles away; 28 ravens were once seen on a moose carcass. Ravens also eat wolf scats, especially fresh ones with much incompletely digested meat. Similarly, they feed not only on fresh mountain ash fruit but also on fox scats that are loaded with fruit remains. Since ravens and wolves often feed on the same carcasses, there is much interaction. Ravens seem to tease resting wolves, swooping low over their heads, landing nearby and hopping close, further arousing the wolves (Murie 1944; Crisler 1958; Mech 1966). Wolves, in turn, leap at ravens in the air, stalk them on the ground, and scatter them from kills. In February 1974, Don Murray and I were circling a kill of the West Pack, with four wolves resting nearby. Suddenly a wolf made a couple of quick bounds: it had caught a raven, something Murray had not seen in 16 winters of flying on Isle Royale (Fig. 76). The wolf shook the raven vigorously in its mouth, then trotted by two other wolves, lay down on its belly and shook it again. Another wolf followed with great interest but was repulsed by a snap from the prize-holder. Finally, the wolf with the raven buried it in snow among some alders and trotted out to greet the other wolves.
Next, it dug out the raven and paraded around with it in its mouth, always refusing to let the other wolves inspect it closely. After 15 minutes of this activity we left, but returned an hour later to find the wolves still playing with the raven's carcass. One wolf buried it below a shelf of ice next to shore, then stood above it while another wolf closed in on the buried trophy. When the wolf below came within 2 m, the one above leaped off the ledge and rolled the other over. A brief chase ensued, and then the whole pattern was repeated. The following day the wolves were gone, leaving the raven carcass in the snow.
http://www.nps.gov/history/history/online_books/science/11/chap2.htm
Required math: calculus
Required physics: time-independent Schrödinger equation
Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 2.2.

To get the feel of how to solve the time-independent Schrödinger equation in one dimension, the most commonly used example is that of the infinite square well, sometimes known as the ‘particle in a box’ problem. First, recall the Schrödinger equation itself:

$$i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V(x)\Psi$$

Remember that the ‘time-independent’ bit refers to the potential function, which is taken to be a function of position only; the wave function itself, which is the solution of the equation, will in general be time-dependent. The infinite square well is defined by a potential function as follows:

$$V(x) = \begin{cases} 0 & 0 \le x \le a \\ \infty & \text{otherwise} \end{cases}$$

An area with an infinite potential means simply that the particle is not allowed to exist there. In classical physics, you can think of an infinite potential as an infinitely high wall which, no matter how much kinetic energy a particle has, it can never leap over. Although examples from classical physics frequently break down when applied to quantum mechanics, in this case the comparison is still valid: an infinitely high potential barrier is an absolute barrier to a particle in both cases. We saw in our study of the time-independent Schrödinger equation that separation of variables reduces the problem to solving the spatial part of the equation, which is

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\psi = E\psi$$

where the constant $E$ represents the possible energies that the system can have. It is important to note that both $\psi$ and $E$ are unknown before we solve the equation. In classical physics, we would be allowed to specify $E$, since it is just the kinetic energy that the particle has inside the well. Classically, $E$ can be any positive quantity, and the particle would just bounce around inside the well without ever changing its speed (assuming the walls were perfectly elastic and there was no friction). In quantum physics, as we will see, $E$ can have only certain discrete values, and these values arise in the course of solving the equation. In an infinite square well, the infinite value that the potential has outside the well means that there is zero chance that the particle can ever be found in that region. Since the probability density for finding the particle at a given location is $|\Psi|^2$, this condition can be represented in the mathematics by requiring $\psi(x) = 0$ if $x < 0$ or $x > a$. This condition is forced from the Schrödinger equation, since if $V = \infty$, any non-zero value for $\psi$ would result in an infinite term in the equation. However, it is certainly not rigorous mathematics, since multiplying infinity by zero can be done properly only by using a limiting procedure, which we haven’t done here. A proper treatment of the infinite square well is as a limiting case of the finite square well, where the potential can have a large but finite value outside the well. However, the mathematics for solving the finite square well is considerably more complicated and tends to obscure the physics. Readers who are worried, however, can be reassured that the energy levels in the finite square well do become those in the infinite square well when the proper limit is taken. Inside the well, $V(x) = 0$, so the Schrödinger equation becomes

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi$$

This differential equation has the general solution

$$\psi(x) = A\sin kx + B\cos kx, \qquad k \equiv \frac{\sqrt{2mE}}{\hbar}$$

for unspecified (yet) constants $A$ and $B$. If you don’t believe this, just substitute the solution back into the equation. How can we determine $A$ and $B$? To do this, we need to appeal to Born’s conditions on the wave function. Born’s first condition is clearly satisfied here: $\psi$ is single-valued.
The second condition of being square integrable we’ll leave for a minute. The third condition is that $\psi$ must be continuous. We have argued above that $\psi = 0$ outside the well, so in particular this means that at the boundaries $x = 0$ and $x = a$ we must have $\psi(0) = \psi(a) = 0$. If we impose that condition on our general solution above, we get:

$$\psi(0) = B = 0 \qquad \text{and} \qquad \psi(a) = A\sin ka = 0 \;\Rightarrow\; ka = n\pi$$

where $n$ is an integer. The useful values of $n$ are just the positive integers. To see this, note that if $n = 0$, then $\psi = 0$ everywhere, which besides being very boring is also no good as a probability density, since its square modulus cannot integrate to 1. Negative integers don’t really give new solutions, since $\sin(-x) = -\sin x$, so the negative sign can be absorbed into the (still undetermined) constant $A$. Also, the energies depend only on the square of $n$, so the sign of $n$ doesn’t matter physically. We can now return to the square-integrable condition and use it to determine $A$. Remember that the integral is over all space in which the particle can be found, so in this case we are interested in

$$\int_0^a |\psi(x)|^2\,dx = |A|^2\int_0^a \sin^2\frac{n\pi x}{a}\,dx = |A|^2\,\frac{a}{2} = 1$$

Since it is only the square modulus of the wave function that has physical significance, we can ignore the negative root and take the final form of the wave function as

$$\psi_n(x) = \sqrt{\frac{2}{a}}\,\sin\frac{n\pi x}{a}$$

Notice what has happened here. Applying the first boundary condition at $x = 0$ allowed us to eliminate $B$. But the other boundary condition at $x = a$ ended up giving us a condition on $k$ rather than on $A$. (Well, OK, we could have used the second boundary condition to set $A = 0$, but then we would have $\psi = 0$ everywhere again.) Not only that, but the energy levels are discrete:

$$E_n = \frac{\hbar^2 k_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2ma^2}$$

Thus the infinite square well is the first case in which the Schrödinger equation has actually predicted quantization in a system. So the boundary conditions on the differential equation have put restrictions on the allowable energies. The acceptable solutions for $\psi$ are determined by the condition $k = n\pi/a$, and so the various functions are just lobes of the sine function. The lowest energy, called the ground state, occurs when $n = 1$ and is half a sine wave, consisting of the bit between $x = 0$ and $x = a$. The next state at $n = 2$ corresponds to a single complete cycle of the sine wave; $n = 3$ contains 1.5 cycles, and so on. Eagle-eyed readers will have noticed that in all the excitement over discovering quantization, we have neglected to look at Born’s fourth condition: that of continuous first derivatives. Clearly this condition is violated, since the derivative of $\psi$ outside the well is 0 (since $\psi = 0$ outside the well), but inside, the derivative is

$$\frac{d\psi_n}{dx} = \sqrt{\frac{2}{a}}\,\frac{n\pi}{a}\cos\frac{n\pi x}{a}$$

which is $\sqrt{2/a}\,(n\pi/a)$ at $x = 0$ and $\pm\sqrt{2/a}\,(n\pi/a)$ at $x = a$ (the sign depends on whether $n$ is odd or even). Neither of these derivatives is zero. The reason is, of course, because of the infinite potential function, which is not physically realistic. In fact, what happens in the (real-world) finite square well is that the wave function inside the potential barrier (that is, just off the ends of the well) is not zero, but a decaying exponential which tends to zero the further into the barrier you go. In that case, it is possible to make both the wave function and its first derivative continuous at both ends of the well (and it is precisely that condition which makes the mathematics so much more complicated in the finite square well). In fact, this effect happens in any potential where the energy of the particle is less than that of a (finite) potential barrier: the particle’s wave function extends into the barrier region. So does that mean that the particle has a probability of appearing inside a barrier? Technically yes, but in practice it usually doesn’t do the particle much good, since the probability of being outside the barrier is usually a lot greater.
However, there is one case where this barrier penetration effect really matters, and that is when the barrier is thin enough for the wave function to have a significant magnitude on the other side of the barrier. That is, if we have a particle in a finite well, but the wall of the well is fairly thin and there is another well (or just open space) on the other side of the barrier, then the wave function starts off with a respectable magnitude inside the main well, extends into the barrier (but gets attenuated exponentially in doing so), and, before the attenuation gets so severe that the wave function becomes very small, it bursts through to the other side of the barrier. That means that, yes, there is a definite probability that the particle can spontaneously appear outside the well without having to jump over the barrier. In effect, it tunnels through the barrier and escapes. The effect, not surprisingly, is known as quantum tunneling and is the mechanism behind some forms of radioactive decay. But that’s a topic for another post.
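As a quick numerical check of the results above (my own addition, not part of the original post), here is a short Python sketch that evaluates $E_n = n^2\pi^2\hbar^2/(2ma^2)$ for an electron in a well of width $a = 1$ nm and verifies numerically that the $\psi_n$ are normalized. The well width and the function names are illustrative assumptions.

import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron volt

def energy(n, m=M_E, a=1e-9):
    """n-th energy level of the infinite square well, in joules."""
    return (n * np.pi * HBAR) ** 2 / (2 * m * a ** 2)

def psi(n, x, a=1e-9):
    """Normalized stationary state psi_n(x) inside the well."""
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

a = 1e-9
x = np.linspace(0, a, 10001)
for n in (1, 2, 3):
    norm = np.trapz(psi(n, x, a) ** 2, x)   # should be ~1
    print(f"E_{n} = {energy(n, a=a) / EV:.3f} eV, norm = {norm:.6f}")

For an electron in a 1 nm well this gives $E_1 \approx 0.376$ eV, with $E_2 = 4E_1$ and $E_3 = 9E_1$, showing the $n^2$ spacing of the levels.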
http://physicspages.com/2011/01/26/the-infinite-square-well-particle-in-a-box/
Chapter 4: Scripts and Explicit Functions

What is called a "script" is a sequence of lines of J where the whole sequence can be replayed on demand to perform a computation. The themes of this chapter are scripts, functions defined by scripts, and scripts in files.

Here is an assignment to the variable txt:

   txt =: 0 : 0
What is called a "script"
is a sequence of lines of J.
)

The expression 0 : 0 means "as follows", that is, 0 : 0 is a verb which takes as its argument, and delivers as its result, whatever lines are typed following it, down to the line beginning with the solo right-parenthesis. The value of txt is these two lines, in a single character string. The string contains line-feed (LF) characters, which cause txt to be displayed as several lines. txt has a certain length, it is rank 1 (that is, just a list), and it contains 2 line-feed characters.

   txt
What is called a "script"
is a sequence of lines of J.

4.2 Scripts for Procedures

Here we look at computations described as step-by-step procedures to be followed. For a very simple example, the Fahrenheit-to-Celsius conversion can be described in two steps. Given some temperature T, say, in degrees Fahrenheit:

   T =: 212

then the first step is subtracting 32. Call the result t, say:

   t =: T - 32

The second step is multiplying t by 5%9 to give the temperature in degrees Celsius.

   t * 5 % 9
100

Suppose we intend to perform this computation several times with different values of T. We could record this two-line procedure as a script which can be replayed on demand. The script consists of the lines of J stored in a text variable, thus:

   script =: 0 : 0
t =: T - 32
t * 5 % 9
)

Scripts like this can be executed with the built-in J verb 0 !: 111, which we can call, say, do.

   do =: 0 !: 111
   do script

We should now see the lines on the screen just as though they had been typed in from the keyboard:

t =: T - 32
t * 5 % 9
100

We can run the script again with a different value for T.

4.3 Explicitly-Defined Functions

Functions can be defined by scripts. Here is an example, the Fahrenheit-to-Celsius conversion as a verb.

   Celsius =: 3 : 0
t =: y - 32
t * 5 % 9
)

Let us look at this definition more closely. The function is introduced with the expression 3 : 0 which means: "a verb as follows". (By contrast, recall that 0 : 0 means "a character string as follows"). The colon in 3 : 0 is a conjunction. Its left argument (3) means "verb". Its right argument (0) means "lines following". For more details, see Chapter 12. A function introduced in this way is called "explicitly-defined", or just "explicit". The expression (Celsius 32 212) applies the verb Celsius to the argument 32 212, by carrying out a computation which can be described, or modelled, like this:

   y =: 32 212
   t =: y - 32
   t * 5 % 9
0 100

Notice that, after the first line, the computation proceeds according to the script.

4.3.3 Argument Variable(s)

The value of the argument (32 212) is supplied to the script as a variable named y. This "argument variable" is named y in a monadic function. (In a dyadic function, as we shall see below, the left argument is named x and the right is y.)

4.3.4 Local Variables

Here is our definition of Celsius repeated:

   Celsius =: 3 : 0
t =: y - 32
t * 5 % 9
)

We see it contains an assignment to a variable t. This variable is used only during the execution of Celsius. Unfortunately this assignment to t interferes with the value of any other variable also called t, defined outside Celsius, which we happen to be using at the time.
To demonstrate:

   t =: 'hello'
   Celsius 212
100
   t
180

We see that the variable t with original value ('hello') has been changed in executing Celsius. To avoid this undesirable effect, we declare that t inside Celsius is to be a strictly private affair, distinct from any other variable called t. For this purpose there is a special form of assignment, with the symbol =. (equal dot). Our revised definition becomes:

   Celsius =: 3 : 0
t =. y - 32
t * 5 % 9
)

and we say that t in Celsius is a local variable, or that t is local to Celsius. By contrast, a variable defined outside a function is said to be global. Now we can demonstrate that in Celsius assignment to local variable t does not affect any global variable t:

   t =: 'hello'
   Celsius 212
100
   t
hello

The argument-variable y is also a local variable. Hence the evaluation of (Celsius 32 212) is more accurately modelled by the computation:

   y =. 32 212
   t =. y - 32
   t * 5 % 9
0 100

4.3.5 Dyadic Verbs

Celsius is a monadic verb, introduced with 3 : 0 and defined in terms of the single argument y. By contrast, a dyadic verb is introduced with 4 : 0. The left and right arguments are always named x and y respectively. Here is an example. The "positive difference" of two numbers is the larger minus the smaller.

   posdiff =: 4 : 0
larger =. x >. y
smaller =. x <. y
larger - smaller
)

A one-line script can be written as a character string, and given as the right argument of the colon conjunction.

4.3.7 Control Structures

In the examples we have seen so far of functions defined by scripts, execution begins with the expression on the first line, proceeds to the next line, and so on to the last. This straight-through path is not the only path possible. A choice can be made as to which expression to execute next. For an example, here is a function to compute a volume from given length, width and height. Suppose the function is to check that its argument is given correctly as a list of 3 items (length, width and height). If so, a volume is computed. If not, the result is to be the character-string 'ERROR'.

   volume =: 3 : 0
if. 3 = # y do.
  */ y
else.
  'ERROR'
end.
)

Everything from if. to end. together forms what is called a "control structure". Within it if. do. else. and end. are called "control words". See Chapter 12 for more on control structures.

4.4 Tacit and Explicit Compared

We have now seen two different styles of function definition. The explicit style, introduced in this chapter, is so called because it explicitly mentions variables standing for arguments. Thus in volume above, the variable y is an explicit mention of an argument. By contrast, the style we looked at in the previous chapter is called "tacit", because there is no mention of variables standing for arguments. For example, compare explicit and tacit definitions of the positive-difference function:

   epd =: 4 : '(x >. y) - (x <. y)'
   tpd =: >. - <.

Many functions defined in the tacit style can also be defined explicitly, and vice versa. Which style is preferable depends on what seems most natural, in the light of however we conceive the function to be defined. The choice lies between breaking down the problem into, on the one hand, a scripted sequence of steps or, on the other hand, a collection of smaller functions. The tacit style allows a compact definition. For this reason, tacit functions lend themselves well to systematic analysis and transformation. Indeed, the J system can, for a broad class of tacit functions, automatically compute such transformations as inverses and derivatives.
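As a quick check that the three definitions agree (an illustrative session of my own, not from the original text), we can apply posdiff, epd, and tpd to the same arguments:

   3 posdiff 10
7
   3 epd 10
7
   3 tpd 10
7

Each computes the larger of 3 and 10 minus the smaller.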
4.5 Functions as Values

A function is a value, and a value can be displayed by entering an expression. An expression can be as simple as a name. Here are some values of tacit and explicit functions:

   - & 32
+-+-+--+
|-|&|32|
+-+-+--+
   epd
+-+-+-------------------+
|4|:|(x >. y) - (x <. y)|
+-+-+-------------------+
   Celsius
+-+-+-----------+
|3|:|t =. y - 32|
| | |t * 5 % 9  |
+-+-+-----------+

The value of each function is here represented as a boxed structure. Other representations are available: see Chapter 27.

4.6 Script Files

We have seen scripts (lines of J) used for definitions of single variables: text variables or functions. By contrast, a file holding lines of J as text can store many definitions. Such a file is called a script file, and its usefulness is that all its definitions together can be executed by reading the file. Here is an example. Using a text-editor of your choice, create a file on your computer, containing 2 lines of text like the following:

squareroot =: %:
z =: 1 , (2+2) , (4+5)

A J script file has a filename ending with .ijs by convention, so suppose the file is created (in Windows) with the full pathname c:\temp\myscript.ijs for example. Then in the J session it will be convenient to identify the file by defining a variable F say to hold this filename as a string.

   F =: 'c:\temp\myscript.ijs'

Having created this 2-line script file, we can execute it by typing at the keyboard:

   0!:1 < F

and we should now see the lines on the screen just as though they had been typed from the keyboard:

squareroot =: %:
z =: 1 ,(2+2), (4+5)

We can now compute with the definitions we have just loaded in from the file:

   z
1 4 9
   squareroot z
1 2 3

The activities in a J session will typically be a mixture of editing script files, loading or reloading the definitions from script files, and initiating computations at the keyboard. What carries over from one session to another is only the script files. The state, or memory, of the J system itself disappears at the end of the session, along with all the definitions entered during the session. Hence it is a good idea to ensure, before ending a J session, that any script file is up to date, that is, it contains all the definitions you wish to preserve. At the beginning of a session the J system will automatically load a designated script file, called the "profile". (See Chapter 26 for more details). The profile can be edited, and is a good place to record any definitions of your own which you find generally useful. We have now come to the end of Chapter 4 and of Part 1. The following chapters will treat, in more depth and detail, the themes we have touched upon in Part 1.

This chapter last updated 25 Mar 2006. The examples in this chapter were executed using J version 601 beta. Copyright © Roger Stokes 2006. This material may be freely reproduced, provided that this copyright notice is also reproduced.
http://jsoftware.com/help/learning/04.htm
If you are not familiar with the Pythagorean theorem, please read this article. It is important that you are familiar with this theorem because finding the length of a vector is based on it. This picture shows a 2-dimensional vector with red lines overlaid to show the x and y displacement. From this picture you should be able to infer how the length is obtained. You square the x and y terms of the vector and then take the square root of their sum. In pseudo code it would look like this:

sqrt( x*x + y*y )

Or, in three dimensions:

sqrt( x*x + y*y + z*z )

The length of the vector is sometimes called the norm. If we have a vector A, you will often see the norm of vector A notated as ||A|| or just |A|. There is a special class of vectors that will be VERY important to us. These are called unit length vectors, and any vector that is unit length is said to be normalized. In order for a vector to qualify as unit length it must have a length of exactly 1. We can normalize any vector by first finding its length and then dividing each component of the vector by the length (norm). In three dimensions it would look like this:

length = sqrt( x*x + y*y + z*z )
x = x / length
y = y / length
z = z / length

If the vector were only 2-dimensional you would take out the z component. We will now discuss the most important and widely used operation on vectors in all of game programming - the dot product. Given two vectors A and B, we define the dot product of A and B as ||A|| ||B|| cos(theta). Theta is simply the angle between the two vectors. For unit length vectors the dot product returns a scalar value between -1 and 1; in general the value lies between -||A|| ||B|| and ||A|| ||B||. We compute the dot product by multiplying the vectors' corresponding components and summing the products. So, if we have 2-dimensional vectors A and B, we find the dot product like so:

A.x*B.x + A.y*B.y

If the dot product of A and B is greater than zero, then we know the angle between them is less than 90 degrees. If the dot product is equal to zero, then we know the two vectors are perpendicular, or orthogonal, to one another. If the value is less than zero, we know the angle is greater than 90 degrees. If you need the actual angle between the two vectors, divide the dot product by ||A|| ||B|| and take the inverse cosine of the result. We will now move on to the perpendicular product and the cross product, two closely related concepts. For a vector A in 2 dimensions, there are two vectors that are exactly perpendicular to that vector. This picture will help demonstrate. The green vector is vector A. We say that the blue vector is the left hand normal of vector A and the red vector is the right hand normal of vector A. If we were to take the dot product of vector A and either the left hand normal or the right hand normal, it would be exactly 0. For any 2-dimensional vector, to find the left hand normal, swap the x and y components and negate the component that was x. In pseudo code it would look like this:

left_normal.x = vector.y
left_normal.y = -vector.x

The right hand normal is similar:

right_normal.x = -vector.y
right_normal.y = vector.x

The cross product is similar to the perpendicular product, but it is used in 3 dimensions. Given three points A, B, C, we can construct 2 vectors from these points. We will name the vectors V1 and V2. The following diagram will help in my explanation. Vector V1 is constructed by subtracting point A from point B. Vector V2 is constructed the same way, by subtracting point A from point C.
Similar to the perpendicular product, we can get two different normals that will be orthogonal to the plane constructed from the original 3 points. It is common to choose the normal that faces out of the plane. We calculate the normal vector like so:

normal.x = (V1.y * V2.z) - (V2.y * V1.z)
normal.y = (V1.z * V2.x) - (V2.z * V1.x)
normal.z = (V1.x * V2.y) - (V2.x * V1.y)

The normal vector is represented in the picture by the blue vector which is facing out. The cross product is usually written as normal = V1 x V2. We now move on to our last topic, which is vector projection. Have a look at this picture: let's say the red vector is vector A and the green vector is vector B. We define the projection of A onto B as:

Projection = Dot(A, B) / (||B|| * ||B||) * B

The projection vector will be parallel to vector B. Notice the projection drops a perpendicular onto B from A. If vector B is normalized, we can reduce the projection calculation to:

Projection = Dot(A, B) * B

This is the end of this tutorial. If you have read over both tutorials you should have a good understanding of vectors and be able to start applying them to your game projects.
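Collecting the pseudo code above into one place, here is a small Python sketch of the operations covered in this tutorial (the function names are my own; the tutorial itself uses language-neutral pseudo code):

import math

def length(v):
    """Euclidean length (norm) of a vector of any dimension."""
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    """Scale v to unit length by dividing each component by the norm."""
    n = length(v)
    return tuple(c / n for c in v)

def dot(a, b):
    """Sum of products of corresponding components."""
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """Angle in radians, from the dot product definition."""
    return math.acos(dot(a, b) / (length(a) * length(b)))

def left_normal(v):
    """Left hand normal of a 2D vector."""
    x, y = v
    return (y, -x)

def cross(v1, v2):
    """Cross product of two 3D vectors."""
    return (v1[1] * v2[2] - v2[1] * v1[2],
            v1[2] * v2[0] - v2[2] * v1[0],
            v1[0] * v2[1] - v2[0] * v1[1])

def project(a, b):
    """Projection of a onto b."""
    s = dot(a, b) / dot(b, b)
    return tuple(s * c for c in b)

For example, dot((1, 0), (0, 1)) returns 0, confirming that the two axes are perpendicular, and project((2, 3), (1, 0)) returns (2.0, 0.0), a vector parallel to (1, 0) as expected.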
http://gameprogrammingtutorials.blogspot.com/2009/11/vectors-part-two.html
Florida Math Standards - 5th Grade

MathScore aligns to the Florida Math Standards for 5th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but also a challenging game-like experience.

Number Sense, Concepts, and Operations

Benchmark MA.A.1.2.1: The student names whole numbers combining 3-digit numeration (hundreds, tens, ones) and the use of number periods, such as ones, thousands, and millions, and associates verbal names, written word names, and standard numerals with whole numbers, commonly used fractions, decimals, and percents.
1. Reads, writes, and identifies whole numbers, fractions, and mixed numbers. (Fraction Pictures)
2. Reads, writes, and identifies decimals through thousandths. (Decimal Place Value)
3. Reads, writes, and identifies common percents including 10%, 20%, 25%, 30%, 40%, 50%, 60%, 70%, 75%, 80%, 90%, and 100%. (Percentage Pictures)

Benchmark MA.A.1.2.2: The student understands the relative size of whole numbers, commonly used fractions, decimals, and percents.
1. Uses symbols (>, <, =) to compare numbers in the same and different forms such as 0.5 < 3/4. (Number Comparison, Order Numbers, Order Large Numbers, Compare Mixed Values)
2. Compares and orders whole numbers using concrete materials, number lines, drawings, and numerals. (Order Numbers, Order Large Numbers)
3. Compares and orders commonly used fractions, percents, and decimals to thousandths using concrete materials, number lines, drawings, and numerals. (Order Decimals, Compare Mixed Values, Positive Number Line, Fraction Comparison, Compare Decimals)
4. Locates whole numbers, fractions, mixed numbers, and decimals on the same number line. (Positive Number Line)

Benchmark MA.A.1.2.3: The student understands concrete and symbolic representations of whole numbers, fractions, decimals, and percents in real-world situations.
1. Translates problem situations into diagrams, models, and numerals using whole numbers, fractions, mixed numbers, decimals, and percents.

Benchmark MA.A.1.2.4: The student understands that numbers can be represented in a variety of equivalent forms using whole numbers, decimals, fractions, and percents.
1. Knows that numbers in different forms are equivalent or nonequivalent, using whole numbers, decimals, fractions, mixed numbers, and percents. (Basic Fraction Simplification, Fraction Simplification, Fractions to Decimals, Decimals To Fractions, Percentages, Percentage Pictures)

Benchmark MA.A.2.2.1: The student uses place-value concepts of grouping based upon powers of ten (thousandths, hundredths, tenths, ones, tens, hundreds, thousands) within the decimal number system.
1. Knows that place value relates to powers of 10.
2. Expresses numbers to millions or more in expanded form using powers of ten, with or without exponential notation.

Benchmark MA.A.2.2.2: The student recognizes and compares the decimal number system to the structure of other number systems such as the Roman numeral system or bases other than ten.
1. Explains the similarities and differences between the decimal (base 10) number system and other number systems that do or do not use place value.

Benchmark MA.A.3.2.1: The student understands and explains the effects of addition, subtraction, and multiplication on whole numbers, decimals, and fractions, including mixed numbers, and the effects of division on whole numbers, including the inverse relationship of multiplication and division.
1. Explains and demonstrates the multiplication of common fractions using concrete materials, drawings, story problems, symbols, and algorithms. (Fraction Multiplication)
2. Explains and demonstrates the multiplication of decimals to hundredths using concrete materials, drawings, story problems, symbols, and algorithms. (Money Multiplication)
3. Predicts the relative size of solutions in the following:
a. addition, subtraction, multiplication, and division of whole numbers
b. addition, subtraction, and multiplication of fractions, decimals, and mixed numbers, with particular attention given to fraction and decimal multiplication (for example, when two numbers less than one are multiplied, the result is a number less than either factor)
4. Explains and demonstrates the inverse nature of multiplication and division, with particular attention to multiplication by a fraction (for example, multiplying by 1/4 yields the same result as dividing by 4). (Fraction Division)
5. Explains and demonstrates the commutative, associative, and distributive properties of multiplication. (Associative Property 2, Commutative Property 2, Distributive Property, Basic Distributive Property)

Benchmark MA.A.3.2.2: The student selects the appropriate operation to solve specific problems involving addition, subtraction, and multiplication of whole numbers, decimals, and fractions, and division of whole numbers.
1. Uses problem-solving strategies to determine the operation(s) needed to solve one- and two-step problems involving addition, subtraction, multiplication, and division of whole numbers, and addition, subtraction, and multiplication of decimals and fractions. (Multiplication By One Digit, Long Multiplication, Long Division By One Digit, Long Division, Division with Remainders, Long Division with Remainders, Small Decimal Division, Fraction Addition, Fraction Subtraction, Fraction Multiplication, Decimal Addition, Decimal Subtraction, Decimal Multiplication)

Benchmark MA.A.3.2.3: The student adds, subtracts, and multiplies whole numbers, decimals, and fractions, including mixed numbers, and divides whole numbers to solve real-world problems, using appropriate methods of computing, such as mental mathematics, paper and pencil, and calculator.
1. Solves real-world problems involving addition, subtraction, multiplication, and division of whole numbers, and addition, subtraction, and multiplication of decimals, fractions, and mixed numbers using an appropriate method (for example, mental math, pencil and paper, calculator). (Arithmetic Word Problems, Basic Word Problems 2, Making Change, Making Change 2, Money Multiplication, Counting Money, Unit Cost, Fraction Word Problems, Fraction Word Problems 2)

Benchmark MA.A.4.2.1: The student uses and justifies different estimation strategies in a real-world problem situation and determines the reasonableness of results of calculations in a given problem situation.
1. Chooses, describes, and explains estimation strategies used to determine the reasonableness of solutions to real-world problems. (Rounding Numbers, Money Addition, Money Subtraction)
2. Estimates quantities of objects to 1000 or more and justifies and explains the reasoning for the estimate (for example, using benchmark numbers, unitizing).

Benchmark MA.A.5.2.1: The student understands and applies basic number theory concepts, including primes, composites, factors, and multiples.
1. Finds factors of numbers to 100 to determine if they are prime or composite. (Prime Numbers, Factoring)
2. Expresses a whole number as a product of its prime factors. (Prime Factoring)
3. Determines the greatest common factor of two numbers. (Greatest Common Factor)
4. Determines the least common multiple of two numbers up to 100 or more. (Least Common Multiple)
5. Multiplies by powers of 10 (100, 1,000, and 10,000) demonstrating patterns. (Multiply By Multiples Of 10)
6. Identifies and applies rules of divisibility for 2, 3, 4, 5, 6, 9, and 10. (Divisibility Rules)
7. Uses models to identify perfect squares to 144. (Perfect Squares)

Benchmark MA.B.1.2.1: The student uses concrete and graphic models to develop procedures for solving problems related to measurement including length, weight, time, temperature, perimeter, area, volume, and angle.
1. Knows measurement concepts and can use oral and written language to communicate them.
2. Extends conceptual experiences into patterns to develop formulas for determining perimeter, area, and volume.
3. Knows varied units of time that include centuries and seconds.
4. Classifies angle measures as acute, obtuse, right, or straight.
5. Investigates measures of circumference using concrete materials (for example, uses string or measuring tape to measure the circumference of cans or bottles). (Requires outside materials)

Benchmark MA.B.1.2.2: The student solves real-world problems involving length, weight, perimeter, area, capacity, volume, time, temperature, and angles.
1. Solves real-world problems involving measurement of the following:
a. length (for example, eighth-inch, kilometer, mile)
b. weight or mass (for example, milligram, ton)
c. temperature (comparing temperature changes within the same scale using either a Fahrenheit or a Celsius thermometer)
d. angles (acute, obtuse, straight)
2. Solves real-world problems involving perimeter, area, capacity, and volume using concrete, graphic or pictorial models.
3. Uses schedules, calendars, and elapsed time to solve real-world problems.

Benchmark MA.B.2.2.1: The student uses direct (measured) and indirect (not measured) measures to calculate and compare measurable characteristics.
1. Finds the length or height of "hard-to-reach" objects by using the measure of a portion of the objects (for example, find the height of a room or building by finding the height of one block or floor and multiplying by the number of blocks or floors).
2. Uses customary and metric units to compare length, weight or mass, and capacity or volume.
3. Uses multiplication and division to convert units of measure within the customary or metric system. (Distance Conversion, Time Conversion, Volume Conversion, Weight Conversion, Temperature Conversion)

Benchmark MA.B.2.2.2: The student selects and uses appropriate standard and nonstandard units of measurement, according to type and size.
1. Knows an appropriate unit of measure to determine the dimension(s) of a given object (for example, standard - student chooses feet or yards instead of inches to measure a room; nonstandard - student chooses a length of yarn instead of a pencil to measure a room).
2. Knows an appropriate unit of measure (standard or nonstandard) to measure weight, mass, and capacity.

Benchmark MA.B.3.2.1: The student solves real-world problems involving estimates of measurements, including length, time, weight, temperature, money, perimeter, area, and volume.
1. Knows how to determine whether an accurate or estimated measurement is needed for a solution.
2. Solves real-world problems involving estimated measurements, including the following:
a. length to nearest quarter-inch, centimeter
b. weight to nearest ounce, gram
c. time to nearest one-minute interval
d. temperature to nearest five-degree interval
e. money to nearest $1.00
3. Knows how to estimate the area and perimeter of regular and irregular polygons.
4. Knows how to estimate the volume of a rectangular prism.

Benchmark MA.B.4.2.1: The student determines which units of measurement, such as seconds, square inches, dollars per tankful, to use with answers to real-world problems.
1. Selects an appropriate measurement unit for labeling the solution to real-world problems. (Unit Cost, Perimeter and Area Word Problems)

Benchmark MA.B.4.2.2: The student selects and uses appropriate instruments and technology, including scales, rulers, thermometers, measuring cups, protractors, and gauges, to measure in real-world situations.
1. Selects and uses the appropriate tool for situational measures (for example, measuring sticks, scales and balances, thermometer, measuring cups, gauges, protractors).

Geometry and Spatial Sense

Benchmark MA.C.1.2.1: The student, given a verbal description, draws and/or models two- and three-dimensional shapes and uses appropriate geometric vocabulary to write a description of a figure or a picture composed of geometric figures.
1. Uses appropriate geometric vocabulary to describe properties and attributes of two- and three-dimensional figures (for example, obtuse and acute angles; radius; equilateral, scalene, and isosceles triangles). (Circle Measurements, Triangle Types)
2. Draws and classifies two-dimensional figures having up to ten or more sides and three-dimensional figures (for example, cubes, rectangular prisms, pyramids). (Polygon Names)
3. Knows the characteristics of and relationships among points, lines, line segments, rays, and planes.

Benchmark MA.C.2.2.1: The student understands the concepts of spatial relationships, symmetry, reflections, congruency, and similarity.
1. Uses manipulatives to solve problems requiring spatial visualization.
2. Knows symmetry, congruency, and reflections in geometric figures.
3. Knows how to justify that two figures are similar or congruent. (Congruent And Similar Triangles)

Benchmark MA.C.2.2.2: The student predicts, illustrates, and verifies which figures could result from a flip, slide, or turn of a given figure.
1. Identifies and performs flips, slides, and turns given angle (90°, 180°, 270°) and direction (clockwise or counterclockwise) of turn.
2. Knows the effect of a flip, slide or turn (90°, 180°, 270°) on a geometric figure.
3. Explores tessellations.

Benchmark MA.C.3.2.1: The student represents and applies a variety of strategies and geometric properties and formulas for two- and three-dimensional shapes to solve real-world and mathematical problems.
1. Compares the concepts of area, perimeter, and volume using concrete materials (for example, geoboards, grid paper) and real-world situations (for example, tiling a floor, bordering a room, packing a box).
2. Applies the concepts of area, perimeter, and volume to solve real-world and mathematical problems using student-developed formulas. (Perimeter and Area Word Problems)
3. Knows how area and perimeter are affected when geometric figures are combined, rearranged, enlarged, or reduced (for example, what happens to the area of a square when the sides are doubled?). (Area And Volume Proportions)

Benchmark MA.C.3.2.2: The student identifies and plots positive ordered pairs (whole numbers) in a rectangular coordinate system (graph).
1. Knows how to identify, locate, and plot ordered pairs of whole numbers on a graph or on the first quadrant of a coordinate system. (Ordered Pairs)

Benchmark MA.D.1.2.1: The student describes a wide variety of patterns and relationships through models, such as manipulatives, tables, graphs, rules using algebraic symbols.
1. Describes, extends, creates, predicts, and generalizes numerical and geometric patterns using a variety of models (for example, lists, tables, graphs, charts, diagrams, calendar math).
2. Poses and solves problems by identifying a predictable visual or numerical pattern, such as:
Day             1  2  3  4 ... n
Number of Calls 4  7 10  ?     ?
(Patterns: Numbers, Patterns: Shapes)
3. Explains and expresses numerical relationships and pattern generalizations, using algebraic symbols (for example, in the problem above, the number of calls on the nth day can be expressed as 3n+1). (Function Tables, Function Tables 2)

Benchmark MA.D.1.2.2: The student generalizes a pattern, relation, or function to explain how a change in one quantity results in a change in another.
1. Knows mathematical relationships in patterns (for example, Fibonacci numbers: 1, 1, 2, 3, 5, 8, ...).
2. Analyzes and generalizes number patterns and states the rule for relationships (for example, 1, 4, 9, 16, ... the rule: +3, +5, +7, ... or "squares of the whole numbers"). (Function Tables, Function Tables 2)
3. Applies the appropriate rule to complete a table or a chart, such as:
IN  1 2 3 9
OUT 1 4 9 ?
(Function Tables, Function Tables 2)

Benchmark MA.D.2.2.1: The student represents a given simple problem situation using diagrams, models, and symbolic expressions translated from verbal phrases, or verbal phrases translated from symbolic expressions, etc.
1. Solves problems involving simple equations or inequalities using diagrams or models, symbolic expressions, or written phrases. (Missing Factor, Missing Term, Missing Operator, Compare Expressions, Arithmetic Word Problems, Basic Word Problems 2, Single Variable Inequalities, Algebraic Word Problems)
2. Uses a variable to represent a given verbal expression (for example, 5 more than a number is n + 5). (Phrases to Algebraic Expressions)
3. Translates equations into verbal and written problem situations.

Benchmark MA.D.2.2.2: The student uses informal methods, such as physical models and graphs, to solve real-world problems involving equations and inequalities.
1. Uses concrete or pictorial models and graphs (for example, drawings, number lines) to solve equations or inequalities.
2. Uses information from concrete or pictorial models or graphs to solve problems.

Data Analysis and Probability

Benchmark MA.E.1.2.1: The student solves problems by generating, collecting, organizing, displaying, and analyzing data using histograms, bar graphs, circle graphs, line graphs, pictographs, and charts.
1. Knows which types of graphs are appropriate for different kinds of data (for example, bar graphs, line, or circle graphs).
2. Interprets and compares information from different types of graphs, including graphs from content-area materials and periodicals. (Tally and Pictographs, Bar Graphs, Line Graphs)
3. Chooses reasonable titles, labels, scales and intervals for organizing data on graphs.
4. Generates questions, collects responses, and displays data on a graph.
5. Interprets and completes circle graphs using common fractions or percents.
6. Analyzes and explains orally or in writing the implications of graphed data.

Benchmark MA.E.1.2.2: The student determines range, mean, median, and mode from sets of data.
1. Uses a stem-and-leaf plot from a set of data to identify the range, median, mean, and mode.
2. Uses range and measures of central tendency in real-world situations. (Mean, Median, Mode)

Benchmark MA.E.1.2.3: The student analyzes real-world data to recognize patterns and relationships of the measures of central tendency using tables, charts, histograms, bar graphs, line graphs, pictographs, and circle graphs generated by appropriate technology, including calculators and computers.
1. Uses a calculator to determine the range and mean of a set of data.
2. Uses computer applications to examine and evaluate data.
3. Uses computer applications to construct labeled graphs.
4. Uses computer-generated spreadsheets to record and display real-world data.

Benchmark MA.E.2.2.1: The student uses models, such as tree diagrams, to display possible outcomes and to predict events.
1. Determines the number of possible combinations of given items and displays them in an organized way.
2. Represents all possible outcomes for a simple probability situation or event using models such as organized lists, charts, or tree diagrams.
3. Calculates the probability of a particular event occurring from a set of all possible outcomes. (Probability)

Benchmark MA.E.2.2.2: The student predicts the likelihood of simple events occurring.
1. Identifies and records the possible outcomes of an experiment using concrete materials (for example, spinners, marbles, number cubes).
2. Explains and predicts which outcomes are most likely to occur and expresses the probabilities as fractions. (Probability)
3. Conducts experiments to test predictions.

Benchmark MA.E.3.2.1: The student designs experiments to answer class or personal questions, collects information, and interprets the results using statistics (range, mean, median, and mode) and pictographs, charts, bar graphs, circle graphs, and line graphs.
1. Designs a survey to collect data.
2. As a class project, discusses ways to choose a sample representative of a large group, such as a sample representative of the entire school.
3. Creates an appropriate graph to display data, including titles, labels, scales, and intervals.
4. Interprets the results using statistics (range and measures of central tendency).

Benchmark MA.E.3.2.2: The student uses statistical data about life situations to make predictions and justifies reasoning.
1. Uses statistical data to predict trends.
2. Applies statistical data to make generalizations.
3. Justifies and explains generalizations.
Global Positioning System

The Global Positioning System (GPS) is currently the only fully functional Global Navigation Satellite System (GNSS). More than two dozen GPS satellites are in medium Earth orbit, transmitting signals that allow GPS receivers to determine their location, speed, and direction. Since the first experimental satellite was launched in 1978, GPS has become an indispensable aid to navigation around the world, and an important tool for map-making and land surveying. GPS also provides a precise time reference used in many applications, including the scientific study of earthquakes and the synchronization of telecommunications networks.

Developed by the United States Department of Defense, it is officially named NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System). The satellite constellation is managed by the United States Air Force 50th Space Wing. The cost of maintaining the system is approximately US$750 million per year, including the replacement of aging satellites, and research and development. Despite this cost, GPS is free for civilian use as a public good.

Simplified method of operation

A GPS receiver calculates its position by measuring the distance between itself and three or more GPS satellites. Measuring the time delay between transmission and reception of each GPS radio signal gives the distance to each satellite, since the signal travels at a known speed. The signals also carry information about the satellites' location. By determining the position of, and distance to, at least three satellites, the receiver can compute its position using trilateration, the determination of a position from distances to three known points. Receivers typically do not have perfectly accurate clocks and therefore track one or more additional satellites to correct the receiver's clock error.

The current GPS consists of three major segments: the space segment (SS), a control segment (CS), and a user segment (US). The SS is composed of the orbiting GPS satellites, or Space Vehicles (SV) in GPS parlance. The GPS design calls for 24 SVs to be distributed equally among six circular orbital planes. The orbital planes are centered on the Earth, not rotating with respect to the distant stars. The six planes have approximately 55° inclination (tilt relative to Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). Orbiting at an altitude of approximately 20,200 kilometers (12,600 miles or 10,900 nautical miles), each SV makes two complete orbits each sidereal day (the length of time for Earth to make a full rotation with respect to a fixed star, namely 23 hours, 56 minutes, and 4.1 seconds), so it passes over the same location on Earth once each day. The orbits are arranged so that at least six satellites are always within line of sight from almost anywhere on Earth. As of February 2007, there are 30 actively broadcasting satellites in the GPS constellation. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve the reliability and availability of the system, relative to a uniform system, when multiple satellites fail.
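As a rough check on the orbit figures above (my arithmetic, not part of the original article), Kepler's third law gives the period of a circular orbit at that altitude. Adding Earth's mean radius of roughly 6,378 km to the 20,200 km altitude:

$$a \approx 26{,}578\ \mathrm{km}, \qquad T = 2\pi\sqrt{\frac{a^{3}}{\mu}} = 2\pi\sqrt{\frac{(2.6578\times10^{7}\ \mathrm{m})^{3}}{3.986\times10^{14}\ \mathrm{m^{3}/s^{2}}}} \approx 43{,}100\ \mathrm{s} \approx 11\ \mathrm{h}\ 58\ \mathrm{min},$$

which is half a sidereal day, consistent with two complete orbits per sidereal day.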
The flight paths of the satellites are tracked by US Air Force monitoring stations in Hawaii, Kwajalein, Ascension Island, Diego Garcia, and Colorado Springs, Colorado, along with monitor stations operated by the National Geospatial-Intelligence Agency (NGA). The tracking information is sent to the Air Force Space Command's master control station at Schriever Air Force Base, Colorado Springs, Colorado, which is operated by the 2d Space Operations Squadron (2 SOPS) of the United States Air Force (USAF). 2 SOPS contacts each GPS satellite regularly with a navigational update (using the ground antennas at Ascension Island, Diego Garcia, Kwajalein, and Colorado Springs). These updates synchronize the atomic clocks on board the satellites to within one microsecond and adjust the ephemeris (table of positions of celestial bodies) of each satellite's internal orbital model. The updates are created by a Kalman filter, which uses inputs from the ground monitoring stations, space weather information, and various other inputs.

The user's GPS receiver is the user segment (US) of the GPS system. In general, GPS receivers are composed of an antenna tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user. A receiver is often described by its number of channels: this signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this number has progressively increased over the years so that, as of 2006, receivers typically have between twelve and twenty channels.

Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. NMEA 2000 is a newer and less widely adopted protocol. Both are proprietary and controlled by the US-based National Marine Electronics Association. References to the NMEA protocols have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as one from SiRF Technology Inc. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.

GPS satellites broadcast three different types of data in the primary navigation signal. The first is the almanac, which sends coarse time information along with status information about the satellites. The second is the ephemeris, which contains orbital information that allows the receiver to calculate the position of the satellite. This data is included in the 37,500-bit Navigation Message, which takes 12.5 minutes to send at 50 bps. The satellites also broadcast two forms of clock information: the Coarse/Acquisition code, or C/A code, which is freely available to the public, and the restricted Precise code, or P-code, usually reserved for military applications. The C/A code is a 1,023-bit-long pseudo-random code broadcast at 1.023 MHz, repeating every millisecond. Each satellite sends a distinct C/A code, which allows it to be uniquely identified. The P-code is a similar code broadcast at 10.23 MHz, but it repeats only once a week. In normal operation, the so-called "anti-spoofing mode" (in GPS, spoofing means faking a signal), the P-code is first encrypted into the Y-code, or P(Y), and can then be decrypted only by units with a valid decryption key.
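The figures quoted above are self-consistent, which is easy to verify (my arithmetic, not from the original article): the Navigation Message and the C/A code repeat on exactly the stated schedules.

$$\frac{37{,}500\ \mathrm{bits}}{50\ \mathrm{bits/s}} = 750\ \mathrm{s} = 12.5\ \mathrm{min}, \qquad \frac{1{,}023\ \mathrm{chips}}{1.023\times10^{6}\ \mathrm{chips/s}} = 1\ \mathrm{ms}.$$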
Frequencies used by GPS include:
- L1 (1575.42 MHz): Mix of Navigation Message, coarse-acquisition (C/A) code, and encrypted precision P(Y) code.
- L2 (1227.60 MHz): P(Y) code, plus the new L2C code on the Block IIR-M and newer satellites.
- L3 (1381.05 MHz): Used by the Defense Support Program to signal detection of missile launches, nuclear detonations, and other high-energy infrared events.
- L4 (1379.913 MHz): Being studied for additional ionospheric correction.
- L5 (1176.45 MHz): Proposed for use as a civilian safety-of-life (SoL) signal (see GPS modernization). This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that would provide this signal is set to be launched in 2008.

The coordinates are calculated according to the World Geodetic System WGS84 coordinate system. To calculate its position, a receiver needs to know the precise time. The satellites are equipped with extremely accurate atomic clocks, and the receiver uses an internal crystal oscillator-based clock that is continually updated using the signals from the satellites. The receiver identifies each satellite's signal by its distinct C/A code pattern, then measures the time delay for each satellite. To do this, the receiver produces an identical C/A sequence using the same seed number as the satellite. By lining up the two sequences, the receiver can measure the delay and calculate the distance to the satellite, called the pseudorange. The orbital position data from the Navigation Message is then used to calculate the satellite's precise position. Knowing the position and the distance of a satellite indicates that the receiver is located somewhere on the surface of an imaginary sphere centered on that satellite, whose radius is the distance to it. When four satellites are measured simultaneously, the intersection of the four imaginary spheres reveals the location of the receiver. Earth-based users can substitute the sphere of the planet for one satellite by using their altitude. Often, these spheres will overlap slightly instead of meeting at one point, so the receiver will yield a mathematically most-probable position (and often indicate the uncertainty).

Calculating a position with the P(Y) signal is generally similar in concept, assuming one can decrypt it. The encryption is essentially a safety mechanism; if a signal can be successfully decrypted, it is reasonable to assume it is a real signal being sent by a GPS satellite. In comparison, civil receivers are highly vulnerable to spoofing, since correctly formatted C/A signals can be generated using readily available signal generators. RAIM features will not help, since RAIM only checks the signals from a navigational perspective.

Accuracy and error sources

The position calculated by a GPS receiver requires the current time, the position of the satellite, and the measured delay of the received signal. The position accuracy is primarily dependent on the satellite position and signal delay. To measure the delay, the receiver compares the bit sequence received from the satellite with an internally generated version. By comparing the rising and trailing edges of the bit transitions, modern electronics can measure signal offset to within about 1 percent of a bit time, or approximately 10 nanoseconds for the C/A code. Since GPS signals propagate nearly at the speed of light, this represents an error of about 3 meters.
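To see where the "about 3 meters" comes from (a back-of-the-envelope check, not from the original article): one C/A bit, or chip, lasts 1/(1.023 MHz), and 1 percent of that interval times the speed of light gives the range error.

$$T_{\mathrm{chip}} = \frac{1}{1.023\times10^{6}\ \mathrm{Hz}} \approx 978\ \mathrm{ns}, \qquad 0.01 \times 978\ \mathrm{ns} \times 3\times10^{8}\ \mathrm{m/s} \approx 2.9\ \mathrm{m}.$$

The P(Y) code is clocked ten times faster (10.23 MHz), so the same 1 percent corresponds to roughly 0.3 m, matching the 30 cm figure given below.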
This is the minimum error possible using the GPS C/A signal. Position accuracy can be improved by using the higher-speed P(Y) signal. Assuming the same 1 percent accuracy, the faster P(Y) signal results in an accuracy of about 30 centimeters. Electronics errors are one of several accuracy-degrading effects outlined in the table below. When taken together, autonomous civilian GPS horizontal position fixes are typically accurate to about 15 meters (50 ft). These effects also reduce the more precise P(Y) code's accuracy.

Source of error          Typical magnitude
Ionospheric effects      ± 5 meters
Ephemeris errors         ± 2.5 meters
Satellite clock errors   ± 2 meters
Multipath distortion     ± 1 meter
Tropospheric effects     ± 0.5 meter
Numerical errors         ± 1 meter or less

Changing atmospheric conditions change the speed of the GPS signals as they pass through the Earth's atmosphere and ionosphere. Correcting these errors is a significant challenge to improving GPS position accuracy. These effects are minimized when the satellite is directly overhead, and become greater for satellites nearer the horizon, since the signal is affected for a longer time. Once the receiver's approximate location is known, a mathematical model can be used to estimate and compensate for these errors. Because ionospheric delay affects the speed of radio waves differently based on frequency, a characteristic known as dispersion (a phenomenon in which a signal is separated into its constituent frequencies), both frequency bands can be used to help reduce this error. Some military and expensive survey-grade civilian receivers compare the different delays in the L1 and L2 frequencies to measure atmospheric dispersion and apply a more precise correction. This can be done in civilian receivers without decrypting the P(Y) signal carried on L2, by tracking the carrier wave instead of the modulated code. To facilitate this on lower-cost receivers, a new civilian code signal on L2, called L2C, was added to the Block IIR-M satellites, first launched in 2005. It allows a direct comparison of the L1 and L2 signals using the coded signal instead of the carrier wave. The effects of the ionosphere are generally slow-moving and can be averaged over time. The effects for any particular geographical area can be easily calculated by comparing the GPS-measured position to a known surveyed location. This correction is also valid for other receivers in the same general location. Several systems send this information over radio or other links to allow L1-only receivers to make ionospheric corrections. The ionospheric data are transmitted via satellite in Satellite Based Augmentation Systems such as the Wide Area Augmentation System (WAAS), which transmits it on the GPS frequency using a special pseudo-random number (PRN), so only one antenna and receiver are required.
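Because the ionospheric group delay scales approximately as the inverse square of the carrier frequency, a dual-frequency receiver can combine its two pseudorange measurements so that the first-order ionospheric term cancels. The standard "ionosphere-free" combination, supplied here for illustration (the article itself does not give the formula), is:

$$\rho_{\mathrm{IF}} = \frac{f_{L1}^{2}\,\rho_{L1} - f_{L2}^{2}\,\rho_{L2}}{f_{L1}^{2} - f_{L2}^{2}}, \qquad f_{L1} = 1575.42\ \mathrm{MHz}, \quad f_{L2} = 1227.60\ \mathrm{MHz}.$$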
Humidity also causes a variable delay, resulting in errors similar to ionospheric delay, but occurring in the troposphere. This effect is much more localized and changes more quickly than the ionospheric effects, making precise compensation for humidity more difficult. Altitude also causes a variable delay, as the signal passes through less atmosphere at higher elevations. Since the GPS receiver measures altitude directly, this is a much simpler correction to apply.

GPS signals can also be affected by multipath issues, where the radio signals reflect off surrounding terrain such as buildings, canyon walls, and hard ground. These delayed signals can cause inaccuracy. A variety of techniques, most notably narrow correlator spacing, have been developed to mitigate multipath errors. For long-delay multipath, the receiver itself can recognize the wayward signal and discard it. To address shorter-delay multipath from the signal reflecting off the ground, specialized antennas may be used. Short-delay reflections are harder to filter out, since they are only slightly delayed, causing effects almost indistinguishable from routine fluctuations in atmospheric delay. Multipath effects are much less severe in moving vehicles. When the GPS antenna is moving, the false solutions using reflected signals quickly fail to converge, and only the direct signals result in stable solutions.

Ephemeris and clock errors

The navigation message from a satellite is sent out only every 12.5 minutes. In reality, the data contained in these messages tend to be "out of date" by an even greater amount. Consider the case when a GPS satellite is boosted back into a proper orbit: for some time following the maneuver, the receiver's calculation of the satellite's position will be incorrect until it receives another ephemeris update. The onboard clocks are extremely accurate, but they do suffer from some clock drift. This problem tends to be very small, but may add up to 2 meters (~6 ft) of inaccuracy. This class of error is more "stable" than ionospheric problems and tends to change over days or weeks rather than minutes. This makes correction fairly simple, by sending out a more accurate almanac on a separate channel.
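A satellite clock offset translates directly into a range error at the speed of light, so the 2-meter figure above corresponds to only a few nanoseconds of drift (my arithmetic):

$$\Delta t = \frac{\Delta\rho}{c} = \frac{2\ \mathrm{m}}{3\times10^{8}\ \mathrm{m/s}} \approx 6.7\ \mathrm{ns}.$$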
Selective Availability

The GPS includes a feature called Selective Availability (SA) that introduced intentional errors of up to about 100 meters (300 ft) into the publicly available navigation signals, making it difficult to use for guiding long-range missiles to precise targets. Additional accuracy was available in the signal, but in an encrypted form that was only available to the United States military, its allies, and a few others, mostly government users. SA typically added signal errors of up to about 10 meters (30 ft) horizontally and 30 meters (100 ft) vertically. The inaccuracy of the civilian signal was deliberately encoded so as not to change very quickly; for instance, the entire eastern U.S. area might read 30 m off, but 30 m off everywhere and in the same direction. To improve the usefulness of GPS for civilian navigation, Differential GPS was used by many civilian GPS receivers to greatly improve accuracy.

During the Gulf War, the shortage of military GPS units and the wide availability of civilian ones among personnel resulted in a decision to disable Selective Availability. This was ironic, as SA had been introduced specifically for these situations, allowing friendly troops to use the signal for accurate navigation while at the same time denying it to the enemy. But since SA was also denying the same accuracy to thousands of friendly troops, turning it off or setting it to an error of 0 meters (effectively the same thing) presented a clear benefit. In the 1990s, the FAA started pressuring the military to turn off SA permanently. This would save the FAA millions of dollars every year in the maintenance of its own radio navigation systems. The military resisted for most of the 1990s, but SA was eventually "discontinued": the amount of error added was "set to zero" at midnight on May 1, 2000, following an announcement by U.S. President Bill Clinton, allowing users access to an undegraded L1 signal. Per the directive, the induced error of SA was changed to add no error to the public signals (C/A code). Selective Availability is still a system capability of GPS, and error could, in theory, be reintroduced at any time. In practice, in view of the hazards and costs this would induce for U.S. and foreign shipping, it is unlikely to be reintroduced, and various government agencies, including the FAA, have stated that it is not intended to be reintroduced. The US military has developed the ability to locally deny GPS (and other navigation services) to hostile forces in a specific area of crisis without affecting the rest of the world or its own military systems.

Relativity

According to the theory of relativity, due to their constant movement and height relative to the Earth-centered inertial reference frame, the clocks on the satellites are affected by their speed (special relativity) as well as their gravitational potential (general relativity). For the GPS satellites, general relativity predicts that the atomic clocks at GPS orbital altitudes will tick more rapidly, by about 45,900 nanoseconds (ns) per day, because they are in a weaker gravitational field than atomic clocks on Earth's surface. Special relativity predicts that atomic clocks moving at GPS orbital speeds will tick more slowly, by about 7,200 ns per day, than stationary ground clocks. When combined, the discrepancy is about 38 microseconds per day, a difference of 4.465 parts in 10^10. To account for this, the frequency standard on board each satellite is given a rate offset prior to launch, making it run slightly slower than the desired frequency on Earth; specifically, at 10.22999999543 MHz instead of 10.23 MHz.

Another relativistic effect to be compensated for in GPS observation processing is the Sagnac effect. The GPS time scale is defined in an inertial system, but observations are processed in an Earth-centered, Earth-fixed (co-rotating) system, a system in which simultaneity is not uniquely defined. The Lorentz transformation between the two systems modifies the signal run time, a correction having opposite algebraic signs for satellites in the Eastern and Western celestial hemispheres. Ignoring this effect will produce an East-West error on the order of hundreds of nanoseconds, or tens of meters in position. The atomic clocks on board the GPS satellites are precisely tuned, making the system a practical engineering application of the scientific theory of relativity in a real-world system.
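The factory rate offset follows from these figures (a consistency check using the article's own numbers): the net drift of about 38.6 μs per day, divided by the 86,400 seconds in a day, gives the quoted fractional rate, and scaling the nominal frequency by it reproduces the offset value.

$$\frac{38.6\times10^{-6}\ \mathrm{s}}{86{,}400\ \mathrm{s}} \approx 4.465\times10^{-10}, \qquad 10.23\ \mathrm{MHz}\times\left(1 - 4.465\times10^{-10}\right) \approx 10.22999999543\ \mathrm{MHz}.$$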
GPS interference and jamming

Since GPS signals at terrestrial receivers tend to be relatively weak, it is easy for other sources of electromagnetic radiation to overpower the receiver, making acquiring and tracking the satellite signals difficult or impossible. Solar flares are one such naturally occurring emission with the potential to degrade GPS reception, and their impact can affect reception over the half of the Earth facing the sun. GPS signals can also be interfered with by naturally occurring geomagnetic storms, predominantly found near the poles of the Earth's magnetic field. Human-made interference can also disrupt, or jam, GPS signals. In one well-documented case, an entire harbor was unable to receive GPS signals due to unintentional jamming caused by a malfunctioning TV antenna preamplifier. Intentional jamming is also possible. Generally, stronger signals can interfere with GPS receivers when they are within radio range, or line of sight. In 2002, a detailed description of how to build a short-range GPS L1 C/A jammer was published in the online magazine Phrack. The U.S. government believes that such jammers were used occasionally during the Afghanistan War, and the U.S. military claimed to destroy a GPS jammer with a GPS-guided bomb during the Iraq War. While a jammer is relatively easy to detect and locate, making it an attractive target for anti-radiation missiles, there is the possibility that such jammers would be placed near non-combatant infrastructure to attract precision-guided munitions toward it, a tactic akin to using a human shield.

Due to the potential for both natural and man-made noise, numerous techniques continue to be developed to deal with the interference. The first is to not rely on GPS as a sole source. According to John Ruley, "IFR pilots should have a fallback plan in case of a GPS malfunction." Receiver Autonomous Integrity Monitoring (RAIM) is a feature now included in some receivers, which is designed to provide a warning to the user if jamming or another problem is detected. The military has also deployed the Selective Availability / Anti-Spoofing Module (SAASM) in its Defense Advanced GPS Receiver which, as shown in demonstration videos, is able to detect jamming and maintain a lock on the GPS signals.

Techniques to improve accuracy

Augmentation methods of improving accuracy rely on external information being integrated into the calculation process. There are many such systems in place, and they are generally named or described based on how the GPS sensor receives the information. Some systems transmit additional information about sources of error (such as clock drift, ephemeris, or ionospheric delay), others provide direct measurements of how much the signal was off in the past, while a third group provides additional navigational or vehicle information to be integrated in the calculation process. Examples of augmentation systems include WAAS, Differential GPS, and Inertial Navigation Systems.

The accuracy of a calculation can also be improved through precise monitoring and measuring of the existing GPS signals in additional or alternate ways. The first is called Dual Frequency monitoring and refers to systems that can compare two or more signals, such as the L1 frequency to the L2 frequency. Since these are two different frequencies, they are affected in different, yet predictable, ways by the atmosphere and objects around the receiver. After monitoring these signals, it is possible to calculate how much error is being introduced and then nullify that error. Receivers that have the correct decryption key can relatively easily decode the P(Y)-code transmitted on both L1 and L2 to measure the error. Receivers that do not possess the key can still use a codeless process to compare the encrypted information on L1 and L2 and gain much of the same error information. However, this technique is currently limited to specialized surveying equipment. In the future, additional civilian codes are expected to be transmitted on the L2 and L5 frequencies. When these become operational, non-encrypted users will be able to make the same comparison and directly measure some errors.

A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). The error this corrects arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite-receiver sequence matching) operation is imperfect.
The CPGPS approach utilizes the L1 carrier wave, which has a period about 1,000 times shorter than the C/A bit period, to act as an additional clock signal and resolve the uncertainty. The phase difference error in normal GPS amounts to between 2 and 3 meters (6 to 10 feet) of ambiguity. CPGPS working to within 1 percent of perfect transition reduces this error to 3 cm (1 inch) of ambiguity. By eliminating this source of error, CPGPS coupled with DGPS normally realizes between 20 and 30 cm (8 to 12 in) of absolute accuracy.

Relative Kinematic Positioning (RKP) is another approach for a precise GPS-based positioning system. In this approach, the range signal can be resolved to an accuracy of less than 10 cm (4 in). This is done by resolving the number of cycles in which the signal is transmitted and received by the receiver. This can be accomplished by using a combination of differential GPS (DGPS) correction data, transmitting GPS signal phase information, and ambiguity resolution techniques via statistical tests—possibly with processing in real-time (real-time kinematic positioning, RTK).

GPS time and date

While most clocks are synchronized to Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset from International Atomic Time (TAI). The GPS navigation message includes the difference between GPS time and UTC, which as of 2006 is 14 seconds. Receivers subtract this offset from GPS time to calculate UTC and "local" time. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits), which, at the current rate of change of the Earth's rotation, is sufficient to last until the year 2330.

As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a day-of-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern, the modernized GPS navigation messages use a 13-bit field, which repeats every 8,192 weeks (157 years), and will not return to zero until near the year 2137.
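The rollover periods quoted above follow directly from the field widths (my arithmetic):

$$2^{10} = 1{,}024\ \mathrm{weeks} = 7{,}168\ \mathrm{days} \approx 19.6\ \mathrm{years}, \qquad 2^{13} = 8{,}192\ \mathrm{weeks} \approx 157\ \mathrm{years},$$

and the 3,584-day window is half of the 7,168-day cycle, centering the ambiguity on the supplied approximate date.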
GPS modernization

Having reached the program's requirements for Full Operational Capability (FOC) on July 17, 1995, the GPS completed its original design goals. However, additional advances in technology and new demands on the existing system led to the effort to "modernize" the GPS system. Announcements from the Vice President and the White House in 1998 heralded the beginning of these changes, and in 2000 the U.S. Congress reaffirmed the effort, referring to it as GPS III. The project aims to improve the accuracy and availability for all users and involves new ground stations, new satellites, and four additional navigation signals. New civilian signals are called L2C, L5, and L1C; the new military code is called M-Code. Initial Operational Capability (IOC) of the L2C code is expected in 2008.

GPS allows accurate targeting of various military weapons, including cruise missiles and precision-guided munitions. To help prevent GPS guidance from being used in enemy or improvised weaponry, the US Government controls the export of civilian receivers. A US-based manufacturer cannot generally export a receiver unless the receiver contains limits restricting it from functioning when it is simultaneously (1) at an altitude above 18 kilometers (60,000 ft) and (2) traveling at over 515 m/s (1,000 knots). The GPS satellites also carry nuclear detonation detectors, which form a major portion of the United States Nuclear Detonation Detection System.

- Automobiles can be equipped with GPS receivers at the factory or as after-market equipment. Units often display moving maps and information about location, speed, direction, and nearby streets and landmarks.
- Aircraft navigation systems usually display a "moving map" and are often connected to the autopilot for en-route navigation. Cockpit-mounted GPS receivers and glass cockpits are appearing in general aviation aircraft of all sizes, using technologies such as WAAS or the Local Area Augmentation System (LAAS) to increase accuracy. Many of these systems may be certified for instrument flight rules navigation, and some can also be used for final approach and landing operations. Glider pilots use GNSS flight recorders to log GPS data verifying their arrival at turn points in gliding competitions. Flight computers installed in many gliders also use GPS to compute wind speed aloft, and glide paths to waypoints such as alternate airports or mountain passes, to aid en-route decision making for cross-country soaring.
- Boats and ships can use GPS to navigate all of the world's lakes, seas, and oceans. Maritime GPS units include functions useful on water, such as "man overboard" (MOB) functions that allow instantly marking the location where a person has fallen overboard, which simplifies rescue efforts. GPS may be connected to the ship's self-steering gear and chartplotters using the NMEA 0183 interface. GPS can also improve the security of shipping traffic by enabling AIS.
- Heavy equipment can use GPS in construction, mining, and precision agriculture. The blades and buckets of construction equipment are controlled automatically in GPS-based machine guidance systems. Agricultural equipment may use GPS to steer automatically, or as a visual aid displayed on a screen for the driver. This is very useful for controlled traffic and row crop operations and when spraying. Harvesters with yield monitors can also use GPS to create a yield map of the paddock being harvested.
- Bicycles often use GPS in racing and touring. GPS navigation allows cyclists to plot their course in advance and follow this course, which may include quieter, narrower streets, without having to stop frequently to refer to separate maps. Some GPS receivers are specifically adapted for cycling with special mounts and housings.
- Hikers, climbers, and even ordinary pedestrians in urban or rural environments can use GPS to determine their position, with or without reference to separate maps.
In isolated areas, the ability of GPS to provide a precise position can greatly enhance the chances of rescue when climbers or hikers are disabled or lost (if they have a means of communication with rescue workers).
- GPS equipment for the visually impaired is available. For more detailed information, see the article GPS for the visually impaired.
- Spacecraft are now beginning to use GPS as a navigational tool. The addition of a GPS receiver to a spacecraft allows precise orbit determination without ground tracking. This, in turn, enables autonomous spacecraft navigation, formation flying, and autonomous rendezvous. The use of GPS in MEO, GEO, HEO, and highly elliptical orbits is feasible only if the receiver can acquire and track the much weaker (15–20 dB) GPS side-lobe signals. This design constraint, and the radiation environment found in space, prevents the use of COTS receivers.

Surveying and mapping

- Surveying—Survey-grade GPS receivers can be used to position survey markers, buildings, and road construction. These units use the signal from both the L1 and L2 GPS frequencies. Even though the L2 code data are encrypted, the signal's carrier wave enables correction of some ionospheric errors. These dual-frequency GPS receivers typically cost US$10,000 or more, but can have positioning errors on the order of one centimeter or less when used in carrier phase differential GPS mode.
- Mapping and geographic information systems (GIS)—Most mapping-grade GPS receivers use the carrier wave data from only the L1 frequency, but have a precise crystal oscillator which reduces errors related to receiver clock jitter. This allows positioning errors on the order of one meter or less in real-time, with a differential GPS signal received using a separate radio receiver. By storing the carrier phase measurements and differentially post-processing the data, positioning errors on the order of 10 cm are possible with these receivers.
- Geophysics and geology—High-precision measurements of crustal strain can be made with differential GPS by finding the relative displacement between GPS sensors. Multiple stations situated around an actively deforming area (such as a volcano or fault zone) can be used to find strain and ground movement. These measurements can then be used to interpret the cause of the deformation, such as a dike or sill beneath the surface of an active volcano.
- Precise time reference—Many systems that must be accurately synchronized use GPS as a source of accurate time. GPS can be used as a reference clock for time code generators or NTP clocks. Sensors (for seismology or other monitoring applications) can use GPS as a precise time source, so events may be timed accurately. TDMA communications networks often rely on this precise timing to synchronize RF generating equipment, network equipment, and multiplexers.
- Mobile satellite communications—Satellite communications systems use a directional antenna (usually a "dish") pointed at a satellite. The antenna on a moving ship or train, for example, must be pointed based on its current location. Modern antenna controllers usually incorporate a GPS receiver to provide this information.
- Emergency and location-based services—GPS functionality can be used by emergency services to locate cell phones. The ability to locate a mobile phone is required in the United States by E911 emergency services legislation. However, as of September 2006, such a system is not in place in all parts of the country.
GPS is less dependent on the telecommunications network topology than radiolocation for compatible phones. Assisted GPS reduces the power requirements of the mobile phone and increases the accuracy of the location. A phone's geographic location may also be used to provide location-based services, including advertising or other location-specific information.
- Location-based games—The availability of hand-held GPS receivers has led to games such as geocaching, which involves using a hand-held GPS unit to travel to a specific longitude and latitude to search for objects hidden by other geocachers. This popular activity often includes walking or hiking to natural locations. Geodashing is an outdoor sport using waypoints.
- Aircraft passengers—Most airlines allow passenger use of GPS units on their flights, except during landing and take-off, when other electronic devices are also restricted. Even though consumer GPS receivers have a minimal risk of interference, a few airlines disallow use of hand-held receivers during flight. Other airlines integrate aircraft tracking into the seat-back television entertainment system, available to all passengers even during takeoff and landing.
- Heading information—The GPS system can be used to determine heading information, even though it was not designed for this purpose. A "GPS compass" uses a pair of antennas separated by about 50 cm to detect the phase difference in the carrier signal from a particular GPS satellite. Given the positions of the satellite, the position of the antenna, and the phase difference, the orientation of the two antennas can be computed. More expensive GPS compass systems use three antennas in a triangle to get three separate readings with respect to each satellite. A GPS compass is not subject to magnetic declination as a magnetic compass is, and doesn't need to be reset periodically like a gyrocompass. It is, however, subject to multipath effects.
- GPS tracking systems use GPS to determine the location of a vehicle, person, or pet and to record the position at regular intervals in order to create a log of movements. The data can be stored inside the unit or sent to a remote computer by radio or cellular modem. Some systems allow the location to be viewed in real-time on the Internet with a web browser.
- Weather prediction improvements—Measurement of atmospheric bending of GPS satellite signals by specialized GPS receivers in orbital satellites can be used to determine atmospheric conditions such as air density, temperature, moisture, and electron density. Such information from a set of six micro-satellites launched in April 2006, called the Constellation of Observing System for Meteorology, Ionosphere and Climate (COSMIC), has been proven to improve the accuracy of weather prediction models.
- Photograph annotation—Combining GPS position data with photographs taken with a (typically digital) camera allows one to look up the locations where the photographs were taken in a gazetteer, and automatically annotate the photographs with the name of the location they depict. The GPS device can be integrated into the camera, or the timestamp of a picture's metadata can be combined with a GPS track log.
- Skydiving—Most commercial drop zones use a GPS to aid the pilot to "spot" the plane to the correct position relative to the dropzone that will allow all skydivers on the load to be able to fly their canopies back to the landing area. The "spot" takes into account the number of groups exiting the plane and the upper winds.
In areas where skydiving through cloud is permitted, the GPS can be the sole visual indicator when spotting in overcast conditions; this is referred to as a "GPS spot."
- Marketing—Some market research companies have combined GIS systems and survey-based research to help companies decide where to open new branches, and to target their advertising according to the usage patterns of roads and the socio-demographic attributes of residential zones.

History

The design of GPS is based partly on similar ground-based radio navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s and used during World War II. Additional inspiration for the GPS system came when the Soviet Union launched the first Sputnik in 1957. A team of U.S. scientists led by Dr. Richard B. Kershner was monitoring Sputnik's radio transmissions. They discovered that, because of the Doppler effect, the frequency of the signal being transmitted by Sputnik was higher as the satellite approached, and lower as it continued away from them. They realized that since they knew their exact location on the globe, they could pinpoint where the satellite was along its orbit by measuring the Doppler distortion.

The first satellite navigation system, Transit, used by the United States Navy, was first successfully tested in 1960. Using a constellation of five satellites, it could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the ability to place accurate clocks in space, a technology the GPS system relies upon. In the 1970s, the ground-based Omega Navigation System, based on signal phase comparison, became the first worldwide radio navigation system.

The first experimental Block-I GPS satellite was launched in February 1978. The GPS satellites were initially manufactured by Rockwell International and are now manufactured by Lockheed Martin.
- In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007 in restricted Soviet airspace, killing all 269 people on board, U.S. President Ronald Reagan announced that the GPS system would be made available for civilian uses once it was completed.
- By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
- On February 14, 1989, the first modern Block-II satellite was launched.
- In 1992, the 2nd Space Wing, which originally managed the system, was de-activated and replaced by the 50th Space Wing.
- By December 1993, the GPS system achieved initial operational capability.
- By January 17, 1994, a complete constellation of 24 satellites was in orbit.
- In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS to be a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
- In 1998, U.S. Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety.
- On May 2, 2000, "Selective Availability" was discontinued, allowing users outside the US military to receive a full-quality signal.
- In 2004, U.S. President George W. Bush updated the national policy, replacing the executive board with the National Space-Based Positioning, Navigation, and Timing Executive Committee.
- The most recent launch was on November 17, 2006. The oldest GPS satellite still in operation was launched in August 1991.
Two GPS developers have received the National Academy of Engineering's Charles Stark Draper Prize for 2003:
- Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at the Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
- Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force.

On February 10, 1993, the National Aeronautic Association selected the Global Positioning System Team as winner of the 1992 Robert J. Collier Trophy, the most prestigious aviation award in the United States. This team consists of researchers from the Naval Research Laboratory, the U.S. Air Force, The Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation accompanying the presentation of the trophy honors the GPS Team "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago."

Other systems

- GLONASS (GLObal NAvigation Satellite System) is operated by Russia, although with only twelve active satellites as of 2004. In Russia, Northern Europe, and Canada, at least four GLONASS satellites are visible 45 percent of the time. There are plans to restore GLONASS to full operation by 2008 with assistance from India.
- Galileo is being developed by the European Union, joined by China, Israel, India, Morocco, Saudi Arabia, South Korea, and Ukraine, and is planned to be operational by 2010.
- Beidou is being developed independently by China.
So far in this Clojure tutorial/NLP tutorial, we've mainly been looking at Clojure's functions, but the last posting actually included a good chunk of code for the Porter Stemmer. Today, we'll review functions. Some of this we've seen before, but some of it will be new.

As with any functional language, functions are the building blocks of Clojure. I'm going to briefly summarize what we've learned about functions so far, and then we'll explore one important topic—recursion—in more detail.

Generally, functions are defined using defn, followed by the name of the function, a vector listing the parameters it takes, and the expressions in the body of the function. You can also include a documentation string between the function name and the parameter vector:

user=> (defn greetings
         "Say hello."
         [name]
         (str "Hello, " name))
#'user/greetings
user=> (greetings 'Eric)
"Hello, Eric"

(The str function here just creates strings out of all its arguments and concatenates them together.)

Higher-order functions are functions that take other functions as values. These may use the other function in a calculation, or they may create a new function from the existing one. For example, map calls a function on each element in a sequence:

user=> (map greetings '(Eric Elsa))
("Hello, Eric" "Hello, Elsa")

complement, however, creates a new function that returns the boolean opposite of the function it is given:

user=> (def not-zero? (complement zero?))
#'user/not-zero?
user=> (not-zero? 0)
false
user=> (not-zero? 1)
true

Sometimes, particularly when you're using a higher-order function, you may want to create a short function just for that one place, and giving it a name would clutter up your program. Or you may want to define a function inside another function, in a let expression, to use only within that function. For either of these, use a function literal. A function literal looks like a regular function definition, except that it uses fn instead of defn; the name is optional; and generally it cannot have a documentation string.

For example, suppose greetings, which we defined above, doesn't exist, and we want to greet a list of people. We could do this by creating a function literal and passing it to map:

user=> (map (fn [n] (str "Hello, " n)) '(Eric Elsa))
("Hello, Eric" "Hello, Elsa")

Or, we could temporarily define greet in a let, and pass that to map:

user=> (let [greet (fn [n] (str "Hello, " n))]
         (map greet '(Eric Elsa)))
("Hello, Eric" "Hello, Elsa")

I've already mentioned that Clojure allows you to provide different versions of a function for different argument lists. Do this by grouping each parameter vector and its expressions in its own list. The documentation string, if there is one, comes before all of the groups:

user=> (defn count-parameters
         "This returns the number of parameters passed to the function."
         ([] 0)
         ([a] 1)
         ([a b] 2)
         ([a b c] 3)
         ([a b c d] 4)
         ([a b c d e] 5))
#'user/count-parameters
user=> (count-parameters 'p0 'p1 'p2)
3
user=> (count-parameters)
0

This also works in function literals. The example below creates a shortened version named cp and calls it twice. The two calls are collected in a vector because let only returns one value, and we want to see the value of both calls to cp:

user=> (let [cp (fn ([] 0)
                    ([a] 1)
                    ([a b] 2)
                    ([a b c] 3))]
         [(cp 'p0 'p1) (cp)])
[2 0]
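One more note on function literals before we move on: Clojure also provides a reader-macro shorthand for them, #(...), with % standing for the argument. This series hasn't introduced it yet, so treat this as a preview of standard Clojure rather than part of the original lesson:

user=> (map #(str "Hello, " %) '(Eric Elsa))
("Hello, Eric" "Hello, Elsa")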
Many problems can be broken into smaller versions of the same problem. For example, you can test whether a list contains a particular item by looking at the first element of the list. At each place in the list, ask yourself: is the list empty? If it is, the item is not in the list. However, if the first element is what you're looking for, good; if it's not, strip the first element off the list and start over again. This way of attacking problems—by calling the solution within itself—is called recursion. Recursion is fundamental to functional programming, so let's look at it in more detail.

The main pitfall in creating a recursive function is making sure that it will end eventually. To do this, you need to make sure that you have an end condition. In the list-membership problem I described above, the end conditions are the empty list and the first element being the item you are looking for. Let's look at what this would look like in Clojure.

user=> (defn member? [sequence item]
         (if (not (seq sequence))
           nil
           (if (= (peek sequence) item)
             sequence
             (member? (pop sequence) item))))
#'user/member?

In this, the first if tests whether the sequence is empty (that is, whether seq cannot create a sequence out of it); if it is, this returns nil, which acts as false. The second if tests whether the first element in the sequence is what we're looking for; if it is, this returns the sequence at that point. This is a common idiom in Clojure. Since the sequence has at least one item, it will evaluate as true, fulfilling its contract as a test, but it returns more information than that. This function is useful for more than just determining whether an item is in a sequence: it also finds the item and returns it. Finally, if the first element is not the item being sought, this calls member? again with everything except the first item of the list. This is the recursive part of the function. Let's test it.

user=> (member? '(1 2 3 4) 2)
(2 3 4)
user=> (member? '(1 2 3 4) 5)
nil

Sometimes it's helpful to map this out. In the diagram below, a call within another call is indented under it. A function call returning is indicated by a "=>" and a value at the same level of indentation as the original call. Here's the sequence of calls in the first example above.

(member? '(1 2 3 4) 2)
    (member? '(2 3 4) 2)
    => '(2 3 4)
=> '(2 3 4)

The second example call to member? would graph out like this:

(member? '(1 2 3 4) 5)
    (member? '(2 3 4) 5)
        (member? '(3 4) 5)
            (member? '(4) 5)
                (member? '() 5)
                => nil
            => nil
        => nil
    => nil
=> nil

In this case, we're just returning the results of the membership test back unchanged, but we could modify the results as they're being returned. For example, if we wanted to count how many times an item occurs in a sequence, we could do this:

user=> (defn count-item [sequence item]
         (if (not (seq sequence))
           0
           (if (= (peek sequence) item)
             (inc (count-item (pop sequence) item))
             (count-item (pop sequence) item))))
#'user/count-item

In many ways, this is very similar to member?. It first tests whether the sequence is empty, and if it is, it returns zero. If the first element is the item, this calls itself (it recurses) and increments the result by one, to count the current item. Otherwise, it recurses, but it does not increment the result. Let's see this in action.

user=> (count-item '(1 2 3 2 1 2 3 4 3 2 1) 2)
4
user=> (count-item '(1 2 3 2 1 2 3 4 3 2 1) 3)
3
user=> (count-item '(1 2 3 2 1 2 3 4 3 2 1) 5)
0

Here, the first example (in shortened form) graphs out like so:

(count-item '(1 2 3 2 1) 2)
    (count-item '(2 3 2 1) 2)
        (count-item '(3 2 1) 2)
            (count-item '(2 1) 2)
                (count-item '(1) 2)
                    (count-item '() 2)
                    => 0
                => 0
            => 1
        => 1
    => 2
=> 2

You see that the result gets incremented as the computer leaves each function call where 2 is the first element in the input list.
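(An aside, and not from the original posting: Clojure's standard library can already express this membership test. A set acts as a function that looks up its argument, so some does the job in one line, returning the item if it's found and nil otherwise:

user=> (some #{2} '(1 2 3 4))
2
user=> (some #{5} '(1 2 3 4))
nil

Writing member? by hand is still worthwhile here, of course, because the point is to learn recursion.)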
While these two examples of recursion are superficially similar, on a deeper level they are very different. In member?, once you've computed the result, you can return it immediately back to the function that originally called member?. If Clojure had some way to jump completely out of a function from any level, you could do exactly that. count-item, on the other hand, is not finished with its calculation when it recurses. It has to wait for itself to return and possibly add one to the result. (Sentences like that remind me why recursion can be confusing. Don't worry. Eventually your brain will get used to being twisted into a pretzel.)

There's a term to describe functions like member? that are finished with their calculations when they recurse: they are called tail-recursive, and the recursive call is a tail call. This is a very good quality. The computer can optimize these calls to make them very fast and very efficient. But the Java Virtual Machine doesn't recognize tail recursion on its own, so Clojure needs a little help to make these optimizations. You signal a tail-recursive function call by using the recur built-in instead of the function name when you recurse. Thus, we could re-write member? to be tail-recursive like this.

user=> (defn member? [sequence item] (if (not (seq sequence)) nil (if (= (peek sequence) item) sequence (recur (pop sequence) item))))
#'user/member?

Notice the recur in the last line? That's all that has changed. This should work exactly the same (and it does—try it), but for long lists, it should be much more efficient.

In fact, it's often worth putting in a little extra work to make non-tail-recursive functions tail-recursive. There's a straightforward transformation you can use to make almost any function tail-recursive. Just add an extra parameter and use it to accumulate the results before you make the recursive function call. The first time you call the function, you need to pass the base value into the function for that parameter. For example, count-item with tail recursion would look like this:

user=> (defn count-item [sequence item accum] (if (not (seq sequence)) accum (if (= (peek sequence) item) (recur (pop sequence) item (inc accum)) (recur (pop sequence) item accum))))
#'user/count-item

The difference here is the accum parameter. When item equals the first element of the sequence, accum is incremented before the recursive call is made, so when the function starts again on the shorter list, accum already holds the count so far. Finally, when the end of the list is reached, accum is returned. It contains the count accumulated as count-item walked down the list. Of course, now we have to call count-item differently also:

user=> (count-item '(1 2 3 2 3 4 3 2 1) 2 0)
3
user=> (count-item '(1 2 3 2 3 4 3 2 1) 1 0)
2
user=> (count-item '(1 2 3 2 3 4 3 2 1) 5 0)
0

Whenever we call count-item, we have to include a superfluous zero that we really don't care about. That seems messy and error-prone. How can we get rid of it? Essentially, we want to hide this version of count-item and replace it with a new version that handles the extra zero for us. Then, we call the new version and forget about this one. There are several ways to actually do this:

- Have the public function named count-item and have it call a private version, with a different name, that takes the accumulator.
- Use let to define the private function inside the public one.
- Use loop.

loop? Yes, this is new. loop is a cross between a function call and a let expression. It looks a lot like let because it allows you to define and initialize variables. But it also acts as a target for recur.
How would count-item look with loop?

user=> (defn count-item [sequence item] (loop [sq sequence, accum 0] (if (not (seq sq)) accum (if (= (peek sq) item) (recur (pop sq) (inc accum)) (recur (pop sq) accum)))))
#'user/count-item

Notice that the first line of the loop looks a lot like the first line of a let: both declare and initialize a series of variables. Just to keep things clear, I renamed sequence to sq within the loop. item isn't included in the list of variables that loop declares, since it doesn't change. Finally, at the end of the loop are two recur statements. When they are evaluated, they cause the program to jump back to the loop statement, but this time, instead of the original values used to initialize the variables, the values in the recur call are used. Now we can again call count-item without the extra parameter:

user=> (count-item '(1 2 3 2 3 4 3 2 1) 2)
3
user=> (count-item '(1 2 3 2 3 4 3 2 1) 1)
2
user=> (count-item '(1 2 3 2 3 4 3 2 1) 5)
0

With recursion, we can define another utility to use on the stemmer structures. This function will take a predicate and a stemmer. For each step, it will test the stemmer with the predicate, and if that returns true, it will pop one character from the word and recurse. If there are no letters left in the word or if the predicate evaluates to false, it will return the stemmer the way it is.

(defn pop-stemmer-on
  "This is an amalgam of a number of different functions: pop
  (it walks through the :word sequence using pop); drop-while
  (it drops items off while testing the sequence against a
  predicate); and maplist from Common Lisp (the predicate is
  tested against the entire current stemmer, not just the first
  element)."
  [predicate stemmer]
  (if (and (seq (:word stemmer)) (predicate stemmer))
    (recur predicate (pop-word stemmer))
    stemmer))

I'm ready for a break. Next time, we'll look at some more of the functions that Clojure provides, and we'll define a few predicates of our own.
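To see pop-stemmer-on in action, here's a quick sketch. The real pop-word was defined back in the Porter Stemmer posting; the version below is just a stand-in with, I believe, the same shape: a stemmer is a map whose :word holds the word's characters in a vector, and pop-word drops the last one.

(defn pop-word
  "Stand-in for the real pop-word from the stemmer posting: drop the
  last character from the stemmer's :word vector."
  [stemmer]
  (assoc stemmer :word (pop (:word stemmer))))

;; Strip trailing \s characters from the end of a word:
user=> (pop-stemmer-on (fn [st] (= (peek (:word st)) \s)) {:word (vec "glasses")})
{:word [\g \l \a \s \s \e]}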
http://writingcoding.blogspot.jp/2008/07/stemming-part-5-functions-and-recursion.html
Video: What to Know When Measuring Circles, with Zoya Popova

Measuring circles uses several geometric formulas. Learn which circle values are important to understand the formulas and solve basic circle geometry. See Transcript

Transcript: What to Know When Measuring Circles

Hi, Zoya Popova for About.com, and today I'm going to show you how to measure circles.

Important Circle Measurements: A circle is a geometrical shape consisting of points which lie at an equal distance from a given point, called the center of the circle. This distance between the center and any of the points on the circle is called a radius. The radius is very important in calculating all the other characteristics of a circle, such as its diameter, circumference, and area. The diameter is a line segment passing through the center of a circle and having its endpoints on the circle. The circumference of a circle is the length around it. If we start at point A, and go all the way around the clock until we arrive back at point A, this will be our circumference. Finally, every circle has an area. The area is the number of square units inside that circle.

Solving Circle Formulas: So how does the radius of a circle help us determine its diameter, circumference, and area? Well, the relationship between the radius and the diameter is pretty self-evident. Because the diameter passes through the center, it consists of two radii. So the diameter is two times the radius: d = 2r. So what about the relationship between the radius and the circumference? The larger the radius of a circle, the bigger the circle, and the greater its circumference. The ancient Greeks actually realized that the relationship between a circle's diameter and its circumference is a constant, which means that in any circle, big or small, C, the circumference, divided by d, the diameter, is always the same number. They called that constant number π (pi), but they didn't actually know what the value of π is. π = C/d.

Circumference of a Circle: Archimedes was the first one to find out that π equals approximately 3.14, and he was right. In our time, we know that π is an irrational number, which means it has infinitely many digits after its decimal point, and those digits never end and never repeat. For most calculations and measurements in geometry, we use π rounded to 3.14, or simply write it down as π. So now we know that the circumference of a circle equals its diameter times pi: C = πd.

Area of a Circle: It was also Archimedes who came up with the formula for the area of a circle, and it also involves π. Archimedes proved that the area of a circle equals pi times the radius squared: A = πr². And these are the basics for measuring circles. Thank you for watching, and for more information, please visit us at About.com.
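A quick worked example to tie the three formulas together (my numbers, not from the video): take a circle with radius r = 3 cm.

d = 2r = 6 cm
C = πd ≈ 3.14 × 6 = 18.84 cm
A = πr² ≈ 3.14 × 3² = 28.26 cm²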
http://video.about.com/math/What-to-Know-When-Measuring-Circles.htm
Science Fair Project Encyclopedia

The expression centrifugal force is used to express the fact that if an object is being swung around on a string, the object seems to be pulling on the string. In actual fact the person holding the string is doing the pulling. When an object is moving, then if no force is exerted the object will continue in a straight line. To make the object deviate from that straight line a force must be exerted. When a stone is being swung around on the end of a rope, the tension in the rope is transmitting the force directed to the center that is being exerted by the person swinging the rope. On the other end of the rope the stone is attached, and since the stone itself is not attached to anything it cannot resist the force, and the direction of motion is bent toward the center.

With a planet orbiting around a star the same dynamics are at play. Without any force the planet would move in a straight line. The sun's gravity is bending the motion away from that straight line. Because the planet has a lot of velocity perpendicular to the bending force, the distance to the sun doesn't decrease. In general, the force maintaining the circular motion of an object is called the centripetal force.

Centrifugal force in calculations

When performing calculations, for example on the stresses in the blades of a helicopter, it is convenient to use a coordinate system in which the blades are stationary (called a rotating reference frame). When a transform is made to that coordinate system, a force term appears which points radially outward from the axis of the blades. The force directed away from the center that corresponds to an amount of mass m at a distance r from the center is given by

F = m ω² r = m v² / r

(where m is mass, v is velocity, r is the radius of the circle, ω = v / r is the angular velocity, and the vector r points from the center to the tip). This force term is a "fictitious" force because it only appears due to a coordinate transformation. The true non-rotating reference frame can always be discerned by an observer as the one in which there is no centrifugal force.

The expression 'centrifugal force' is useful to express in a concise way the dynamics of a system. Consider, for example, a centrifugal governor, a mechanical feedback device for maintaining a particular revolution rate of a machine. The centrifugal governor of a steam engine usually relies on gravity to provide the centripetal force. When the revolution rate of the centrifugal governor increases, a stronger centripetal force would be needed to maintain the same diameter. Gravity provides only so much centripetal force, so the arms of the governor swing out to a wider angle, to a new equilibrium. Any explanation of dynamics that is given in terms of 'centrifugal force getting stronger (or weaker)' can be reformulated in terms of there being 'not enough (or too much) centripetal force for dynamic equilibrium'. But usually 'centrifugal force' is good shorthand for explaining what is going on. Another example of this is the expression tidal force. Tidal force is an apparent force rather than an independently existing force, but the expression is useful physics shorthand.

When an object is moved in circular motion, inertia manifests itself. Inertia is a form of resistance to change, in this case change of velocity. Inertia manifests itself in response to acceleration. Inertia does not prevent acceleration; rather, it depends on it. Inertia is very different from force, because forces cause change, and inertia opposes change.
When an electric car designed to recover energy while decelerating is switched to braking, the manifestation of inertia is what drives the generators, charging the electric car's battery system. In this example, inertia is exerting a force, but inertia cannot keep the car going: inertia only manifests itself when the velocity is changing.

When an object is moving in a straight line, then to change the direction of motion a force perpendicular to the direction of motion must be exerted. The resulting acceleration in that direction is the same as would have occurred when accelerating from a stationary start: motions that are perpendicular to each other are independent. In the case of circular motion: as the centripetal force is causing deviation from moving in a straight line, inertia is manifesting itself, but it does not prevent the centripetal force from maintaining the circular motion.

When examining the effects of rotation from the perspective of an observer rotating along with the system, the action of the centripetal force shows up as an apparent force term acting in a direction radially away from the center of rotation, and this is the manifestation of the centrifugal force. This term is often called a "fictitious force" because it is actually a manifestation of inertia which only appears as a radially outward force when observing the system from within a rotating reference frame, whereas from a non-rotating frame it is simply observed as the centripetal force producing a circular motion. The appearance of the centrifugal force is one argument used in general relativity for the absoluteness of rotating reference frames, in comparison to the relativity of linear reference frames.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
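To make the centripetal-force formula above concrete, here is a minimal sketch in Clojure (my own illustration, not part of the article) computing the force both ways, F = m v² / r and F = m ω² r, which should agree:

(defn centripetal-force
  "Magnitude of the center-directed force needed to keep mass m (kg)
  moving on a circle of radius r (m) at speed v (m/s): F = m v^2 / r."
  [m v r]
  (/ (* m v v) r))

(defn centripetal-force-angular
  "The same force written with the angular velocity w = v / r:
  F = m w^2 r."
  [m w r]
  (* m w w r))

;; A 2 kg stone on a 1.5 m rope, moving at 3 m/s:
user=> (centripetal-force 2.0 3.0 1.5)
12.0
user=> (centripetal-force-angular 2.0 (/ 3.0 1.5) 1.5)
12.0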
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Centrifugal_force
Language and Notation of the Circle

Formal definition of a circle. Tangent and secant lines. Diameters and radii. Major and minor arcs.

Let's start again with a point; let's call that point "Point A." And what I'm curious about is all of the points on my screen that are exactly 2 cm away from "Point A." So 2 cm on my screen is about that far. So clearly if I start at "A" and I go 2 cm in that direction, this point is 2 cm from "A." If I call that "Point B," then I could say line segment AB is 2 cm; the length is 2 cm. Remember, this would refer to the actual line segment. I could say this looks nice, but if I talk about its length, I would get rid of that line on top, and I would just say "AB is equal to 2." If I wanted to put units, I would say 2 cm. But I'm not curious just about B; I want to think about ALL the points, the set of ALL of the points that are exactly 2 cm away from "A." So I could go 2 cm in the other direction, maybe get to point "C" right over here. So "AC" is also going to be equal to 2 cm. But I could go 2 cm in any direction. And so if I find all of the points that are exactly 2 cm away from "A," I will get a very familiar looking shape, like this. (I'm drawing this free-hand.) So I would get a shape that looks like this. Actually, let me draw it in; I don't want to make you think that it's only the points where there's white, it's ALL of these points right over here. Let me clear out all of these and I will just draw a solid line. It could look something like that (my best attempt). And this set of all the points that are exactly 2 cm away from "A" is a circle, which I'm sure you are already familiar with. But that is the formal definition: the set of all points that are a fixed distance from "A." If I said, "The set of all points that are 3 cm from 'A,'" it might look something like this: that would give us another circle. (I think you get the general idea.)

Now, what I want to do in this video is introduce ourselves to some of the concepts and words that we use when dealing with circles. So let me get rid of the 3 cm circle. So first of all, let's think about this distance, or one of these line segments that join "A," which we would call the center of the circle. So we will call "A" the center of the circle, which makes sense just from the way we use the word 'center' in everyday life. What I want to do is think about what line segment "AB" is. "AB" connects the center and it connects a point on the circle itself. Remember, the circle itself is all the points that are an equal distance from the center. So any line segment, I should say, that connects the center to a point on the circle we would call a radius. And so the length of the radius is 2 cm. And you're probably already familiar with the word 'radius,' but I'm just being a little bit more formal.
And what's interesting about geometry, at least when you start learning it at the high-school level, is that it's probably the first class where you're introduced to a slightly more formal mathematics, where we're a little more careful about giving our definitions and then building on those definitions to come up with interesting results and proving to ourselves that we definitely know what we think we know. And so that's why we're being a little more careful with our language over here. So "AB" is a radius, line segment "AB," and so is line segment (let me put another point on here), let's say this is "X," so line segment "AX" is also a radius.

Now you can also have other forms of lines and line segments that interact in interesting ways with the circle. So you could have a line that intersects the circle at exactly one point. So let's call that point right over there "D." And let's say you have a line, and the only point on the circle, the only point in the set of all the points that are an equal distance from "A," that is also on that line is point "D." And we could call that line "line L." So sometimes you will see lines specified by some of the points on them. So for example, if I have another point right over here called "E," we could call this line "line DE," or we could just put a little script letter here with an "L" and say this is "line L." But this line that only has one point in common with our circle, we call this a tangent line. So "line L" is tangent, tangent to the circle. So let me write it this way: "line L" is tangent to the circle centered at "A." So this tells us that this is the circle we're talking about, because who knows? Maybe we had another circle over here that is centered at "M." So we have to specify. It's not tangent to that one; it's tangent to this one. So this circle symbol with a dot in the middle tells us we're talking about a circle, and this is a circle centered at point "A." I want to be very clear: point "A" is not on the circle; point "A" is the center of the circle. The points on the circle are the points equidistant from point "A." Now, "L" is tangent because it only intersects the circle in one point. You could just as easily imagine a line that intersects the circle at two points. So we could call, maybe this is "F" and this is "G," you could call that line "FG." And a line that intersects at two points, we call this a secant of circle "A." It is a secant line to this circle right here, because it intersects it in two points. Now, if "FG" were just a segment, if it didn't keep on going forever like lines do, if we only spoke about this line segment between "F" and "G," and not thinking about going on forever, then all of a sudden we have a line segment, which we would specify there, and we would call this a chord of the circle, a chord of circle "A." It starts on a point of the circle, a point that is, in this case, 2 cm away, and then it finishes at a point on the circle. So it connects two points on the circle. Now, you can have chords like this, and you can also have a chord, as you can imagine, a chord that actually goes through the center of the circle. So let's call this "point H." And you have a straight line connecting "F" to "H" through "A." (That's about as straight as I could draw it.)
So if you have a chord like that, that contains the actual center of the circle, of course it goes from one point to another point of the circle, and it goes through the center of the circle, we call that a diameter of the circle. And you've probably seen this in tons of problems before, when we were not talking about geometry as formally, but a diameter is made of two radiuses. We know that a radius connects a point to the center, so you have one radius right over here that connects "F" and "A," that's one radius, and you have another radius connecting "A" and "H," a point connecting to the center of the circle. So the diameter is made of these two radiuses (or radii, as I should call it, I think that's the plural for radius), and so the length of a diameter is going to be twice the length of a radius. So we could say the length of the diameter, so the length of "FH" (and once again I don't put the line on top of it when I'm talking about the length), is going to be equal to "FA," the length of segment "FA," plus the length of segment "AH."

Now there's one last thing I want to talk about when we're dealing with circles, and that's the idea of an arc. So we also have the parts of the circle itself. (So let me draw another circle over here.) Let's center this circle at "B." And I'm going to find some points, all the points that are a given distance from "B." So it has some radius; I'm not going to specify it right over here. And let me pick some random points on this circle. Let's call this "J," "K," "S," "T," and "U." Let me center "B" a little bit more in the center here. Now, one interesting thing is, what do you call the length of the circle that goes between two points? Well, you could imagine in every language we would call something like that an "arc," which is also what it's called in geometry. We would call this arc "JK," after the two endpoints of the arc, the two points on the circle that are the endpoints of the arc, and you would use a little notation like that, a little curve on top instead of a straight line. Now, this arc that connects "J" and "K" is called the minor arc; it is the shortest way along the circle to connect "J" and "K." But you could also go the other way around. You could also have this thing that goes all the way around the circle. And that is called the major arc. And usually when we specify the major arc, just to show that you're going kind of the long way around, it's not the shortest way to go between "J" and "K," you will often specify another point that you're going through. So for example, this major arc we could specify: we started at "J," we went through, we could have said "U," "T," or "S," but I will put "T" right over there. We went through "T" and then we went all the way to "K." And so this specifies the major arc. And this thing could have been the same thing as if I wrote "JUK"; these are specifying the same thing, or "JSK." So there are multiple ways to specify this major arc. The one thing I want to make clear is that the minor arc is the shortest distance, so this is the minor arc, and the longer distance around is the major arc. I will leave you there. Maybe in the next few videos we will start playing with some of this notation.
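To pin the diameter relationship down with one worked line (using the 2 cm radius from earlier in the video; the arithmetic is mine, not the video's):

FH = FA + AH = 2 cm + 2 cm = 4 cm, which is the general fact d = 2r.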
http://www.khanacademy.org/math/geometry/circles/v/language-and-notation-of-the-circle
Topics covered: Introduction to Fourier Series; Basic Formulas for Period 2(pi)

Instructor/speaker: Prof. Arthur Mattuck

Well, let's get started. The topic for today is -- Sorry. Thank you. For today and the next two lectures, we are going to be studying Fourier series. Today will be an introduction explaining what they are. And, I'll calculate a few, but I thought before we do that I ought to at least give a couple minutes' overview of why and where we're going with them, and why they're coming into the course at this place at all. So, the situation up to now is that we've been trying to solve equations of the form y'' + ay' + by = f(t), constant coefficient second-order equations, and the f(t) was the input. So, we are considering inhomogeneous equations. This is the input. And so far, the response, then, is the corresponding solution, y(t), maybe with some given initial conditions to pick out a special one we call the response, the response to that particular input. And now, over the last few days, the inputs have been, however, extremely special. For input, the basic input has been an exponential, or sines and cosines. And, the trouble is that we learned how to solve those. But the point is that those seem extremely special. Now, the point of Fourier series is to show you that they are not as special as they look. The reason is that, let's put it this way, any reasonable f(t) which is periodic, it doesn't have to be even very reasonable. It can be somewhat discontinuous, although not terribly discontinuous, which is periodic with period, maybe not the minimal period, but some period two pi. Of course, sin(t) and cos(t) have the exact period two pi, but if I change the frequency to an integer frequency like sin(2t) or sin(26t), two pi would still be a period, although it would not be the period. The period would be shorter. The point is, such a thing can always be represented as an infinite sum of sines and cosines. So, it's going to look like this. There's a constant term you have to put out front. And then, the rest, instead of writing, it's rather long to write unless you use summation notation. So, I will. So, it's a sum from n equal one to infinity, integer values of n, in other words, of a sine and a cosine. It's customary to put the cosine first, an cos(nt), and with the frequency, the n indicates the frequency of the thing. And, the sine term is bn sin(nt). Now, why does that solve the problem of general inputs for periodic functions, at least if the period is two pi or some fraction of it? Well, you could think of it this way. I'll make a little table. Let's put over here the input, and here, I'll put the response. Okay, suppose the input is the function sin(nt). Well, in other words, if you just solve the problem, you put a sin(nt) here, you know how to get the answer, find a particular solution, in other words. In fact, you do it by converting this to a complex exponential, and then all the rigmarole we've been going through. So, let's call the response something. Let's call it yn(t). I'd better index it by n because it, of course, is a response to this particular periodic function. And if the input is cos(nt), that also will have a response, yn. Now, I really can't call them both by the same name. So, why don't we put a little s up here to indicate that that's the response to the sine, yn^s(t). And here, I'll put a little c to indicate the response to the cosine, yn^c(t). You're feeding in cos(nt); what you get out is this function. Now what?
Well, by the way, notice that if n is zero, it's going to take care of a constant term, too. In other words, the reason there is a constant term out front is because that corresponds to cos(0t), which is one. Now, suppose I input instead an cos(nt). All you do is multiply the answer by an. Same here. Multiply the input by bn. You multiply the response. That's because the equation is a linear equation. And now, what am I going to do? I'm going to add them up. If I add them up from the different ends, and take account also of the n equals zero term corresponding to this first constant term, the sum of all these according to my Fourier formula is going to be f(t). What's the sum of the corresponding responses? Well, that's going to be the sum from one to infinity of an yn^c(t) + bn yn^s(t), and there will be some sort of constant term here. Let's just call it c1. So, in other words, if this input produces that response, and these are things which we can calculate, we're led by this formula, Fourier's formula, to the response to things which otherwise we would not have been able to calculate, namely, any periodic function of period two pi. The procedure will be: you've got a periodic function of period two pi. Find its Fourier series, and I'll show you how to do that today. Find its Fourier series, and then the response to that general f(t) will be this infinite series of functions, where these things are things you already know how to calculate. They are the responses to sines and cosines. And, you just form the sum with those coefficients. Now, why does that work? It works by the superposition principle. So, this is true. The reason I can do the adding and the multiplying by constants is I'm using the superposition principle. If this input produces that response, then the sum of a bunch of inputs produces the sum of the corresponding responses. And, why is that? Why can I use the superposition principle? Because the ODE is linear. It's okay, since the ODE is linear. That's what makes all this work. Now, so what we're going to do today is I will show you how to calculate those Fourier series. I will not be able to use it to actually solve any differential equation. It will take us pretty much all the period to show how to calculate a Fourier series. And, okay, so I'm going to solve differential equations on Monday. Wrong. I probably won't even get to it then because the calculation of a Fourier series is a sufficient amount of work that you really want to know all the possible tricks and shortcuts there are. Unfortunately, they are not very clever tricks. They are just obvious things. But, it will take me a period to point out those obvious things, obvious in my sense if not in yours. And, finally, the third day, we'll solve differential equations. I will actually carry out the program. But the main thing we're going to get out of it is another approach to resonance, because the things that we are going to be interested in are picking out which of these terms may possibly produce resonance, and therefore a very crazy response. Some of the terms in the response suddenly get a much bigger amplitude than you would normally have thought they had, because the Fourier series of the input contains resonant terms. Okay, well, that's a big mouthful. Let's get started on calculating. So, the program today is calculate the Fourier series. Given f(t) periodic, having two pi as a period, find its Fourier series.
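For reference, here are the board formulas in one place, as I read them from the lecture (the c0 notation for the constant term follows the later part of the lecture):

\[
f(t) = c_0 + \sum_{n=1}^{\infty}\bigl(a_n \cos nt + b_n \sin nt\bigr),
\]

and by superposition the response is

\[
y(t) = c_1 + \sum_{n=1}^{\infty}\bigl(a_n\, y_n^{c}(t) + b_n\, y_n^{s}(t)\bigr),
\]

where \(y_n^{c}\) and \(y_n^{s}\) are the responses to cos(nt) and sin(nt).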
How, in other words, do I calculate those coefficients, an and bn? Now, the answer is not immediately apparent, and it's really quite remarkable. I think it's quite remarkable, anyway. It's one of the basic things of higher mathematics. And, what it depends upon are certain things called the orthogonality relations. So, this is the place where you've got to learn what such things are. Well, I think it would be a good idea to have a general definition, rather than immediately get into the specifics. So, I'm going to call them u(t) and v(t). I think I will use, since Fourier analysis is most often applied when the variable is time, I think I will stick to the independent variable t all period long, if I remember to, at any rate. So, these are two continuous, or not very discontinuous, functions on the interval from minus pi to pi. Let's make them periodic. Let's say two pi is a period. So, functions, for example, like those guys, sin(t), sin(nt), sin(22t), and so on, say two pi is a period. Well, I want them really on the whole real axis, not just there. Defined for all real numbers. Then, I say that they are orthogonal, perpendicular. But nobody says perpendicular. Orthogonal is the word. Orthogonal on the interval [-pi, pi] if the integral from minus pi to pi of u(t) v(t) dt, the product, is zero. That's called the orthogonality condition on [-pi, pi]. Now, well, it's just the definition. I would love to go into a little song and dance now on what the definition really means, and why the word orthogonal is used, because it really does have something to do with two vectors being orthogonal in the sense in which you live it in 18.02. I'll have to put that on ice for the moment, and whether I get to it or not depends on how fast I talk. But, you probably prefer I talk slowly. So, let's compromise. Anyway, that's the condition. And now, what I say is that that Fourier, that blue Fourier series -- what finding the coefficients an and bn depends upon is this theorem: that in the collection of functions sin(nt), for any value of the integer n -- of course I can assume n is a positive integer, because sine of minus nt is the same as minus sine of nt, sin(-nt) = -sin(nt) -- and cos(mt) -- let's give it a different letter, because I don't want you to think they are exactly the same integers -- so, this is a big collection of functions, as n runs from one to infinity. Here, I could let m run from zero to infinity, because cos(0t) means something. It's a constant, one. Any two distinct ones -- two distinct, you know, how can two things be not different? Well, you know, you talk about two coincident roots. I'm just doing a little overkill. Any two distinct members of this collection are orthogonal on this interval. Of course, they all have two pi as a period, all of them. So, they fall into this general category that I'm talking about, but any two distinct ones are orthogonal on the interval [-pi, pi]. So, if I take the integral from -pi to pi of [sin(3t) cos(4t) dt], I get 0. If I integrate sin(3t) cos(60t), the answer is zero. The same thing with two cosines, or a sine and a cosine. The only time you don't get zero is if you integrate, if you make the two functions the same. Now, how do you know that you could not possibly get the answer zero if the two functions are the same?
If the two functions are the same, then I'm integrating a square. A square is always positive. I'm integrating a square. A square is always positive, and therefore I cannot get the answer zero. But, in the other cases, I might get the answer zero. And the theorem is you always do. Okay, so, why is this? Well, there are three ways to prove this. It's like many fundamental facts in mathematics. There are different ways of going about it. By the way, along with the theorem, I probably should have included, so, I'm far away. But you might as well include, because we're going to need it: what happens if you use the same function? If I take u equal to v, in that case, as I've indicated, you're not going to get the answer zero. But, what you will get is, so, in other words, I'm just asking, what is the integral from minus pi to pi of (sin(nt))^2 dt? That's a case where two of them are the same. I use the same function. What's that? Well, the answer is, it's the same as what you will get if you take the integral of [(cos(nt))^2 dt]. And, the answer to either one of these is pi. That's something you know how to do from 18.01 or the equivalent thereof. You can integrate sine squared. It's one of the things you had to learn for whatever exam you took on methods of integration. Anyway, so I'm not going to calculate this out. The answer turns out to be pi. All right, now, the ways to prove it: you can use trig identities. And, I'm asking you, in one of the early problems in the problem set, to use identities for the product of sine and cosine, expressing it in a form in which it's easy to integrate, and you can prove it that way. Or, you can use -- if you have forgotten the trigonometric identities and want to get some more exercise with complex exponentials -- you can use complex exponentials. So, in another part of the same problem I'm asking you to do one of these, at any rate, using complex exponentials. And now, I'm going to use a mysterious third method, another way. I'm going to use the ODE. I'm going to do that because this is the method. It's not just sines and cosines which are orthogonal. There are masses of orthogonal functions out there. And, the way they are discovered, and the way you prove they're orthogonal, is not with trig identities and complex exponentials, because those only work with sines and cosines. It is, instead, by going back to the differential equation that they solve. And that's, therefore, the method that I'm going to use here, because this is the method which generalizes to many other differential equations other than the simple ones satisfied by sines and cosines. But anyway, that is the source. So, here is the way the proof of these orthogonality conditions goes. I'm going to assume that m is different from n, so that I'm not in either of these two cases. What it depends on is: what's the differential equation that all these functions satisfy? Well, it's a different differential equation depending upon the value of n, but they all look essentially the same. These functions satisfy the differential equation, in other words, this is what they have in common. Let's call the function u. It's going to look better if you let me call it u. The functions sin(nt) and cos(nt) satisfy u'' + n^2 u = 0. In other words, the frequency is n, and therefore the square of the frequency is what you put as the coefficient here.
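To keep the relations straight, here they are in one place, restated from the board work above (a summary, not new material):

\[
\int_{-\pi}^{\pi} u_n(t)\,v_m(t)\,dt = 0 \quad (m \neq n),
\qquad
\int_{-\pi}^{\pi} \sin^2 nt\,dt = \int_{-\pi}^{\pi} \cos^2 nt\,dt = \pi,
\]

where \(u_n\) and \(v_m\) stand for any of the functions sin(nt), cos(mt), and every function in the family satisfies \(u'' + n^2 u = 0\).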
In other words, what these functions have in common is that they satisfy differential equations that look like that. And the only thing that's allowed to vary is the frequency, which is allowed to change. The frequency is in this coefficient of u. Now, the remarkable thing is that's all you need to know. The fact that they satisfy the differential equation, that's all you need to know to prove the orthogonality relationship. Okay, let's try to do it. Well, I need some notation. So, I'm going to let un and vm be any two of the functions. In other words, I'll assume m is different from n. For example, this one could be sin(nt), and that could be sin(mt), or this could be sin(nt) and that could be cos(mt). You get the idea. The subscript indicates what the n or the m is in that function. Any two, and I mean really two distinct ones -- well, if I say that m is not n, then they positively have to be different. So, again, it's overkill with my two's-ness. And, what I'm going to calculate, well, first of all, from the equation, I'm going to write the equation this way. It says that u'' = -n^2 u. That's true for any of these guys. Of course, here, it would be v'' = -m^2 v. You have to make those simple adjustments. And now, what we're going to calculate is the integral from -pi to pi of [un'' vm dt]. Now, just bear with me. Why am I going to do that? I can't explain yet why I'm doing that. But you won't need to ask me the question in five minutes. The point is, this is highly un-symmetric. The u is differentiated twice. The v isn't. But there is a way of turning this into an expression which looks extremely symmetric, where the two are treated the same. And the way to do that is I want to get rid of one of these primes here and put one on there. The way to do that, if you want to integrate one of these guys and differentiate the other to make them look the same, is called integration by parts, the most important theoretical method you learned in 18.01, even though you didn't know that it was the most important theoretical method. Okay, we're going to use it now as a basis for Fourier series. Okay, so I'm going to integrate by parts. Now, the first thing you do, of course, when you integrate by parts, is you just do the integration. You don't do the differentiation. So, the first term, un' vm, is to be evaluated between negative pi and pi. In doing integration by parts between limits, you then subtract what you get by doing both -- you do both the integration and the differentiation, giving the integral of un' vm' -- and, again, evaluate that between limits. Now, I'm just going to BS my way through this. This boundary term is zero. I don't care which un you picked and which vm you picked. The answer here is always going to be zero. Instead of wasting six boards trying to write out the argument, let me wave my hands. Okay, it's clear, for example, that if v is a sine, sin(mt), of course it's zero, because the sine vanishes at both pi and minus pi. If the un were a cosine, after I differentiate it, it became a sine. And so, now it's this side guy that's zero at both ends. So, the only case in which we might have a little doubt is if this is a cosine, and after differentiation, this is also a cosine. In other words, it might look like cos(nt) cos(mt). But, I claim that that's zero, too. Why? Because the cosines are even functions, and therefore, they have the same value at both ends.
So, if I take the value evaluated at pi, and subtract the value at minus pi, again I get zero, because I have the same value at both ends. So, by this entirely convincing argument, no matter what combination of sines and cosines I have here, the answer to that boundary part will always be zero. So, by calculation, but thought calculation; it's just a waste of time to write anything out. You stare at it until you agree that it's so. And now, by this integration by parts, I've taken this highly un-symmetric expression and turned it into something in which the u and the v are treated exactly alike. Well, good, that's nice, but why? Why did I go to this trouble? Okay, now we're going to use the fact that this satisfies the differential equation; in other words, that un'' = -n^2 un. I'm now going to take that expression and evaluate it differently. The integral of un'' vm dt is equal to, well, un'', because it satisfies the differential equation, is equal to -n^2 un. So, what is this? This is -n^2 times the integral from negative pi to pi; I'm replacing un'' by -n^2 un, and I've pulled the -n^2 out. So, it's un here, and the other factor is vm dt. Now, that's the proof. Huh? What do you mean, that's the proof? Okay, well, I'll first state why intuitively that's the end of the argument. And then, I'll spell it out in a little more detail, but the more detail you give for this, the more obscure it gets. Look, I just showed you that this is symmetric in u and v, after you massage it a little bit. Here, I'm calculating it a different way. Is this symmetric in u and v? Well, the answer is yes or no. Is this symmetric in u and v? No. Why? Because of the n. The n favors u. We have what is called a paradox. This thing is symmetric in u and v because I can show it is. And, it's not symmetric in u and v because I can show it isn't. I can show it's not symmetric because it favors the n. Now, there's only one possible resolution of that paradox. Both would be symmetric if what were true? Pardon? All right, let me write it this way. Okay, never mind. You see, the only way this can happen is if this expression is zero. In other words, the only way something can be both symmetric and not symmetric is if it's zero all the time. And, that's what we're trying to prove, that this is zero. But, instead of doing it that way, let me show you. This is equal to that, and therefore, two things, according to Euclid, two things equal to the same thing are equal to each other. So, this equals that, which, in turn, therefore, equals what I would have gotten -- I'm just using the symmetry a different way -- if I had done this calculation with the roles of u and v reversed. And, that turns out to be -m^2 times the integral from -pi to pi of [un vm dt]. So, these two are equal because they are both equal to this. This is equal to that. This equals that. Therefore, how can -n^2 times the integral equal -m^2 times the integral, unless the integral is zero? How's that? Remember, m is different from n. So, what this proves is, therefore, the integral from -pi to pi of [un vm dt] = 0, at least if m is different from n. Now, there is one case I didn't include. Which case didn't I include? un times un is not supposed to be zero, so that case I don't have to worry about, but there is a case that I didn't include. For example, something like cos(nt) sin(nt). Here, the m is the same as the n.
Nonetheless, I am claiming that this is zero because these aren't the same function. One is a cosine. Why is that zero? Can you see mentally that that's zero? Mentally? Well, this is trying to be, in another life, it's trying to be (1/2) sin(2nt), right? And obviously the integral from -pi to pi of [sin(2nt) dt] is zero, because when you integrate it, it turns out to be a cosine, which has the same value at both ends. Well, that was a lot of talking. If this proof is too abstract for you, I won't ask you to reproduce it on an exam. You can go with the proofs using trigonometric identities, and/or complex exponentials. But, you ought to know at least one of those, and for the problem set I'm asking you to fool around a little with at least two of them. Okay, now, what has this got to do with the problem we started with originally? The problem is to explain this blue series. So, our problem is, how, from this, am I going to get the terms of this blue series? So, given f(t), with two pi as a period, find the an and the bn. Okay, let's focus on the an. The bn is the same. Once you know how to do one, you know how to do the other. So, here's the idea. Again, it goes back to something you learned at the very beginning of 18.02, but I don't think it took. But maybe some of you will recognize it. So, what I'm going to do is write it out. Here's the term we're looking for, this one, an cos(nt). Okay, and there are others. It's an infinite series that goes on forever. And now, to make the argument, I've got to put in one more term here. So, I'm going to put in ak cos(kt). I don't mean to imply that k couldn't be more than n, in which case I should have written it over here. I could have also used equally well bk sin(kt) here, and I could have put it there. This is just some other term. This is the an term, and this is the one we want. And, this is some other term. Okay, all right, now, what you do to get the an is you multiply everything through by -- you focus on the one you want, so it's dot, dot, dot, dot, dot -- and you multiply by cos(nt). So, this becomes ak cos(kt) cos(nt). Of course, that gets multiplied, too. But, the one we want also gets multiplied, and it becomes, when I multiply by cos(nt), an (cos(nt))^2. And now, I hope you can see what's going to happen. Now, oops, I didn't multiply the f(t), sorry; the left-hand side gets multiplied by cos(nt) as well. It's the oldest trick in the book. I now integrate everything from minus pi to pi. So I don't endlessly recopy, I'll put the integration in with yellow chalk, and you are left to your own devices. This is definitely a colored pen type of course. Okay, so, you want to integrate from minus pi to pi? Good. Just integrate everything on the right-hand side, also, from minus pi to pi. Plus these other guys, just to indicate that they are out there, too. And now, what happens? What's this? Zero. Every term is zero because of the orthogonality relations. They are all of the form a constant times cos(nt) times something different from cos(nt): sin(kt), cos(kt), or even that constant term. All of the other terms are zero, and the only one which survives is this one. And, what's its value? The integral from minus pi to pi of cosine squared, I put that up somewhere. It's right here, down there. It is pi. So, this term turns into an pi; the an gets dragged along, and the integral of the square of the cosine turns out to be pi. And so, the end result is that we get a formula for an. What is an? Well, all these terms are zero, and nothing is left but an times pi on the right, and this left-hand side.
And therefore, an * pi = the integral from -pi to pi of [f(t) cos(nt) dt]. But, that's an times pi. Therefore, if I want just an, I have to divide by pi: an = (1/pi) * integral from -pi to pi of [f(t) cos(nt) dt]. And, that's the formula for the coefficient an. The argument is exactly the same if you want bn, but I will write it down for the sake of completeness, as they say, and to give you a chance to digest what I've done, you know, 30 seconds to digest it: bn = (1/pi) * integral from -pi to pi of [f(t) sin(nt) dt]. And, that's because the argument is the same. And, the integral of sin^2(nt) is also pi. So, there's no difference there. Now, there's only one little caution. You have to be a little careful. This is for n = 1, 2, and so on, and this is also for n = 1, 2, and so on. Unfortunately, the constant term is a slight exception. We'd better look at that specifically, because if you forget it, you can be led into gross, gross, gross errors. How about the constant term? Suppose I repeat the argument for that in miniature. There is a constant term, c0, plus other stuff; a typical piece of the other stuff is an an cosine, let's say. How am I going to get that constant term? Well, the reason it is a constant is because it's being multiplied by cos(0t), which is one. So, that suggests I should multiply by one. In other words, what I should do is simply take the integral from -pi to pi of [f(t) dt]. What's the answer? Well, the constant integrated from minus pi to pi is how much? It's 2 pi c0, right? And, the other terms all give me zero. Every other term is zero, because if you integrate cos(nt) or sin(nt) over a complete period, you always get zero. There is as much area above the axis as below. Or, you can look at two special cases. Anyway, you always get zero. It's the same thing with the sine here. So, the answer is that c0 is a little special. You don't just put n = 0 into the an formula, because then you would lose a factor of two; c0 should be 1/(2pi) times this integral. Now, there are two kinds of people in the world: the ones who learn two separate formulas, and the ones who just learn two separate notations. So, what most people do is they say, look, I want this to always be the formula for an. That means, even when n = 0, I want this to be the formula. Well, then you are not going to get the right leading term. Instead of getting c0, you're going to get twice it, and therefore the Fourier series isn't written this way. It's written: if you want an a0 there, calculated by this formula, then you've got to write not c0, but a0 / 2. I think you will be happiest, if I have to give you advice, I think you'll be happiest remembering a single formula for the an's and bn's, in which case you have to remember that the constant leading term is a0 / 2, if you insist on using that formula. Otherwise, you have to learn a special formula for the leading coefficient, namely with 1/(2pi) instead of 1/pi. Well, am I really going to calculate a Fourier series in four minutes? Not very likely, but I'll give it a brave college try. Anyway, you will be doing a great deal of it, and your book has lots and lots of examples, too many, in fact. It ruined all the good examples by calculating them for you. But, I will at least outline. Do you want me to spend three minutes outlining a calculation just so you have something to work on in the next boring class you are in? Let's see, so I'll just put a few key things on the board. I would advise you to sit still for this.
Otherwise you're going to hack at it, and take twice as long as you should, even though I know you've been up to 3:00 in the morning doing your problem set. Cheer up. I got up at 6:00 to make up the new one. So, we're even. This should be zero here. So, here's minus pi. Here's pi. Here's one, and negative one. The function starts out like that, and now, to be periodic, it then has to continue on in the same way. So, I think that's enough of its path through life to indicate how it runs. This is a typical square wave, as it's sometimes called. It's an odd function. It goes equally above and below the axis. Now, the integrals, when you calculate them: the an is going to be, okay, look, the an = 0, and you will get that with a little hacking. I'm much more worried about what you'll do with the bn's. Also, next Monday you'll see intuitively why the an is zero, in which case you won't even bother trying to calculate it. How about the bn, though? Well, you see, because the function is discontinuous -- this is my input, my f(t) is that orange discontinuous function -- I have to break the integral into two parts. In the first part, the function is negative one. And there, I will be taking the integral from minus pi to zero of [-1 * sin(nt) dt]. And then, there's another part. The other part, I integrate from zero to pi of what? Well, f(t) = +1. And so, I simply integrate sin(nt) dt. Now, each of these is a perfectly simple integral. The only question is how you combine them. So, this first one, after you calculate it, will be (1 - cos(n pi)) / n. And, this part will also turn out to be (1 - cos(n pi)) / n. And therefore, the sum will be 2*(1 - cos(n pi))/n, and dividing by pi as the formula says, bn = 2*(1 - cos(n pi))/(n pi). Now, what's this cos(n pi)? This is minus one if n is odd. It's plus one if n is even. Now, either you can work with it this way, or you can combine the two cases into a single expression; (-1)^n takes care of both of them. But, the way the answer is normally expressed: if n is even, I get zero; if n is odd, the parenthesis gives two. So, it's four over n pi if n is odd, and zero if n is even. And the final series is a sum of those coefficients times the appropriate -- cosine or sine? Sine terms, because the cosine coefficients all turned out to be zero. I'm sorry I didn't have the chance to do that calculation in detail. But, I think that's enough sketch for you to be able to do the rest of it yourself.
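As a quick numerical sanity check of that last computation (my own sketch, not part of the lecture), here is a small Clojure midpoint-rule approximation of the bn formula; for the square wave it should come out near 4/(n pi) for odd n and near zero for even n:

(defn fourier-bn
  "Approximate b_n = (1/pi) * integral from -pi to pi of f(t) sin(nt) dt
  with a midpoint Riemann sum over the given number of steps."
  [f n steps]
  (let [dt (/ (* 2 Math/PI) steps)]
    (* (/ 1.0 Math/PI)
       (reduce +
               (for [k (range steps)
                     :let [t (+ (- Math/PI) (* (+ k 0.5) dt))]]
                 (* (f t) (Math/sin (* n t)) dt))))))

(defn square-wave
  "The lecture's odd square wave: -1 on (-pi, 0), +1 on (0, pi)."
  [t]
  (if (neg? t) -1.0 1.0))

user=> (fourier-bn square-wave 1 10000)  ;; should be close to 4/pi, about 1.273
user=> (fourier-bn square-wave 2 10000)  ;; should be close to 0
user=> (fourier-bn square-wave 3 10000)  ;; should be close to 4/(3 pi), about 0.424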
http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-15-introduction-to-fourier-series/
Topics covered: Determinants; cross product

Instructor: Prof. Denis Auroux

Lecture Notes - Week 1 Summary (PDF)

Thank you. Let's continue with vectors and operations on them. Remember, the topic yesterday was the dot product. And remember the definition of dot product: the dot product of two vectors is obtained by multiplying the first component with the first component, the second with the second and so on, and summing these, and you get a scalar. And the geometric interpretation of that is that you can also take the length of A, take the length of B, multiply them, and multiply that by the cosine of the angle between the two vectors. We have seen several applications of that. One application is to find lengths and angles. For example, you can use this relation to tell you that the cosine of the angle between two vectors is the dot product divided by the product of the lengths. Another application that we have is to detect whether two vectors are perpendicular. To decide if two vectors are perpendicular to each other, all we have to do is compute their dot product and see if we get zero. And one further application that we did not have time to discuss yesterday, and that I will mention very quickly, is to find components of, let's say, a vector A along a direction u, where u is some unit vector. Let me explain. Let's say that I have some direction. For example, the horizontal axis on this blackboard. But it could be any direction in space. And, to describe this direction, maybe I have a unit vector along this axis. Let's say that I have any vector A, and I want to find out what is the component of A along u. That means: what is the length of the projection of A onto the given direction? This thing here is the component of A along u. Well, how do we find that? Well, we know that here we have a right angle. So this component is just the length of A times the cosine of the angle between A and u. But now that means I should be able to compute it very easily, because that's the same as length of A times length of u times cosine theta, because u is a unit vector. That means its length is equal to one. And so that's the same as the dot product between A and u. That is very easy. And, of course, the most obvious case of that is, say, for example, you want to find the component along i hat, the unit vector along the x axis. Then you take the dot product with i hat, which is <1, 0, 0>. What you get is the first component. And that is, indeed, the x component of a vector. Similarly, say you want the z component: you take the dot product with k hat, and that gives you the last component of your vector. But the same works with a unit vector in any direction. So what is an application of that? Well, for example, in physics maybe you have seen situations where you have a pendulum that swings. You have maybe some mass at the end of a string, and that mass swings back and forth on a circle. And to analyze this mechanically you want to use, of course, Newton's laws of mechanics, and you want to use forces and so on, but I claim that components of vectors are useful here to understand what happens geometrically. What are the forces exerted on this pendulum? Well, there is its weight, which usually points downwards, and there is the tension of the string. And these two forces together are what explains how this pendulum is going to move back and forth. Now, you could try to understand the equations of motion using x, y coordinates, or x, z or whatever you want to call them, let's say x, y.
But really, what causes the pendulum to swing back and forth, and also to somehow stay at a constant distance, are phenomena best understood relative to this circular trajectory. For example, maybe instead of taking components along the x and y axes, we want to look at two other unit vectors. We can look at a vector, let's call it T, that is tangent to the trajectory. Sorry. Can you read that? It's not very readable. T is tangent to the trajectory. And, on the other hand, we can introduce another vector. Let's call that N. And that one is normal, perpendicular to the trajectory. And so now, if you think about it, you can look at the components of the weight along the tangent direction and along the normal direction. And the component of the weight along the tangent direction is what causes acceleration along the trajectory. It is what causes the pendulum to swing back and forth. And the component along N, on the other hand -- that is the part of the weight that tends to pull our mass away from this point. It is what is going to be responsible for the tension of the string. It is why the string is taut and not actually slack, with things moving all over the place. That one is responsible for the tension of the string. And now, of course, if you want to compute things, well, maybe you will call this angle theta, and then you will express things explicitly using sines and cosines, and you will solve for the equations of motion. That would be a very interesting physics problem, but, to save time, we are not going to do it. I'm sure you've seen that in 8.01 or similar classes. And so, to find these components, we will just do dot products. Any questions? No. OK.

Let's move on to our next topic. Here we have found things about lengths, angles and stuff like that. One important concept that we have not understood yet in terms of vectors is area. Let's say that we want to find the area of this pentagon. Well, how do we compute that using vectors? Can we do it using vectors? Yes, we can. And that is going to be the goal. The first thing we should do is probably simplify the problem. We don't actually need to bother with pentagons. All we need to know about are triangles because, for example, you can cut that into three triangles and then sum the areas of the triangles.

Perhaps easier: what is the area of a triangle? Let's start with a triangle in the plane. Well, then we need two vectors to describe it, say A and B here. How do we find the area of a triangle? Well, we all know base times height over two. What is the base? What is the height? The area of this triangle is going to be one-half of the base, which is going to be the length of A, and the height -- well, if you call theta this angle, then this is length B sine theta. Now, that looks a lot like the formula we had there, except for one little catch: this is a sine instead of a cosine. How do we deal with that? Well, what we could do is first find the cosine of the angle. We know how to find the cosine of the angle using dot products. Then solve for sine using sine squared plus cosine squared equals one. And then plug that back into here. Well, that works, but it is kind of a very complicated way of doing it. So there is an easier way. And that is going to be determinants, but let me explain how we get to that, maybe still doing elementary geometry and dot products first. Let's see. What we can do is, instead of finding the sine of theta -- well, we're not good at finding sines of angles, but we are very good now at finding cosines of angles.
Maybe we can find another angle whose cosine is the same as the sine of theta. Well, you have already heard about complementary angles. I take my vector A, my vector B here, and I have an angle theta. Well, let's say that I rotate my vector A by 90 degrees to get a new vector A prime. A prime is just A rotated by 90 degrees. Then the angle between these two guys, let's say theta prime -- well, theta prime is 90 degrees, or pi over two radians, minus theta. So, in particular, cosine of theta prime is equal to sine of theta. In particular, that means that length A, length B, sine theta -- which is what we would need to know in order to find the area of this triangle -- is equal to, well, A and A prime have the same length, so let me replace that by length of A prime (I am not changing anything), length B, cosine theta prime. And now we have something that is much easier for us, because that is just A prime dot B. That looks like a very good plan. There is only one small thing, which is that we don't know yet how to find this A prime.

Well, I think it is not very hard. Let's see. Actually, why don't you guys do the hard work? Let's say that I have a plane vector A with two components a1, a2. And I want to rotate it counterclockwise by 90 degrees. It looks like maybe we should change some signs somewhere. Maybe we should do something with the components. Can you come up with an idea of what it might be? I see a lot of people answering three. I see some other answers, but the majority vote seems to be number three: minus a2 and a1. I think I agree, so let's see. Let's say that we have this vector A with components a1 -- so a1 is here -- and a2 -- so a2 is here. Let's rotate this box by 90 degrees counterclockwise. This box ends up there. It's the same box, just flipped on its side. This thing here becomes a1 and this thing here becomes a2. And that means our new vector A prime is going to be -- well, the first component looks like an a2, but it is pointing to the left when a2 is positive. So, actually, it is minus a2. And the y component is going to be the same as this guy, so it's going to be a1. If you wanted instead to rotate clockwise, then you would do the opposite: you would take a2, minus a1. Is that reasonably clear for everyone? OK.

Let's continue the calculation there. A prime, we have decided, is (minus a2, a1), dot product with -- let's call b1 and b2 the components of B. Then that will be minus a2 b1 plus a1 b2. Let me write that the other way around: a1 b2 minus a2 b1. And that is a quantity that you may already know under the name of the determinant of vectors A and B, which we write symbolically using this notation. We put A and B next to each other inside a two-by-two table and we put these vertical bars. And that means the determinant of these numbers: this guy times this guy minus this guy times this guy. That is called the determinant. And geometrically what it measures is the area -- well, not of a triangle, because we did not divide by two, but of the parallelogram formed by A and B. It measures the area of the parallelogram with sides A and B. And, of course, if you want the triangle then you will just divide by two. The triangle is half the parallelogram. There is one small catch. The area usually is something that is going to be positive. This guy here has no reason to be positive or negative; in fact, if you compute things, you will see that it is sometimes negative, and the sign depends on whether A and B are clockwise or counterclockwise from each other.
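In code, the two-by-two determinant and the parallelogram area look like this -- a minimal Python sketch (the function name is made up for illustration):

    def det2(a, b):
        # determinant of the 2x2 table with rows a = (a1, a2) and b = (b1, b2)
        return a[0] * b[1] - a[1] * b[0]

    A = (3.0, 0.0)
    B = (1.0, 2.0)
    area_parallelogram = abs(det2(A, B))   # the determinant is plus or minus the area
    area_triangle = area_parallelogram / 2
    print(area_parallelogram, area_triangle)   # 6.0 3.0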
I mean, the issue that we have -- well, when we say the area is one-half length A, length B, sine theta, that was assuming that theta is positive, or at least that its sine is positive. Otherwise, if theta is negative, maybe we need to take the absolute value of this. Just to be more truthful, I will say the determinant is either plus or minus the area. Any questions about this? Yes. Sorry. That is not a dot product. That is the usual multiplication. That is length A times length B times sine theta. What does that equal? And so that is equal to the area of a parallelogram. Sorry. Let me explain that again. If I have two vectors A and B, I can form a parallelogram with them, or I can form a triangle. And so the area of the parallelogram is equal to length A, length B, sine theta, which is equal to the determinant of A and B, while the area of the triangle is one-half of that. And, again, to be truthful, I should say these things can be positive or negative. Depending on whether you count the angle positively or negatively, you will get either the area or minus the area. The area is actually the absolute value of these quantities. Is that clear? OK. Yes. If you want to compute the area, you will just take the absolute value of the determinant. I should say the area of the parallelogram, so that it is completely clear. Sorry. Do you have a question? Explain again -- sorry, was the question how a determinant equals the area of a parallelogram? OK. The area of a parallelogram is going to be the base times the height. Let's take this guy to be the base. The length of the base will be the length of A, and the height will be obtained by taking B but only looking at the vertical part. That will be length of B times the sine of theta. That is how I got the area of the parallelogram as length A, length B, sine theta. And then I did this manipulation and this trick of rotating to find a nice formula. Yes. You are asking ahead of what I am going to do in a few minutes. You are asking about the magnitude of A cross B. We are going to learn about cross products in a few minutes. And the answer is yes, but the cross product is for vectors in space. Here I was simplifying things by doing things just in the plane. Just bear with me for five more minutes and we will do things in space. Yes. That is correct. The way you compute this in practice is you just do this. That is how you compute the determinant. Yes. What about three dimensions? Three dimensions we are going to do now. More questions? Should we move on? OK.

Let's move to space. There are two things we can do in space: you can look for the volume of solids, or you can look for the area of surfaces. Let me start with the easier of the two. Let me start with volumes of solids. And we will go back to area, I promise. I claim that there is also a notion of determinant in space. And that is going to tell us how to find volumes. Let's say that we have three vectors A, B and C. Then the definition of their determinant is going to be -- the notation for that, in terms of the components, is the same as over there. We put the components of A, the components of B and the components of C inside vertical bars. And, of course, I have to give meaning to this. This will be a number. And what is that number? Well, the definition I will take is that this is a1 times the determinant of what I get by looking in this lower right corner, the two-by-two determinant b2, b3, c2, c3. Then I will subtract a2 times the determinant of b1, b3, c1, c3. And then I will add a3 times the determinant b1, b2, c1, c2.
And each of these guys means, again: you take b2 times c3 minus c2 times b3, and this times that minus this times that, and so on. In fact, there is a total of six terms in here. And maybe some of you have already seen a different formula for three-by-three determinants where you directly have the six terms. It is the same definition. How to remember the structure of this formula? Well, this is called an expansion according to the first row. We are going to take the entries in the first row: a1, a2, a3. And for each of them we get a term: namely, we multiply it by the two-by-two determinant that we get by deleting the first row and the column where we are. Here, for the coefficient next to a1, when we delete this column and this row, we are left with b2, b3, c2, c3. For the next one we take a2, we delete the row that it is in and the column that it is in, and we are left with b1, b3, c1, c3. And, similarly, with a3, we take what remains, which is b1, b2, c1, c2. Finally, last but not least, there is a minus sign here for the second guy.

It looks like a weird formula. I mean, it is a little bit weird. But it is a formula that you should learn, because it is really, really useful for a lot of things. I should say, if this looks very artificial to you and you would like to know more, there is more in the notes, so read the notes. They will tell you a bit more about what this means, where it comes from and so on. If you want to know a lot more, then some day you should take 18.06, Linear Algebra, where you will learn a lot more about determinants in N-dimensional space with N vectors. And there is a generalization of this in arbitrary dimensions. In this class, we will only deal with two or three dimensions.

Yes. Why is the negative there? Well, that is a very good question. It has to be there so that this will actually equal -- well, what I am going to say right now is that this will give us the volume of a box with sides A, B, C. And the formula just doesn't work if you don't put the negative. There is a more fundamental reason, which has to do with the orientation of space and the fact that if you switch two coordinates in space then basically you change what is called the handedness of the coordinates. If you look at your right hand and your left hand, they are not actually the same. They are mirror images. And, if you swap two coordinate axes, that is what you get. That is the fundamental reason for the minus. Again, we don't need to think too much about that. All we need in this class is the formula.

Why do we care about this formula? It is because of the theorem that says that geometrically the determinant of the three vectors A, B, C is, again, plus or minus -- this determinant could be positive or negative; see those minuses and all sorts of stuff -- plus or minus the volume of the parallelepiped (that is just a fancy name for a box with parallelogram sides, in case you wonder) with sides A, B and C. You take the three vectors A, B and C and you form a box whose sides are all parallelograms. And its volume is going to be the determinant. Other questions? I'm sorry. I cannot quite hear you. Yes. We are going to see how to do it geometrically without a determinant, but then you will see that you actually need a determinant to compute it no matter what. We are going to go back to this and see another formula for volume, but you will see that really I am cheating. I mean, somehow, computationally the only way to compute it is really to use a determinant. That is correct.
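The first-row expansion translates directly into code. A minimal Python sketch (illustrative only; the helper name is made up):

    def det3(a, b, c):
        # Expansion according to the first row; note the minus sign on the a2 term.
        return (a[0] * (b[1] * c[2] - b[2] * c[1])
              - a[1] * (b[0] * c[2] - b[2] * c[0])
              + a[2] * (b[0] * c[1] - b[1] * c[0]))

    # Plus or minus the volume of the box with these three sides:
    print(abs(det3((1, 0, 0), (0, 2, 0), (0, 0, 3))))   # 6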
In general -- I mean, actually, I could say, if you look at the two-by-two determinant, you can also explain it in terms of this expansion. You take a1 and multiply it by this one-by-one determinant b2; then you take a2 and you multiply it by this one-by-one determinant b1, but you put a minus sign. And in general, indeed, when you expand, you would keep putting plus, minus, plus, minus, alternating. More about that in 18.06. Yes. There is a way to do it based on other rows as well, but then you have to be very careful with the sign rules. I will refer you to the notes for that. I mean, you could also do it with a column, by the way -- just be careful about the sign rules. Given how little we will use determinants in this class -- I mean, we will use them in a way that is fundamental, but we won't compute much -- let's say this is going to be enough for us for now.

After determinants, now I can tell you about cross product. And cross product is going to be the answer to your question about area. OK. Let me move on to cross product. Cross product is something that you can apply to two vectors in space. And by that I mean really in three-dimensional space. This is something that is specific to three dimensions. The definition: A cross B -- it is important to really draw your multiplication symbol well so that you don't mistake it for a dot product -- well, that is going to be a vector. That is another reason not to confuse it with dot product. Dot product gives you a number; cross product gives you a vector. They are really completely different operations. They are both called products because someone could not come up with a better name, but they are completely different operations.

What do we do to take the cross product of A and B? Well, we do something very strange. Just as I have told you that a determinant is something where we put in numbers and we get out a number, I am going to violate my own rule. I am going to put together a determinant in which -- well, the last two rows are the components of the vectors A and B, but the first row, strangely, consists of the unit vectors i, j, k. What does that mean? Well, that is not a determinant in the usual sense. If you try to put that into your calculator, it will tell you there is an error: it doesn't know how to put vectors in there; it wants numbers. What it means is that it is symbolic notation that helps you remember what the formula is. The actual formula is -- well, you use this definition. And, if you use that definition, you see that it is i hat times some number. Let me write it as the determinant of a2, a3, b2, b3 times i hat, minus the determinant a1, a3, b1, b3 times j hat, plus a1, a2, b1, b2 times k hat. And so that is the actual definition, in a way that makes complete sense, but to remember this formula without too much trouble it is much easier to think about it in these terms here. That is the definition, and it gives you a vector.

Now, as usual with definitions, the question is: what is it good for? What is the geometric meaning of this very strange operation? Why do we bother to do that? Here is what it does geometrically. Remember, a vector has two different things: it has a length and it has a direction. Let's start with the length. The length of a cross product is the area of the parallelogram in space formed by the vectors A and B. Now, if you have a parallelogram in space, you can find its area just by doing this calculation when you know the coordinates of the points. You do this calculation and then you take the length.
You take this squared plus that squared plus that squared, square root. It looks like a very complicated formula, but it works and, actually, it is the simplest way to do it. This time we don't actually need to put plus or minus, because the length of a vector is always positive. We don't have to worry about that. And what is even more magical is that not only is the length remarkable, but the direction is also remarkable. The direction of A cross B is perpendicular to the plane of the parallelogram. Our two vectors A and B lie together in a plane. What I am telling you is that the vector A cross B will stick straight out of that plane, perpendicular to it. In fact, I have to be more precise: there are two ways that you can be perpendicular to this plane. You can be perpendicular pointing up or pointing down. How do I decide which? Well, there is something called the right-hand rule. What does the right-hand rule say? Well, there are various versions of the right-hand rule depending on which country you learn it in. In France, given the culture, you even learn about it in terms of a corkscrew and a wine bottle. I will just use the usual version here. You take your right hand. If you are left-handed, remember to take your right hand and not the left one. The other right, OK? Then place your hand to point in the direction of A. Let's say my right hand is going in that direction. Now, curl your fingers so that they point towards B. Here that would be kind of into the blackboard. Don't snap any bones. If it doesn't quite work, then rotate your arm so that you can actually physically do it. Then get your thumb to stick straight out. Well, here my thumb is going to go up. And that tells me that A cross B will go up. Let me write that down while you experiment with it. Again, try not to injure yourselves. First, your right hand points parallel to vector A. Then your fingers point in the direction of B. Then your thumb, when you stick it out, is going to point in the direction of A cross B.

Let's do a quick example. Where is my quick example? Here. Let's take i cross j. I see most of you going in the right direction. If you have it pointing in the wrong direction, it might mean that you are using your left hand, for example. I claim that i cross j equals k. Let's see. i points towards us. j points to our right. I guess this is your right. I think. And then your thumb is going to point up. That tells us it is roughly pointing up. And, of course, the length should be one, because if you take the unit square in the x, y plane, its area is one. And the direction should be vertical, because it should be perpendicular to the x, y plane. It looks like i cross j will be k.

Well, let's check with the definition: i, j, k. What is i? i is one, zero, zero. j is zero, one, zero. The coefficient of i will be zero times zero minus zero times one. That is zero. The coefficient of j will be one times zero minus zero times zero; that is a zero -- minus zero j, so it doesn't matter. And the coefficient of k will be one times one, that is one, minus zero times zero, so one k. So we do get i cross j equals k both ways. In this case, it is easier to do it geometrically. If I give you more complicated vectors, probably you will actually want to do the calculation. Any questions? Yes. The coefficient of k: remember, I delete the first row and the last column, so I get this two-by-two determinant. And that two-by-two determinant is one times one minus zero times zero, so that gives me a one.
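For more complicated vectors, the component formulas from the symbolic i, j, k determinant are easy to code up. A minimal Python sketch (names made up for illustration):

    import math

    def cross(a, b):
        # Components from expanding the symbolic i, j, k determinant.
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    i_hat, j_hat = (1, 0, 0), (0, 1, 0)
    print(cross(i_hat, j_hat))          # (0, 0, 1), i.e. k hat, as in the example

    # Area of the parallelogram with sides A and B = length of A cross B:
    A, B = (1, 2, 0), (3, 0, 4)
    print(math.sqrt(sum(x * x for x in cross(A, B))))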
That is what you do with two-by-two determinants. Similarly for the others, but the others turn out to be zero. More questions? Yes. Let me repeat how I got the one in front of k. Remember, in the definition of the determinant I expand according to the entries in the first row. When I get to k, what I do is delete the first row and I delete the last column, the column that contains k. I delete these guys and these guys and I am left with this two-by-two determinant. Now, for a two-by-two determinant, you multiply according to this downward diagonal and then subtract this times that. One times one -- let me see here -- I got one k because that is one times one minus zero times zero, which equals one. Sorry. That is really hard to read. Maybe it will be easier that way. Yes. Let's try. If I do the same for i, I think I will also get zero. Let's do the same for i. I take i, I delete the first row, I delete the first column, I get this two-by-two determinant here, and I get zero times zero, that is zero, minus zero times one. That is the other tricky one: zero times one is zero as well. So zero minus zero is zero. On Monday you should get more practice in recitation on how to compute determinants. Hopefully, it will become very easy for you all to compute these. I know the first time it is kind of a shock, because there are a lot of numbers and a lot of things to do.

Let me return to the question that you asked a bit earlier: how do you actually find volume if you don't want to know about determinants? Well, let's have another look at the volume. Let's say that I have three vectors. Let me put them this way: A, B and C. And let's try to see how else I could think about the volume of this box. Probably you know that the volume of a parallelepiped is the area of the base times the height. How do we use that in practice? Well, what is the area of the base? The base is a parallelogram in space with sides B and C. How do we find the area of a parallelogram in space? Well, we just discovered that: we can do it by taking the cross product. The area of the base -- well, we take the cross product of B and C. That is not quite it, because this is a vector; we would like a number, so we take its length. That is pretty good. What about the height? Well, the height is going to be the component of A in the direction that is perpendicular to the base. Let's take a direction that is perpendicular to the base, and let's call N a unit vector in that direction. Then we can get the height by taking A dot n. That is what we saw at the beginning of class: A dot n will tell me how much A goes in the direction of n. Are you still with me? OK. Let's keep going. Let's think about this vector n. How do I get it? Well, I can get it by actually using the cross product as well, because I said the direction perpendicular to two vectors I can get by taking their cross product and looking at that direction. This is still the length of B cross C. And this one -- so, I claim, n can be obtained by taking B cross C. Well, that points in the right direction, but it is not a unit vector. How do I get a unit vector? I divide by the length. Thanks. I take B cross C and I divide by length B cross C. Well, now I can probably simplify between these two guys. And so what I will get -- what I get out of this is that my volume equals A dot product with the vector B cross C. But, of course, I have to be careful in which order I do it. If I do it the other way around, A dot B, I get a number.
I cannot cross that with anything. I really have to do the cross product first, to get the new vector, and then take my dot product. The fact is that the determinant of A, B, C is equal to this so-called triple product. Well, that looks good geometrically. Let's try to check whether it makes sense with the formulas, just one small thing. We saw the determinant is a1 times the determinant b2, b3, c2, c3, minus a2 times something, plus a3 times something. I will let you fill in the numbers. That is this guy. What about this guy? Well, for the dot product, we take the first component of A, that is a1, and we multiply it by the first component of B cross C. What is the first component of B cross C? Well, it is this determinant b2, b3, c2, c3. If you put B and C instead of A and B in there, you will see that the i component is this guy. Plus a2 times the second component, which is minus some determinant, plus a3 times the third component, which is, again, a determinant. And you can check: you get exactly the same expression, so everything is fine. There is no contradiction in math just yet. On Tuesday we will continue with this and we will start going into matrices, equations of planes and so on. Meanwhile, have a good weekend and please start working on your problem sets so that you can ask your TAs lots of questions on Monday.
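The triple-product identity from the lecture can be checked numerically. A minimal Python sketch (our own helpers, not the course's; det3 is written by row expansion so the check is honest):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def det3(a, b, c):
        return (a[0] * (b[1] * c[2] - b[2] * c[1])
              - a[1] * (b[0] * c[2] - b[2] * c[0])
              + a[2] * (b[0] * c[1] - b[1] * c[0]))

    A, B, C = (1, 2, 3), (0, 1, 4), (5, 6, 0)
    print(dot(A, cross(B, C)), det3(A, B, C))   # both 1: det(A, B, C) = A . (B x C)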
http://ocw.mit.edu/courses/mathematics/18-02-multivariable-calculus-fall-2007/video-lectures/lecture-2-determinants/
As with the other algebras on this site, complex numbers can be introduced in different ways:
- We can look at their algebraic properties in their own right, as a set of symbols and numbers with clear rules for doing different operations (complex number algebra is discussed on this page).
- We can look at complex numbers as extensions or generalisations of other algebras (for example, they extend the real numbers and they are a subset of the quaternions and Clifford algebras).
- We can look at complex number algebra as a system which (unlike the real numbers) has a solution to all quadratic equations.
- We can look at complex numbers in geometric terms, to represent a plane (see the discussion of the complex plane below) and transforms between planes, as explained on this page.
- We can look at complex numbers in terms of their uses, in particular two-dimensional rotations.
- As a magnitude and direction given by: r e^(iθ)
- As an extension field of the real numbers, isomorphic to R[x]/<x²+1>
- As pairs of real numbers with a special multiplication rule
- As the even subalgebra of a Clifford algebra based on 2D vectors which square to +1.

The most useful notation for complex numbers is of the form a + i b. Usually, when an expression contains an addition symbol, we can combine the two operands into a single number, but in the case of a complex number this is as simple as we can get it, and so the plus symbol remains as part of the complex number. We can't combine the two parts of the complex number because they represent different things: the real part and the imaginary part. In geometric terms, we can consider the real and imaginary parts to be at 90° to each other. If one of the parts is zero then it is not necessary to include it, and in this case we can omit the '+', but even then we often leave the 0 and '+' in the expression for clarity.

The 'i' has the following roles:
- It is a marker to distinguish the imaginary part; in other words, 'i' is short for 'imaginary'.
- It represents the square root of minus one: i = √-1. So, if we get i*i appearing while simplifying an expression, we can replace it with -1.
- In geometric terms it can also be considered an operator, representing rotation by 90°.

I have put further explanation of the square root of minus one on this page. Complex numbers are two-dimensional in that they contain two scalar values, and they can represent points in 2D vector space. They are similar to 2D vectors, except with different multiplication rules. Unlike vector multiplication, complex numbers have the following properties:
- multiplication of two complex numbers produces another complex number.
- there is always a multiplicative inverse (division always exists, except for the usual restriction of division by zero).

Complex multiplication is commutative and associative, and it distributes over addition (as defined here). So the set of all complex numbers is a two-dimensional plane which contains the real numbers, shown below as a horizontal line, and the imaginary numbers, shown below as a vertical line. Multiplying by i rotates round to the imaginary axis, and multiplying by i again rotates to the negative real axis. So i*i = -1, which is just another way of saying that i is the square root of minus one. Therefore the square root of a negative number always has a solution when we are working in complex numbers, even though it does not have a solution when we are working purely in real numbers.
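Python's built-in complex type makes the 'multiplying by i rotates by 90°' point easy to check; a minimal sketch (not from the original page):

    p = complex(1.0, 0.0)   # a point on the positive real axis
    print(p * 1j)           # 1j: rotated onto the imaginary axis
    print(p * 1j * 1j)      # (-1+0j): rotated again, onto the negative real axis
    print(1j * 1j)          # (-1+0j): i*i = -1, as stated above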
Adding complex numbers

Just add the real and imaginary components independently, as follows:

(a + i b) + (c + i d) = (a+c) + i (b+d)

Multiplying complex numbers

To multiply, just expand out the terms and group as follows:

(a + i b) * (c + i d) = (a*c - b*d) + i (a*d + b*c)

I don't know if multiplications are still so costly in CPU time on modern computers, but if we do want to minimise multiplications, we can do a complex multiplication using 3 floating point multiplications instead of 4, as follows:

    double t1 = a * other.a;                    // a*c
    double t2 = b * other.b;                    // b*d
    double t3 = (a + b) * (other.a + other.b);  // (a+b)*(c+d) = a*c + a*d + b*c + b*d
    a = t1 - t2;       // real part: a*c - b*d
    b = t3 - t1 - t2;  // imaginary part: a*d + b*c

Modulus of a complex number

This is the distance (r) of a + i b from the origin. It is written as:

r = | a + i b | = sqrt(a*a + b*b)

The modulus is multiplicative:

|a + i b| * |c + i d| = |(a*c - b*d) + i (a*d + b*c)|

Dividing complex numbers

Complex multiplication is commutative, so there is no ambiguity between [a][b]^-1 and [b]^-1[a]; even so, instead of a divide operation we tend to multiply by the inverse. In order to calculate the inverse 1/b, we multiply top and bottom by its conjugate, as follows: conj(b)/(b*conj(b)). Multiplying a complex number by its conjugate gives a real number, and we already know how to divide by a real number. The conjugate of a + i b is a - i b, so:

(a + i b)*conj(a + i b) = a*a + b*b

so:

1/(a + i b) = a/(a*a + b*b) - i b/(a*a + b*b)

Representing rotations using complex numbers

Instead of a + i b, the complex number can also be represented in what is known as the polar form:

r (cos(θ) + i sin(θ))

in other words, replace:
- a = r cos(θ)
- b = r sin(θ)

We can use e^(iθ) = cos(θ) + i sin(θ) to give the exponential form:

r e^(iθ)

If we want to combine the result of two rotations, for example rotate by θ1 then rotate by θ2, then we multiply the corresponding complex numbers, because:

e^(i(θ1+θ2)) = e^(iθ1) * e^(iθ2)

Alternatively, we can combine two rotations by adding the logarithms of the complex numbers.

Applications of complex numbers

A complex number could be used to represent the position of an object in a two-dimensional plane; complex numbers could also represent other quantities in two dimensions like displacements, velocity, acceleration, momentum, etc. But do the usual equations of motion work correctly? There does not seem to be any problem with F = m a where F and a are complex numbers and m is a scalar, but we could have done this just as well with a 2-dimensional vector. To really use complex algebra we would need something that involved multiplication of two complex numbers. I don't think we can use energy equations, because energy is a scalar quantity, isn't it?

Another possibility for using complex numbers in simple mechanics might be to use them to represent rotations. We have seen above how complex numbers can be used to represent rotations, but what is the advantage of doing this? Why use a complex number when only a single number is needed to represent a rotation angle in a two-dimensional plane? Are there any advantages in using complex numbers to represent the complete state of a solid object, in other words a state variable that includes both the position and the orientation? I don't think this would work because, if we want to combine translations then we add them, but if we want to combine rotations using complex numbers then we multiply them. So we can't combine operations using a single operation. We would be better off using a 3-element vector: two elements for position and one for orientation angle. There are a lot of applications in physics and engineering where complex numbers are useful.
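Both the rotation-composition rule and the 3-multiplication trick above are easy to verify with Python's built-in complex type; a minimal sketch (the variable names and test values are made up):

    import cmath

    # Rotation by theta = multiplication by e^(i*theta); rotations compose by multiplying.
    z = complex(1.0, 0.0)                  # a point at angle 0, radius 1
    rot30 = cmath.exp(1j * cmath.pi / 6)   # rotate by 30 degrees
    rot60 = cmath.exp(1j * cmath.pi / 3)   # rotate by 60 degrees
    print(cmath.phase(z * rot30 * rot60))  # ~1.5708 rad, i.e. 90 degrees: the angles added

    # The 3-multiplication trick, checked against the direct product:
    a, b, c, d = 1.0, 2.0, 3.0, 4.0
    t1, t2, t3 = a * c, b * d, (a + b) * (c + d)
    print(complex(t1 - t2, t3 - t1 - t2) == complex(a, b) * complex(c, d))   # True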
A common use of complex numbers is in electrical circuits, where capacitors and inductors act like an imaginary component of resistance (together with ordinary resistance, this is known as impedance). This only applies for alternating current at a given frequency; in other words, if the frequency of the current changes, then the complex 'resistance' value of the components will vary. This type of analysis can also be applied to a mechanical analog of electrical circuits: the spring, mass, damper model. If we want to determine the response of such a model at a particular frequency, then complex numbers are a good way to do this. Another use of complex numbers is for generating fractal patterns on a 2-dimensional plane.
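As a quick illustration of the circuit example above, here is a Python sketch of the standard series RLC impedance formula Z = R + j(ωL - 1/(ωC)); the component values and frequency are made up:

    import math

    R, L, C = 100.0, 0.1, 1e-6      # ohms, henries, farads -- assumed values
    w = 2 * math.pi * 50            # angular frequency for 50 Hz, as an example
    Z = R + 1j * (w * L - 1 / (w * C))
    print(abs(Z), math.degrees(math.atan2(Z.imag, Z.real)))   # magnitude and phase

Changing w changes Z, which is the sense in which the complex 'resistance' varies with frequency.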
http://euclideanspace.com/maths/algebra/realNormedAlgebra/complex/index.htm
Center of Mass

Introduction to the center of mass.

I will now do a presentation on the center of mass. And the center of mass, hopefully, is something that will be a little bit intuitive to you, and it actually has some very neat applications. So, in very simple terms, the center of mass is a point. Let me draw an object. Let's say that this is my object. Let's say it's a ruler. This ruler exists, so it has some mass. And my question to you is: what is the center of mass? And you say, Sal, well, in order to figure out the center of mass, you have to tell me what the center of mass is. And what I tell you is: the center of mass is a point, and it actually doesn't even have to be a point in the object. I'll do an example soon where it won't be. But it's a point. And at that point, for dealing with this object as a whole or the mass of the object as a whole, we can pretend that the entire mass exists at that point.

And what do I mean by saying that? Well, let's say that the center of mass is here. And I'll tell you why I picked this point: because that is pretty close to where the center of mass will be. If the center of mass is there, and let's say the mass of this entire ruler is, I don't know, 10 kilograms -- if a force is applied at the center of mass, let's say 10 Newtons, this ruler will accelerate the same exact way as would a point mass. Let's say that we just had a little dot, but that little dot had the same mass, 10 kilograms, and we were to push on that dot with 10 Newtons. In the case of the ruler, we would accelerate upwards at what? Force divided by mass, so we would accelerate upwards at 1 meter per second squared. And in the case of this point mass -- when I say point mass, I'm just saying something really, really small, but with a mass of 10 kilograms, so it's much smaller, but it has the same mass as this ruler -- this would also accelerate upwards with a magnitude of 1 meter per second squared.

So why is this useful to us? Well, sometimes we have some really crazy objects and we want to figure out exactly what they do. If we know an object's center of mass first, we can know how that object will behave without having to worry about the shape of that object. And I'll give you a really easy way of realizing where the center of mass is. If the object has a uniform distribution -- when I say that, it means, for simple purposes, that it's made out of the same thing, and that thing that it's made out of, its density, doesn't really change throughout the object -- the center of mass will be the object's geometric center. So in this case, this ruler is almost a one-dimensional object. We just go halfway: the distance from here to here and the distance from here to here are the same. This is the center of mass. If we had a two-dimensional object -- let's say we had this triangle and we want to figure out its center of mass -- it'll be the center in two dimensions. So it'll be something like that. Now, if I had another situation, let's say I have this square. I don't know if that's big enough for you to see.
I need to draw it a little thicker. Let's say I have this square, but let's say that half of this square is made from lead. And let's say the other half of the square is made from something lighter than lead: it's made of styrofoam. So in this situation, the center of mass isn't going to be the geometric center. I don't know how much denser lead is than styrofoam, but the center of mass is going to be someplace closer to the right, because this object does not have a uniform density. Exactly where will depend on how much denser the lead is than the styrofoam, which I don't know. But hopefully, that gives you a little intuition of what the center of mass is.

And now I'll tell you something a little more interesting. Every problem we have done so far, we actually made the simplifying assumption that the force acts on the center of mass. So if I have an object -- let's say an object that looks like a horse. If this is the object's center of mass -- I don't know where a horse's center of mass normally is, but let's say a horse's center of mass is here -- and if I apply a force directly on that center of mass, then the object will move in the direction of that force with the appropriate acceleration. We could divide the force by the mass of the entire horse and we would figure out the acceleration in that direction.

But now I will throw in a twist. Actually, in every problem we did, all of these Newton's laws problems, we assumed that the force acted at the center of mass. But something more interesting happens if the force acts away from the center of mass. Let me take that ruler example again. I don't know why I even drew the horse. If I have this ruler again and this is the center of mass, as we said, with any force that acts on the center of mass, the whole object will move in the direction of the force. It'll be shifted by the force, essentially.

Now, this is what's interesting. If that's the center of mass and if I were to apply a force someplace else, away from the center of mass -- let's say I apply a force here -- I want you to think for a second about what will probably happen to the object. Well, it turns out that the object will rotate. So think about it: if we're on the space shuttle or we're in deep space or something, and if I have a ruler, and if I just push at one end of the ruler, what's going to happen? Am I just going to push the whole ruler, or is the whole ruler going to rotate? And hopefully, your intuition is correct: the whole ruler will rotate around the center of mass. And in general, if you were to throw a monkey wrench at someone -- and I don't recommend that you do -- while the monkey wrench is spinning in the air, it's spinning around its center of mass. Same for a knife. If you're a knife catcher, that's something you should think about: the object, when it's free, when it's not fixed to any point, rotates around its center of mass, and that's very interesting. So you can actually throw random objects, and the point around which they rotate is the object's center of mass. That's an experiment that you should do in an open field, around no one else.

Now, with all of this -- and I'll actually tell you in the next video what this is.
When you have a force that causes rotational motion as opposed to a shifting motion, that's torque, but we'll do that in the next video. But now I'll show you just a cool example of how the center of mass is relevant in everyday applications, like high jumping.

So in general, let's say that this is a bar. This is a side view of a bar, and this is the thing holding the bar. And a guy wants to jump over the bar. Most people's center of mass is around their gut. I think evolutionarily that's why our gut is there, because it's close to our center of mass. So there are two ways to jump. You could just jump straight over the bar, like a hurdle jump, in which case your center of mass would have to cross over the bar. And we could figure out this mass, and we can figure out how much energy and how much force is required to propel a mass that high, because we know projectile motion and we know all of Newton's laws.

But what you see a lot in the Olympics is people doing a very strange type of jump where, when they're going over the bar, they look something like this. Their backs are arched over the bar. Not a good picture. But what happens when someone arches their back over the bar like this? I hope you get the point. This is the bar right here. Well, it's interesting. If you took the average of this person's density and figured out his geometric center and all of that, the center of mass in this situation, if someone jumps like that, actually travels below the bar. Because the person arches their back so much, if you took the average of the total mass of where the person is, their center of mass actually goes below the bar. And because of that, you can clear a bar without having your center of mass go as high as the bar, and so you need less force to do it. Or, another way to say it: with the same force, you could clear a higher bar. Hopefully I didn't confuse you, but that's exactly why these high jumpers arch their backs: so that their center of mass is actually below the bar and they don't have to exert as much force.

Anyway, hopefully you found that to be a vaguely useful introduction to the center of mass, and I'll see you in the next video on torque.
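The 'average of the total mass of where the person is' idea is just a mass-weighted average of positions. A minimal Python sketch (the masses and positions are made up, loosely echoing the lead-and-styrofoam square):

    masses = [10.0, 2.0]                   # say, the lead half and the styrofoam half
    positions = [(1.0, 0.0), (-1.0, 0.0)]  # where each half's mass is concentrated
    total = sum(masses)
    com = tuple(sum(m * p[k] for m, p in zip(masses, positions)) / total
                for k in range(2))
    print(com)   # (0.666..., 0.0): shifted toward the heavier, lead side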
http://www.khanacademy.org/science/physics/torque-angular-momentum/torque-tutorial/v/center-of-mass
Soviet Union in World War II

Joseph Stalin was the General Secretary of the Communist Party of the Soviet Union's Central Committee from 1922 until his death in 1953. In the years following Lenin's death in 1924, he rose to become the authoritarian leader of the Soviet Union. In August 1939, at Stalin's direction, the Soviet Union entered into a non-aggression pact with Nazi Germany, containing a secret protocol that divided the whole of eastern Europe into German and Soviet spheres of influence. Thereafter, Germany and the Soviet Union invaded their apportioned sections of Poland. The Soviet Union later invaded Estonia, Latvia, Lithuania and part of Romania, along with an attempted invasion of Finland. Stalin and Hitler later traded proposals for a Soviet entry into the Axis Pact.

In June 1941, Germany began an invasion of the Soviet Union, before which Stalin had ignored reports of a coming German attack. Stalin was confident that the total Allied war machine would eventually stop Germany, and the Soviets stopped the Wehrmacht some 30 kilometers from Moscow. Over the next four years, the Soviet Union repulsed German offensives, such as at the Battle of Stalingrad and the Battle of Kursk, and pressed forward to victory in large Soviet offensives such as the Vistula-Oder Offensive. Stalin began to listen to his generals more after Kursk. Stalin met with Churchill and Roosevelt at the Tehran Conference, where they began to discuss a two-front war against Germany and the future of Europe after the war. Berlin finally fell in April 1945, but Stalin was never fully convinced that his nemesis Hitler had committed suicide.

Fending off the German invasion and pressing to victory in the East required a tremendous sacrifice by the Soviet Union, which suffered the highest military casualties in the war, losing approximately 35 million men. Stalin became personally involved with questionable tactics employed during the war, including the Katyn massacre, Order No. 270, Order No. 227 and the NKVD prisoner massacres. Controversy also surrounds rapes and looting in Soviet-held territory, along with large numbers of deaths of POWs held by the Soviets, and the Soviets' abusive treatment of their own soldiers who had been held in German POW camps.

Pact with Adolf Hitler

In August 1939, Stalin accepted Adolf Hitler's proposal to enter into a non-aggression pact with Nazi Germany, negotiated by the foreign ministers Vyacheslav Molotov for the Soviets and Joachim von Ribbentrop for the Germans. Officially a non-aggression treaty only, an appended secret protocol, also reached on August 23, 1939, divided the whole of eastern Europe into German and Soviet spheres of influence. The USSR was promised an eastern part of Poland, then primarily populated by Ukrainians and Belarusians, in case of its dissolution, and Germany recognized Latvia, Estonia and Finland as parts of the Soviet sphere of influence, with Lithuania added in a second secret protocol in September 1939. Another clause of the treaty was that Bessarabia, then part of Romania, was to be joined to the Moldovan ASSR and become the Moldovan SSR under the control of Moscow. The Pact was reached two days after the breakdown of Soviet military talks with British and French representatives in August 1939 over a potential Franco-Anglo-Soviet alliance.
Political discussions had been suspended on August 2, when Molotov stated they could not be restarted until progress was made in the military talks; those talks, upon which Molotov insisted, started on August 11, and stalled late in August over guarantees for the Baltic states. At the same time, Germany -- with whom the Soviets had been holding secret discussions since July 29 -- argued that it could offer the Soviets better terms than Britain and France, with Ribbentrop insisting, "there was no problem between the Baltic and the Black Sea that could not be solved between the two of us." German officials stated that, unlike Britain, Germany could permit the Soviets to continue their development unmolested, and that "there is one common element in the ideology of Germany, Italy and the Soviet Union: opposition to the capitalist democracies of the West." By that time, Molotov had obtained information regarding the Anglo-German negotiations and a pessimistic report from the Soviet ambassador in France.

After disagreement regarding Stalin's demand to move Red Army troops through Poland and Romania (which Poland and Romania opposed), on August 21 the Soviets proposed adjournment of the military talks, using the excuse that the absence of senior Soviet personnel at the talks interfered with the autumn manoeuvres of the Soviet forces, though the primary reason was the progress being made in the Soviet-German negotiations. That same day, Stalin received assurance that Germany would approve secret protocols to the proposed non-aggression pact that would grant the Soviets land in Poland, the Baltic states, Finland and Romania, after which Stalin telegrammed Hitler that night that the Soviets were willing to sign the pact and that he would receive Ribbentrop on August 23.

Regarding the larger issue of collective security, some historians state that one reason Stalin decided to abandon the doctrine was the shaping of his views of France and Britain by their entry into the Munich Agreement and their subsequent failure to prevent the German occupation of Czechoslovakia. Stalin also viewed the Pact as gaining time for an inevitable war with Hitler, in order to reinforce the Soviet military, and as shifting Soviet borders westwards, which would be militarily beneficial in such a war.

Stalin and Ribbentrop spent most of the night of the Pact's signing trading friendly stories about world affairs and cracking jokes (a rarity for Ribbentrop) about England's weakness, and the pair even joked about how the Anti-Comintern Pact principally scared "British shopkeepers." They further traded toasts, with Stalin proposing a toast to Hitler's health and Ribbentrop proposing a toast to Stalin.

Implementing the division of Eastern Europe and other invasions

On September 1, 1939, the German invasion of its agreed-upon portion of Poland started World War II. On September 17 the Red Army invaded eastern Poland and occupied the Polish territory assigned to it by the Molotov-Ribbentrop Pact, followed by co-ordination with German forces in Poland. Eleven days later, the secret protocol of the Molotov-Ribbentrop Pact was modified, allotting Germany a larger part of Poland, while ceding most of Lithuania to the Soviet Union. The Soviet portions lay east of the so-called Curzon Line, an ethnographic frontier between Russia and Poland drawn up by a commission of the Paris Peace Conference in 1919. In early 1940, the Soviets executed over 25,000 Polish POWs and political prisoners in the Katyn Forest.
After unsuccessfully attempting to install a communist puppet government in Finland, the Soviet Union invaded Finland in November 1939. The Finnish defense defied Soviet expectations, and after stiff losses, Stalin settled for an interim peace granting the Soviet Union less than total domination, annexing only the eastern region of Karelia (10% of Finnish territory). Soviet official casualty counts in the war exceeded 200,000, while Soviet Premier Nikita Khrushchev later claimed the casualties may have been one million. After this campaign, Stalin took actions to bolster the Soviet military, modify training and improve propaganda efforts.

In mid-June 1940, while international attention was focused on the German invasion of France, Soviet NKVD troops raided border posts in Lithuania, Estonia and Latvia. Stalin claimed that the mutual assistance treaties had been violated, and gave six-hour ultimatums for new governments to be formed in each country, including lists of persons for cabinet posts provided by the Kremlin. Thereafter, state administrations were liquidated and replaced by Soviet cadres, followed by mass repression in which 34,250 Latvians, 75,000 Lithuanians and almost 60,000 Estonians were deported or killed. Elections for parliament and other offices were held with single candidates listed; the official results showed the pro-Soviet candidates approved by 92.8 percent of the voters in Estonia, 97.6 percent of the voters in Latvia and 99.2 percent of the voters in Lithuania. The resulting people's assemblies immediately requested admission into the USSR, which was granted by the Soviet Union. In late June 1940, Stalin directed the Soviet annexation of Bessarabia and northern Bukovina, proclaiming this formerly Romanian territory part of the Moldavian Soviet Socialist Republic. But in annexing northern Bukovina, Stalin had gone beyond the agreed limits of the secret protocol.

After the Tripartite Pact was signed by the Axis Powers Germany, Japan and Italy in October 1940, Stalin personally wrote to Ribbentrop about entering an agreement regarding a "permanent basis" for their "mutual interests." Stalin sent Molotov to Berlin to negotiate the terms for the Soviet Union to join the Axis and potentially enjoy the spoils of the pact. At Stalin's direction, Molotov insisted on Soviet interest in Turkey, Bulgaria, Romania, Hungary, Yugoslavia and Greece, though Stalin had earlier personally, and unsuccessfully, lobbied Turkish leaders not to sign a mutual assistance pact with Britain and France. Ribbentrop asked Molotov to sign another secret protocol with the statement: "The focal point of the territorial aspirations of the Soviet Union would presumably be centered south of the territory of the Soviet Union in the direction of the Indian Ocean." Molotov took the position that he could not take a "definite stand" on this without Stalin's agreement. Stalin did not agree with the suggested protocol, and negotiations broke down. In response to a later German proposal, Stalin stated that the Soviets would join the Axis if Germany refrained from acting in the Soviet sphere of influence. Shortly thereafter, Hitler issued a secret internal directive related to his plan to invade the Soviet Union. In an effort to demonstrate peaceful intentions toward Germany, on April 13, 1941, Stalin oversaw the signing of a neutrality pact with the Axis power Japan.
While Stalin had little faith in Japan's commitment to neutrality, he felt that the pact was important for its political symbolism, to reinforce a public affection for Germany. Stalin felt that there was a growing split in German circles about whether Germany should initiate a war with the Soviet Union.

Hitler breaks the pact

During the early morning of June 22, 1941, Hitler broke the pact by starting Operation Barbarossa, the German invasion of Soviet-held territories and the Soviet Union that began the war on the Eastern Front. Before the invasion, Stalin felt that Germany would not attack the Soviet Union until Germany had defeated Britain. At the same time, Soviet generals warned Stalin that Germany had concentrated forces on its borders. Two highly placed Soviet spies in Germany, "Starshina" and "Korsikanets", had sent dozens of reports to Moscow containing evidence of preparation for a German attack. Further warnings came from Richard Sorge, a Soviet spy in Tokyo working undercover as a German journalist. Seven days before the invasion, a Soviet spy in Berlin warned Stalin that the movement of German divisions to the borders was in order to wage war on the Soviet Union. Five days before the attack, Stalin received a report from a spy in the German Air Ministry that "all preparations by Germany for an armed attack on the Soviet Union have been completed, and the blow can be expected at any time." In the margin, Stalin wrote to the people's commissar for state security, "you can send your 'source' from the headquarters of German aviation to his mother. This is not a 'source' but a dezinformator."

Although Stalin increased Soviet western border forces to 2.7 million men and ordered them to expect a possible German invasion, he did not order a full-scale mobilization of forces to prepare for an attack. Stalin felt that a mobilization might provoke Hitler to prematurely begin to wage war against the Soviet Union, which Stalin wanted to delay until 1942 in order to strengthen Soviet forces. Viktor Suvorov suggested that Stalin had made aggressive preparations beginning in the late 1930s and was preparing to invade Germany in the summer of 1941. He believes that Hitler forestalled Stalin and that the German invasion was in essence a pre-emptive strike, precisely as Hitler claimed. This theory was supported by Igor Bunich, Joachim Hoffmann, Mikhail Meltyukhov (see Stalin's Missed Chance) and Edvard Radzinsky (see Stalin: The First In-Depth Biography Based on Explosive New Documents from Russia's Secret Archives). Other historians, especially Gabriel Gorodetsky and David Glantz, reject this thesis. General Fedor von Bock's diary says that the Abwehr fully expected a Soviet attack against German forces in Poland no later than 1942.

In the initial hours after the German attack began, Stalin hesitated, wanting to ensure that the German attack was sanctioned by Hitler, rather than the unauthorized action of a rogue general. Accounts by Nikita Khrushchev and Anastas Mikoyan claim that, after the invasion, Stalin retreated to his dacha in despair for several days and did not participate in leadership decisions. But some documentary evidence of orders given by Stalin contradicts these accounts, leading historians such as Roberts to speculate that Khrushchev's account is inaccurate. In the first three weeks of the invasion, as the Soviet Union tried to defend against large German advances, it suffered 750,000 casualties and lost 10,000 tanks and 4,000 aircraft.
In July 1941, Stalin completely reorganized the Soviet military, placing himself directly in charge of several military organizations. This gave him complete control of his country's entire war effort; more control than any other leader in World War II. A pattern soon emerged where Stalin embraced the Red Army's strategy of conducting multiple offensives, while the Germans overran each of the resulting small patches of newly gained ground, dealing the Soviets severe casualties. The most notable example of this was the Battle of Kiev, where over 600,000 Soviet troops were quickly killed, captured or reported missing. By the end of 1941, the Soviet military had suffered 4.3 million casualties and the Germans had captured 3.0 million Soviet prisoners, 2.0 million of whom died in German captivity by February 1942. German forces had advanced c. 1,700 kilometers, and maintained a front of 3,000 kilometers as measured in a straight line. The Red Army put up fierce resistance during the war's early stages. Even so, according to Glantz, it was plagued by an ineffective defense doctrine against well-trained and experienced German forces, despite possessing some modern Soviet equipment, such as the KV-1 and T-34 tanks.

Soviets stop the Germans

While the Germans made huge advances in 1941, killing millions of Soviet soldiers, at Stalin's direction the Red Army directed sizable resources to prevent the Germans from achieving one of their key strategic goals, the attempted capture of Leningrad. They held the city at the cost of more than a million Soviet soldiers in the region and more than a million civilians, many of whom died from starvation. While the Germans pressed forward, Stalin was confident of an eventual Allied victory over Germany. In September 1941, Stalin told British diplomats that he wanted two agreements: (1) a mutual assistance/aid pact and (2) a recognition that, after the war, the Soviet Union would gain the territories in countries that it had taken pursuant to its division of Eastern Europe with Hitler in the Molotov–Ribbentrop Pact. The British agreed to assistance but refused to agree to the territorial gains; Stalin accepted this refusal months later, as the military situation had deteriorated somewhat by mid-1942. In November 1941, Stalin rallied his generals in a speech given underground in Moscow, telling them that the German blitzkrieg would fail because of weaknesses in the German rear in Nazi-occupied Europe and the underestimation of the strength of the Red Army, and that the German war effort would crumble against the British-American-Soviet "war engine". On November 6, 1941, Stalin addressed the Soviet Union for the second time (the first was on July 3, 1941). Correctly calculating that Hitler would direct efforts to capture Moscow, Stalin concentrated his forces to defend the city, including numerous divisions transferred from Soviet eastern sectors after he determined that Japan would not attempt an attack in those areas. By December, Hitler's troops had advanced to within 30 km of the Kremlin in Moscow. On December 5, the Soviets launched a counteroffensive, pushing German troops back c. 80 km from Moscow in what was the first major defeat of the Wehrmacht in the war. In early 1942, the Soviets began a series of offensives labeled "Stalin's First Strategic Offensives", although there is no evidence that Stalin developed the offensives. The counteroffensive bogged down, in part due to mud from rain in the spring of 1942.
Stalin's attempt to retake Kharkov in the Ukraine ended in the disastrous encirclement of Soviet forces, with over 200,000 Soviet casualties. Stalin attacked the competence of the generals involved. General Georgy Zhukov and others subsequently revealed that some of those generals had wished to remain in a defensive posture in the region, but Stalin and others had pushed for the offensive. Some historians have doubted Zhukov's account. At the same time, Hitler was worried about American support after the United States' entry into the war following the attack on Pearl Harbor, and about a potential Anglo-American invasion on the Western Front in 1942 (which did not occur until the summer of 1944). He changed his primary goal from an immediate victory in the East to the longer-term goal of securing the southern Soviet Union to protect oil fields vital to the German war effort. While Red Army generals correctly judged the evidence that Hitler would shift his efforts south, Stalin thought it a flanking move in the German attempt to take Moscow. The German southern campaign began with a push to capture the Crimea, which ended in disaster for the Red Army. Stalin publicly criticized his generals' leadership. In their southern campaigns, the Germans took 625,000 Red Army prisoners in July and August 1942 alone. At the same time, in a meeting in Moscow, Churchill privately told Stalin that the British and Americans were not yet prepared to make an amphibious landing against a fortified Nazi-held French coast in 1942, and would direct their efforts to invading German-held North Africa. He pledged a campaign of massive strategic bombing, to include German civilian targets. Estimating that the Russians were "finished," the Germans began another southern operation in the fall of 1942, the Battle of Stalingrad. Hitler insisted upon splitting German southern forces in a simultaneous siege of Stalingrad and an offensive against Baku on the Caspian Sea. Stalin directed his generals to spare no effort to defend Stalingrad. Although the Soviets suffered in excess of 1.1 million casualties at Stalingrad, their victory over German forces, including the encirclement of 290,000 Axis troops, marked a turning point in the war. Within a year after Barbarossa, Stalin reopened the churches in the Soviet Union. He may have wanted to motivate the majority of the population who had Christian beliefs. By changing the official policy of the party and the state towards religion, he could engage the Church and its clergy in mobilizing the war effort. On September 4, 1943, Stalin invited the metropolitans Sergius, Alexy and Nikolay to the Kremlin. He proposed to reestablish the Moscow Patriarchate, which had been suspended since 1925, and elect the Patriarch. On September 8, 1943, Metropolitan Sergius was elected Patriarch. One account, by Radzinsky, holds that Stalin's reversal followed a sign he supposedly received from heaven: Ilya, Metropolitan of the Lebanon Mountains, claimed to have received a sign from heaven that "The churches and monasteries must be reopened throughout the country. Priests must be brought back from imprisonment, Leningrad must not be surrendered, but the sacred icon of Our Lady of Kazan should be carried around the city boundary, taken on to Moscow, where a service should be held, and thence to Stalingrad (Tsaritsyn)." Shortly thereafter, Stalin's attitude changed. Radzinsky wrote: "Whatever the reason, after his mysterious retreat, he began making his peace with God.
Something happened which no historian has yet written about. On his orders many priests were brought back from the camps. In Leningrad, besieged by the Germans and gradually dying of hunger, the inhabitants were astounded, and uplifted, to see the wonder-working icon of Our Lady of Kazan brought out into the streets and borne in procession." Radzinsky asked, "Had he seen the light? Had fear made him run to his Father? Had the Marxist God-Man simply decided to exploit belief in God? Or was it all of these things at once?"

Soviet push to Germany

The Soviets repulsed the important German strategic southern campaign and, although 2.5 million Soviet casualties were suffered in that effort, it permitted the Soviets to take the offensive for most of the rest of the war on the Eastern Front. In 1943, Stalin acceded to his generals' call for the Soviet Union to take a defensive stance because of disappointing losses after Stalingrad, a lack of reserves for offensive measures and a prediction that the Germans would likely next attack a bulge in the Soviet front at Kursk, such that defensive preparations there would use resources more efficiently. The Germans did attempt an encirclement attack at Kursk, which was successfully repulsed by the Soviets after Hitler canceled the offensive, in part because of the Allied invasion of Sicily, though the Soviets suffered over 800,000 casualties. Kursk also marked the beginning of a period where Stalin became more willing to listen to the advice of his generals. By the end of 1943, the Soviets occupied half of the territory taken by the Germans in 1941–1942. Soviet military industrial output also had increased substantially from late 1941 to early 1943 after Stalin had moved factories well to the east of the front, safe from German invasion and air attack. The strategy paid off, as such industrial increases were able to occur even while the Germans in late 1942 occupied over half of European Russia, including 40% (80 million people) of its population, and c. 2,500,000 square kilometers of Russian territory. The Soviets had also prepared for war for over a decade, including giving 14 million civilians some military training. Accordingly, while almost all of the original 5 million men of the Soviet army had been wiped out by the end of 1941, the Soviet military had swelled to 8 million members by the end of that year. Despite substantial losses in 1942 far in excess of German losses, Red Army size grew even further, to 11 million. While there is substantial debate over whether Stalin helped or hindered these industrial and manpower efforts, Stalin left most economic wartime management decisions in the hands of his economic experts. While some scholars claim that evidence suggests that Stalin considered, and even attempted, negotiating peace with Germany in 1941 and 1942, others find this evidence unconvincing and even fabricated. In November 1943, Stalin met with Churchill and Roosevelt in Tehran. Roosevelt told Stalin that he hoped that Britain and America opening a second front against Germany could initially draw 30–40 German divisions from the Eastern Front. Stalin and Roosevelt, in effect, ganged up on Churchill by emphasizing the importance of a cross-channel invasion of German-held northern France, while Churchill had always felt that Germany was more vulnerable in the "soft underbelly" of Italy (which the Allies had already invaded) and the Balkans.
The parties later agreed that Britain and America would launch a cross-channel invasion of France in May 1944, along with a separate invasion of southern France. Stalin insisted that, after the war, the Soviet Union should incorporate the portions of Poland it occupied pursuant to the Molotov-Ribbentrop Pact with Germany, a demand Churchill tabled. In 1944, the Soviet Union made significant advances across Eastern Europe toward Germany, including Operation Bagration, a massive offensive in Belorussia against the German Army Group Centre. Stalin, Roosevelt and Churchill closely coordinated, such that Bagration occurred at roughly the same time as the American and British initiation of the invasion of German-held Western Europe on France's northern coast. The operation resulted in the Soviets retaking Belorussia and the western Ukraine, along with the effective destruction of Army Group Centre and 300,000 German casualties, though at the cost of over 750,000 Soviet casualties. Successes in Operation Bagration and in the year that followed were, in large part, due to a weakened Wehrmacht that lacked the fuel and armament it needed to operate effectively, growing Soviet advantages in manpower and materials, and the attacks of the Allies on the Western Front. In his 1944 May Day speech, Stalin praised the Western allies for diverting German resources in the Italian Campaign, Tass published detailed lists of the large numbers of supplies coming from the Western allies, and Stalin made a speech in November 1944 stating that Allied efforts in the West had already quickly drawn 75 German divisions to defend that region, without which the Red Army could not yet have driven the Wehrmacht from Soviet territories. The weakened Wehrmacht also helped Soviet offensives because no effective German counter-offensive could be launched. Beginning in the summer of 1944, however, a reinforced German Army Group Centre did prevent the Soviets from advancing in the area around Warsaw for nearly half a year. Some historians claim that the Soviets' failure to advance was a purposeful stall to allow the Wehrmacht to slaughter the members of the Warsaw Uprising, launched by the Polish home army in August 1944 as the Red Army approached, though others dispute the claim and cite sizable, unsuccessful Red Army efforts to defeat the Wehrmacht in that region. Earlier in 1944, Stalin had insisted that the Soviets would annex the portions of Poland they had divided with Germany in the Molotov-Ribbentrop Pact, while the Polish government in exile, which the British insisted must be involved in postwar Poland, demanded that the Polish border be restored to prewar locations. The rift further highlighted Stalin's blatant hostility toward the anti-communist Polish government in exile and its Polish home army, which Stalin felt threatened his plans to create a post-war Poland friendly to the Soviet Union. Further exacerbating the rift were Stalin's refusal to resupply the Polish home army and his refusal to allow American supply planes to use the necessary Soviet air bases to ferry supplies to it; in a letter to Roosevelt and Churchill, Stalin referred to the home army as "power-seeking criminals." Worried about the possible repercussions of those actions, Stalin later began a Soviet supply airdrop to Polish rebels, though most of the supplies ended up in the hands of the Germans.
The uprising ended in disaster with 20,000 Polish rebels and up to 200,000 civilians killed by Wehrmacht forces, with Soviet forces entering the city in January 1945. Other important advances occurred in late 1944, such as the invasion of Romania in August and of Bulgaria the following month: the Soviet Union declared war on Bulgaria in September 1944 and invaded the country, installing a communist government. Following the invasion of these Balkan countries, Stalin and Churchill met in the fall of 1944, where they agreed upon various percentages for "spheres of influence" in several Balkan states, though the diplomats for neither leader knew what the term actually meant. The Red Army also expelled German forces from Lithuania and Estonia in late 1944 at the cost of 260,000 Soviet casualties. In late 1944, Soviet forces battled fiercely to capture Hungary in the Budapest Offensive, but could not take it, which became a topic so sensitive to Stalin that he refused to allow his commanders to speak of it. The Germans held out in the subsequent Battle of Budapest until February 1945, when the remaining Hungarians signed an armistice with the Soviet Union. Victory at Budapest permitted the Red Army to launch the Vienna Offensive in April 1945. To the northeast, the taking of Belorussia and the western Ukraine permitted the Soviets to launch the massive Vistula–Oder Offensive. German intelligence had incorrectly guessed that the Soviets would have a 3-to-1 numerical superiority; it was actually 5-to-1 (over 2 million Red Army personnel attacking 450,000 German defenders), and the offensive's successful culmination carried the Red Army from the Vistula river in Poland to the Oder river in eastern Germany. Stalin's shortcomings as a strategist are frequently noted in connection with the massive Soviet loss of life and early Soviet defeats. An example is the summer offensive of 1942, which led to even more losses by the Red Army and the recapture of the initiative by the Germans. Stalin eventually recognized his lack of know-how and relied on his professional generals to conduct the war. Additionally, Stalin was well aware that other European armies had utterly disintegrated when faced with Nazi military efficacy, and he responded effectively by subjecting his army to galvanizing terror and nationalist appeals to patriotism. He also appealed to the Russian Orthodox Church and to national Russian imagery.

Final Victory

By April 1945, Germany faced its last days with 1.9 million German soldiers in the East fighting 6.4 million Red Army soldiers while 1 million German soldiers in the West battled 4 million Western Allied soldiers. While initial talk existed of a race to Berlin by the Allies, after Stalin successfully lobbied at Yalta for eastern Germany to fall within the Soviet "sphere of influence", no plans were made by the Western Allies to seize the city by a ground operation. Stalin still remained suspicious that Western Allied forces holding at the Elbe river might move on the capital and, even in the last days, that the Americans might employ their two airborne divisions to capture the city. Stalin directed the Red Army to move rapidly in a broad front into Germany because he did not believe the Western Allies would hand over territory they occupied, while making the capture of Berlin the overriding objective.
After capturing East Prussia, three Red Army fronts converged on the heart of eastern Germany, with one of the last pitched battles of the war putting the Soviets at the virtual gates of Berlin. By April 24, Berlin was encircled by elements of two Soviet fronts, one of which had begun a massive shelling of the city on April 20 that would not end until the city's surrender. On April 30, Hitler and Eva Braun committed suicide, after which Soviet forces found their remains, which had been burned at Hitler's directive. German forces surrendered a few days later. Some historians argue that Stalin delayed the final push for Berlin by two months in order to capture other areas for political reasons, which they argue gave the Wehrmacht time to prepare and increased Soviet casualties (which exceeded 400,000); other historians contest this. Despite the Soviets' possession of Hitler's remains, Stalin did not believe that his old nemesis was actually dead, a belief he retained for years after the war. Stalin also later directed aides to spend years researching and writing a secret book about Hitler's life for his own private reading, one that reflected Stalin's prejudices, including an absence of criticism of Hitler for his treatment of the Jews. Fending off the German invasion and pressing to victory over Nazi Germany in World War II required a tremendous sacrifice by the Soviet Union, greater than that of any other country in human history. Soviet military casualties totaled approximately 35 million (official figures 28.2 million), with approximately 14.7 million killed, missing or captured (official figures 11.285 million). Although figures vary, the Soviet civilian death toll probably reached 20 million. Millions of Soviet soldiers and civilians disappeared into German detention camps and slave labor factories, while millions more suffered permanent physical and mental damage. Economic losses, including losses in resources and manufacturing capacity in western Russia and Ukraine, were also catastrophic. The war resulted in the destruction of approximately 70,000 Soviet cities, towns and villages. Destroyed in that process were 6 million houses, 98,000 farms, 32,000 factories, 82,000 schools, 43,000 libraries, 6,000 hospitals and thousands of kilometers of roads and railway track.

Questionable tactics

After taking around 300,000 Polish prisoners in 1939 and early 1940, NKVD officers conducted lengthy interrogations of the prisoners in camps that were, in effect, a selection process to determine who would be killed. On March 5, 1940, pursuant to a note to Stalin from Lavrenty Beria, the members of the Soviet Politburo (including Stalin) signed an order to execute 25,700 Polish POWs, labeled "nationalists and counterrevolutionaries", kept at camps and prisons in occupied western Ukraine and Belarus. This became known as the Katyn massacre. Major-General Vasili M. Blokhin, chief executioner for the NKVD, personally shot 6,000 of the captured Polish officers in 28 consecutive nights, which remains one of the most organized and protracted mass murders by a single individual on record. During his 29-year career, Blokhin shot an estimated 50,000 people, making him ostensibly the most prolific official executioner in recorded world history. Stalin personally told a Polish general requesting information about missing officers that all of the Poles were freed, and that not all could be accounted for because the Soviets "lost track" of them in Manchuria.
After Polish railroad workers found the mass grave, the Nazis used the massacre to attempt to drive a wedge between Stalin and the other Allies, including bringing in a European commission of investigators from twelve countries to examine the graves. In 1943, as the Soviets prepared to retake Poland, Nazi Propaganda Minister Joseph Goebbels correctly guessed that Stalin would attempt to falsely claim that the Germans massacred the victims. As Goebbels predicted, the Soviets had a "commission" investigate the matter, falsely concluding that the Germans had killed the POWs. The Soviets did not admit responsibility until 1990. On August 16, 1941, in an attempt to revive a disorganized Soviet defense system, Stalin issued Order No. 270, demanding that any commanders or commissars "tearing away their insignia and deserting or surrendering" be considered malicious deserters. The order required superiors to shoot these deserters on the spot. Their family members were subjected to arrest. The second provision of the order directed all units fighting in encirclements to use every possibility to fight. The order also required division commanders to demote and, if necessary, even to shoot on the spot those commanders who failed to command the battle directly in the battlefield. Thereafter, Stalin also conducted a purge of several military commanders, who were shot for "cowardice" without a trial. Weeks after the German invasion began in June 1941, Stalin directed that the retreating Red Army deny resources to the enemy through a scorched earth policy of destroying the infrastructure and food supplies of areas before the Germans could seize them, and that partisans be set up in evacuated areas. This, along with abuse by German troops, caused starvation and suffering among the civilian population that was left behind. Stalin feared that Hitler would use disgruntled Soviet citizens to fight his regime, particularly people imprisoned in the Gulags. He thus ordered the NKVD to take care of the situation. They responded by murdering around one hundred thousand political prisoners throughout the western parts of the Soviet Union, with methods that included bayoneting people to death and tossing grenades into crowded cells. Many others were simply deported east. In July 1942, Stalin issued Order No. 227, directing that any commander or commissar of a regiment, battalion or army who allowed retreat without permission from his superiors was subject to a military tribunal. The order called for soldiers found guilty of disciplinary offenses to be forced into "penal battalions", which were sent to the most dangerous sections of the front lines. From 1942 to 1945, 427,910 soldiers were assigned to penal battalions. The order also directed "blocking detachments" to shoot fleeing panicked troops at the rear. In the first two months following the order, over 1,000 troops were shot by blocking units and blocking units sent over 130,000 troops to penal battalions. Despite having some effect initially, this measure proved to have a deteriorating effect on the troops' morale, so by October 1942 the idea of regular blocking units was quietly dropped. By 20 November 1944, the blocking units were officially disbanded. After the capture of Berlin, Soviet troops reportedly raped German women and girls, with total victim estimates ranging from tens of thousands to two million. During and after the occupation of Budapest, Hungary, an estimated 50,000 women and girls were raped.
Regarding rapes that occurred in Yugoslavia, Stalin responded to a Yugoslav partisan leader's complaints by saying, "Can't he understand it if a soldier who has crossed thousands of kilometers through blood and fire and death has fun with a woman or takes some trifle?" In former Axis countries, such as Germany, Romania and Hungary, Red Army officers generally viewed cities, villages and farms as being open to pillaging and looting. For example, Red Army soldiers and NKVD members frequently looted transport trains in 1944 and 1945 in Poland, and Soviet soldiers set fire to the city centre of Demmin while preventing the inhabitants from extinguishing the blaze, which, along with multiple rapes, played a part in causing over 900 citizens of the city to commit suicide. In the Soviet occupation zone of Germany, when members of the SED reported to Stalin that looting and rapes by Soviet soldiers could result in negative consequences for the future of socialism in post-war East Germany, Stalin reacted angrily: "I shall not tolerate anybody dragging the honour of the Red Army through the mud." Accordingly, all evidence of looting, rapes and destruction by the Red Army was deleted from archives in the Soviet occupation zone. Stalin's personal military leadership was emphasized as part of the "cult of personality" after the publication of "Stalin's ten victories", extracted from his speech of 6 November 1944, "27th anniversary of the Great October socialist revolution" (Russian: «27-я годовщина Великой Октябрьской социалистической революции»), delivered at the 1944 meeting of the Moscow Soviet's deputies. According to recent figures, of an estimated four million POWs taken by the Russians, including Germans, Japanese, Hungarians, Romanians and others, some 580,000 never returned, presumably victims of privation or the Gulags, compared with 3.5 million Soviet POWs who died in German camps out of the 5.6 million taken. Soviet POWs and forced laborers who survived German captivity were sent to special "transit" or "filtration" camps meant to determine which were potential traitors. Of the approximately 4 million to be repatriated, 2,660,013 were civilians and 1,539,475 were former POWs. Of the total, 2,427,906 were sent home and 801,152 were reconscripted into the armed forces. Another 608,095 were enrolled in the work battalions of the defense ministry, and 272,867 were transferred to the authority of the NKVD for punishment, which meant a transfer to the Gulag system. The remaining 89,468 stayed in the transit camps as reception personnel until the repatriation process was finally wound up in the early 1950s. During the rapid German advances in the early months of the war, nearly reaching the cities of Moscow and Leningrad, the bulk of Soviet industry which could not be evacuated was either destroyed or lost due to German occupation. Agricultural production was interrupted, with grain harvests left standing in the fields, which would later cause hunger reminiscent of the early 1930s. In one of the greatest feats of war logistics, factories were evacuated on an enormous scale, with 1,523 factories dismantled and shipped eastwards along four principal routes to the Caucasus, Central Asian, Ural, and Siberian regions. In general, the tools, dies and production technology were moved, along with the blueprints and their management, engineering staffs and skilled labour. The whole of the Soviet Union became dedicated to the war effort.
The population of the Soviet Union was probably better prepared than any other nation involved in the fighting of World War II to endure the material hardships of the war. This is primarily because the Soviets were so used to shortages and coping with economic crisis in the past, especially during wartime—World War I brought similar restrictions on food. Still, conditions were severe. World War II was especially devastating to citizens of the USSR because it was fought on Soviet territory and caused massive destruction. In Leningrad, under German siege, over a million people died of starvation and disease. Many factory workers were teenagers, women and old people. The government implemented rationing in 1941 and first applied it to bread, flour, cereal, pasta, butter, margarine, vegetable oil, meat, fish, sugar, and confectionery all across the country. Rations remained largely stable during the war. Additional rations were often so expensive that they could not add substantially to a citizen's food supply unless that person was especially well-paid. Peasants received no rations and had to make do with local resources they farmed themselves. Most rural peasants struggled and lived in unbearable poverty, but others sold any surplus they had at a high price, and a few became rouble millionaires, until a currency reform two years after the end of the war wiped out their wealth. Despite harsh conditions, the war led to a spike in Soviet nationalism and unity. Soviet propaganda toned down the extreme Communist rhetoric of the past as the people now rallied behind the goal of protecting their Motherland against the evils of the German invaders. Ethnic minorities thought to be collaborators were forced into exile. Religion, which was previously shunned, became a part of the Communist Party's propaganda campaign in Soviet society in order to mobilize its religious elements. The social composition of Soviet society changed drastically during the war. There was a burst of marriages in June and July 1941 between people about to be separated by the war, and in the next few years the marriage rate dropped off steeply, with the birth rate falling shortly thereafter to only about half of what it would have been in peacetime. For this reason, mothers of large families received substantial honors and monetary benefits during the war—a mother could earn around 1,300 rubles for having her fourth child and up to 5,000 rubles for her tenth.

Survival in Leningrad

The city of Leningrad endured more suffering and hardships than any other city in the Soviet Union during the war, as it was under siege for 900 days, from September 1941 to January 1944. Hunger, malnutrition, disease, starvation, and even cannibalism became common during the siege of Leningrad; civilians lost weight, grew weaker, and became more vulnerable to diseases. Citizens of Leningrad managed to survive through a number of methods with varying degrees of success. Since only four hundred thousand Russians were evacuated before the siege began, this left two and a half million in Leningrad, including four hundred thousand children. More managed to escape the city; this was most successful when Lake Ladoga froze over and people could walk over the ice road—or "road of life"—to safety. Most survival strategies during the siege, though, involved staying within the city and facing the problems through resourcefulness or luck.
One way to do this was by securing factory employment, because many factories became autonomous and possessed more of the tools of survival during the winter, such as food and heat. Workers got larger rations than regular civilians, and factories were likely to have electricity if they produced crucial goods. Factories also served as mutual-support centers and had clinics and other services like cleaning crews and teams of women who would sew and repair clothes. Factory employees were still driven to desperation on occasion, and people resorted to eating glue or horses in factories where food was scarce, but factory employment was the most consistently successful method of survival, and at some food production plants not a single person died. Survival opportunities open to the larger Soviet community included bartering and farming on private land. Black markets thrived as private barter and trade became more common, especially between soldiers and civilians. Soldiers, who had more food to spare, were eager to trade with Soviet citizens who had extra warm clothes. Planting vegetable gardens in the spring became popular, primarily because citizens got to keep everything grown on their own plots. The campaign also had a potent psychological effect and boosted morale, a survival component almost as crucial as bread. Many of the most desperate Soviet citizens turned to crime as a way to support themselves in trying times. Most common was the theft of food and of ration cards, which could prove fatal for a malnourished person if their card was stolen more than a day or two before a new card was issued. For these reasons, the stealing of food was severely punished and a person could be shot for as little as stealing a loaf of bread. More serious crimes such as murder and cannibalism also occurred, and special police squads were set up to combat these crimes, though by the end of the siege, roughly 1,500 had been arrested for cannibalism.
- Stalin as War Leader, History Today
- Roberts 1992, pp. 57–78
- Encyclopædia Britannica, German-Soviet Nonaggression Pact, 2008
- Text of the Nazi-Soviet Non-Aggression Pact, executed August 23, 1939
- Christie, Kenneth, Historical Injustice and Democratic Transition in Eastern Asia and Northern Europe: Ghosts at the Table of Democracy, RoutledgeCurzon, 2002, ISBN 0-7007-1599-1
- Roberts 2006, pp. 30–32
- Lionel Kochan. The Struggle For Germany. 1914-1945. New York, 1963
- Shirer, William L. (1990), The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, p. 504, ISBN 0-671-72868-7
- Watson 2000, p. 709
- Michael Jabara Carley (1993). End of the 'Low, Dishonest Decade': Failure of the Anglo-Franco-Soviet Alliance in 1939. Europe-Asia Studies 45 (2), 303-341.
- Watson 2000, p. 715
- Watson 2000, p. 713
- Fest, Joachim C., Hitler, Houghton Mifflin Harcourt, 2002, ISBN 0-15-602754-2, page 588
- Ulam, Adam Bruno, Stalin: The Man and His Era, Beacon Press, 1989, ISBN 0-8070-7005-X, pages 509-10
- Shirer, William L., The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, 1990, ISBN 0-671-72868-7, page 503
- Fest, Joachim C., Hitler, Harcourt Brace Publishing, 2002, ISBN 0-15-602754-2, pages 589-90
- Vehviläinen, Olli, Finland in the Second World War: Between Germany and Russia, Macmillan, 2002, ISBN 0-333-80149-0, page 30
- Bertriko, Jean-Jacques Subrenat, A. and David Cousins, Estonia: Identity and Independence, Rodopi, 2004, ISBN 90-420-0890-3, page 131
- Murphy 2006, p.
23 - Shirer, William L., The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, 1990 ISBN 0-671-72868-7, pages 528 - Max Beloff The Foreign Policy of Soviet Russia. vol. II, I936-41. Oxford University Press, 1949. p. 166, 211. - For example, in his article From Munich to Moscow, Edward Hallett Carr explains the reasons behind signing a non-aggression pact between USSR and Germany as follows: Since 1934 the U.S.S.R. had firmly believed that Hitler would start a war somewhere in Europe: the bugbear of Soviet policy was that it might be a war between Hitler and the U.S.S.R. with the western powers neutral or tacitly favourable to Hitler. In order to conjure this bugbear, one of three alternatives had to be envisaged: (i) a war against Germany in which the western powers would be allied with the U.S.S.R. (this was the first choice and the principal aim of Soviet policy from 1934–38); (2) a war between Germany and the western powers in which the U.S.S.R. would be neutral (this was clearly hinted at in the Pravda article of September 21st, 1938, and Molotov's speech of November 6th, 1938, and became an alternative policy to (i) after March 1939, though the choice was not finally made till August 1939); and (3) a war between Germany and the western powers with Germany allied to the U.S.S.R. (this never became a specific aim of Soviet policy, though the discovery that a price could be obtained from Hitler for Soviet neutrality made the U.S.S.R. a de facto, though non-belligerent, partner of Germany from August 1939 till, at any rate, the summer of 1940)., see E. H. Carr., From Munich to Moscow. I., Soviet Studies, Vol. 1, No. 1, (Jun., 1949), pp. 3–17. Taylor & Francis, Ltd. - This view is disputed by Werner Maser and Dmitri Volkogonov - Yuly Kvitsinsky. Russia-Germany: memoirs of the future, Moscow, 2008 ISBN 5-89935-087-3 p.95 - Watson 2000, pp. 695–722 - Shirer, William L., The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, 1990 ISBN 0-671-72868-7, pages 541 - Roberts 2006, p. 43 - Sanford, George (2005). Katyn and the Soviet Massacre Of 1940: Truth, Justice And Memory. London, New York: Routledge. ISBN 0-415-33873-5. - Wettig 2008, p. 20 - Roberts 2006, p. 37 - Roberts 2006, p. 45 - Kennedy-Pipe, Caroline, Stalin's Cold War, New York : Manchester University Press, 1995, ISBN 0-7190-4201-1 - Roberts 2006, p. 52 - Mosier, John, The Blitzkrieg Myth: How Hitler and the Allies Misread the Strategic Realities of World War II, HarperCollins, 2004, ISBN 0-06-000977-2, page 88 - Roberts 2006, p. 53 - Senn, Alfred Erich, Lithuania 1940 : revolution from above, Amsterdam, New York, Rodopi, 2007 ISBN 978-90-420-2225-6 - Simon Sebag Montefiore. Stalin: The Court of the Red Tsar. p. 334. - Wettig 2008, p. 21 - Brackman 2001, p. 341 - Roberts 2006, p. 58 - Brackman 2001, p. 343 - Roberts 2006, p. 59 - Roberts 2006, p. 63 - Roberts 2006, p. 66 - Roberts 2006, p. 82 - Roberts 2006, p. 67 - Ferguson, Niall (2005-06-12). "Stalin's Intelligence". The New York Times. Retrieved 2010-05-07. - Roberts 2006, p. 68 - Murphy 2006, p. xv - Roberts 2006, p. 69 - Roberts 2006, p. 70 - see e.g. Teddy J. Uldricks. "The Icebreaker Controversy: Did Stalin Plan to Attack Hitler?" Slavic Review, Vol. 58, No. 3 (Autumn, 1999), pp. 626-643. Stable URL: http://www.jstor.org/stable/2697571 or Gabriel Gorodetsky. Grand Delusion: Stalin and the German Invasion of Russia p. 5. Published by Yale University Press, 2001. ISBN 0-300-08459-5 - Simon Sebag Montefiore. 
Stalin: The Court of the Red Tsar, Knopf, 2004 (ISBN 1-4000-4230-5) - Roberts 2006, p. 89 - Roberts 2006, p. 90 - Roberts 2006, p. 85 - Roberts 2006, p. 97 - Roberts 2006, pp. 99–100 - Roberts 2006, pp. 116–7 - Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001, page 7 - Roberts 2006, p. 106 - Roberts 2006, pp. 114–115 - Roberts 2006, p. 110 - Roberts 2006, p. 108 - Roberts 2006, p. 88 - Roberts 2006, p. 112 - Roberts 2006, p. 122 - Roberts 2006, pp. 124–5 - Roberts 2006, pp. 117–8 - Roberts 2006, p. 126 - Roberts 2006, pp. 135–140 - Roberts 2006, p. 128 - Roberts 2006, p. 134 - Сталинградская битва - Roberts 2006, p. 154 - (Radzinsky 1996, p.472-3) - Roberts 2006, p. 155 - Roberts 2006, pp. 156–7 - McCarthy, Peter, Panzerkrieg: The Rise and Fall of Hitler's Tank Divisions, Carroll & Graf Publishers, 2003, ISBN 0-7867-1264-3, page 196 - Russian Central Military Archive TsAMO, f. (16 VA), f.320, op. 4196, d.27, f.370, op. 6476, d.102, ll.6, 41, docs from the Russian Military Archive in Podolsk. Loss records for 17 VA are incomplete. It records 201 losses for 5–8 July. From 1–31 July it reported the loss of 244 (64 in air-to-air combat, 68 to AAA fire. It reports a further 108 missing on operations and four lost on the ground. 2 VA lost 515 aircraft missing or due to unknown/unrecorded reasons, a further 41 in aerial combat and a further 31 to AAA fire, between 5–18 July 1943. Furthermore, another 1,104 Soviet aircraft were lost between 12 July and 18 August. Bergström, Christer (2007). Kursk — The Air Battle: July 1943. Chervron/Ian Allen. ISBN 978-1-903223-88-8, page 221. - Roberts 2006, p. 159 - Roberts 2006, p. 163 - Roberts 2006, pp. 164–5 - Roberts 2006, pp. 165–7 - Roberts 2006, p. 180 - Roberts 2006, p. 181 - Roberts 2006, p. 185 - Roberts 2006, pp. 186–7 - Roberts 2006, pp. 194–5 - Roberts 2006, pp. 199–201 - Williams, Andrew, D-Day to Berlin. Hodder, 2005, ISBN 0-340-83397-1, page 213 - Roberts 2006, pp. 202–3 - Roberts 2006, pp. 205–7 - Roberts 2006, pp. 208–9 - Roberts 2006, pp. 214–5 - Roberts 2006, pp. 216–7 - Wettig 2008, p. 49 - Roberts 2006, pp. 218–21 - Erickson, John, The Road to Berlin, Yale University Press, 1999 ISBN 0-300-07813-7, page 396-7. - Duffy, C., Red Storm on the Reich: The Soviet March on Germany 1945, Routledge, 1991, ISBN 0-415-22829-8 - Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001 - Beevor, Antony, Berlin: The Downfall 1945, Viking, Penguin Books, 2005, ISBN 0-670-88695-5, page 194 - Williams, Andrew (2005). D-Day to Berlin. Hodder. ISBN 0-340-83397-1., page 310-1 - Erickson, John, The Road to Berlin, Yale University Press, 1999 ISBN 0-300-07813-7, page 554 - Beevor, Antony, Berlin: The Downfall 1945, Viking, Penguin Books, 2005, ISBN 0-670-88695-5, page 219 - Ziemke, Earl F (1969), Battle for Berlin End of the Third Reich Ballantine's Illustrated History of World War II (Battle Book #6), Ballantine Books, page 71 - Ziemke, Earl F, Battle For Berlin: End Of The Third Reich, NY:Ballantine Books, London:Macdomald & Co, 1969, pages 92-94 - Beevor, Antony, Revealed" Hitler's Secret Bunkers (2008) - Bullock, Alan, Hitler: A Study in Tyranny, Penguin Books, ISBN 0-14-013564-2, 1962, pages 799-800 - Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001, pages 91-93 - Kershaw, Ian, Hitler, 1936-1945: Nemesis, W. W. 
Norton & Company, 2001, ISBN 0-393-32252-1, pages 1038-39 - Dolezal, Robert, Truth about History: How New Evidence Is Transforming the Story of the Past, Readers Digest, 2004, ISBN 0-7621-0523-2, page 185-6 - Eberle, Henrik, Matthias Uhl and Giles MacDonogh, The Hitler Book: The Secret Dossier Prepared for Stalin from the Interrogations of Hitler's Personal Aides, PublicAffairs, 2006, ISBN 1-58648-456-7. A reprint of one of only two existing copies. This copy was Nikita Khrushchev's, and was deposited in the Moscow Party archives where it was later found by Henrik Eberle and Matthias Uhl, and made public for the first time in 2006. As of 2006, the only other known copy is in kept in a safe by Vladimir Putin. - Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001, page 13 - Roberts 2006, pp. 4–5 - (Polish) obozy jenieckie zolnierzy polskich (Prison camps for Polish soldiers) Encyklopedia PWN. Last accessed on 28 November 2006. - (Polish) Edukacja Humanistyczna w wojsku. 1/2005. Dom wydawniczy Wojska Polskiego. ISNN 1734-6584. (Official publication of the Polish Army) - (Russian) Молотов на V сессии Верховного Совета 31 октября цифра «примерно 250 тыс.» (Please provide translation of the reference title and publication data and means) - (Russian) Отчёт Украинского и Белорусского фронтов Красной Армии Мельтюхов, с. 367. (Please provide translation of the reference title and publication data and means) - Fischer, Benjamin B., "The Katyn Controversy: Stalin's Killing Field", Studies in Intelligence, Winter 1999-2000. - Excerpt from the minutes No. 13 of the Politburo of the Central Committee meeting, shooting order of March 5, 1940 online, last accessed on 19 December 2005, original in Russian with English translation - Sanford, Google Books, p. 20-24. - "Stalin's Killing Field" (PDF). Retrieved 2008-07-19. - Parrish, Michael (1996). The Lesser Terror: Soviet state security, 1939–1953. Westport, CT: Praeger Press. pp. 324–325. ISBN 0-275-95113-8. - Montefiore, Simon Sebag (2005-09-13). Stalin: The Court of the Red Tsar. New York: Vintage Books. pp. 197–8, 332–4. ISBN 978-1-4000-7678-9. - Katyn executioners named Gazeta Wyborcza. December 15, 2008 - (Polish) Various authors. Biuletyn „Kombatant” nr specjalny (148) czerwiec 2003 Special Edition of Kombatant Bulletin No.148 6/2003 on the occasion of the Year of General Sikorski. Official publication of the Polish government Agency of Combatants and Repressed - Ромуальд Святек, "Катынский лес", Военно-исторический журнал, 1991, №9, ISSN 0042-9058 - Brackman 2001 - (Polish) Barbara Polak (2005). "Zbrodnia katynska" (pdf). Biuletyn IPN: 4–21. Retrieved 2007-09-22. - Engel, David. " Facing a Holocaust: The Polish Government-In-Exile and the Jews, 1943–1945]". 1993. ISBN 0-8078-2069-5. - Bauer, Eddy. "The Marshall Cavendish Illustrated Encyclopedia of World War II". Marshall Cavendish, 1985 - Goebbels, Joseph. The Goebbels Diaries (1942–1943). Translated by Louis P. Lochner. Doubleday & Company. 1948 - "CHRONOLOGY 1990; The Soviet Union and Eastern Europe." Foreign Affairs, 1990, pp. 212. - Text of Order No. 270 - Roberts 2006, p. 98 - Robert Gellately. Lenin, Stalin, and Hitler: The Age of Social Catastrophe. Knopf, 2007 ISBN 1-4000-4005-1 p. 391 - Anne Applebaum. Gulag: A History, Doubleday, 2003 (ISBN 0-7679-0056-1) - Richard Rhodes (2002). Masters of Death: The SS-Einsatzgruppen and the Invention of the Holocaust. New York: Alfred A. Knopf. pp. 46–47. ISBN 0-375-40900-9. See also: Allen Paul. 
Katyn: Stalin’s Massacre and the Seeds of Polish Resurrection, Naval Institute Press, 1996, (ISBN 1-55750-670-1), p. 155 - Roberts 2006, p. 132 - G. I. Krivosheev. Soviet Casualties and Combat Losses. Greenhill 1997 ISBN 1-85367-280-7 - Catherine Merridale. Ivan's War: Life and Death in the Red Army, 1939-1945. Page 158. Macmillan, 2006. ISBN 0-8050-7455-4 - Schissler, Hanna The Miracle Years: A Cultural History of West Germany, 1949-1968 - Mark, James, "Remembering Rape: Divided Social Memory and the Red Army in Hungary 1944-1945", Past & Present — Number 188, August 2005, page 133 - Naimark, Norman M., The Russians in Germany: A History of the Soviet Zone of Occupation, 1945-1949. Cambridge: Belknap, 1995, ISBN 0-674-78405-7, pages 70-71 - Beevor, Antony, Berlin: The Downfall 1945, Penguin Books, 2002, ISBN 0-670-88695-5. Specific reports also include Report of the Swiss legation in Budapest of 1945 and Hubertus Knabe: Tag der Befreiung? Das Kriegsende in Ostdeutschland (A day of liberation? The end of war in Eastern Germany), Propyläen 2005, ISBN 3-549-07245-7 German). - Urban, Thomas, Der Verlust, Verlag C. H. Beck 2004, ISBN 3-406-54156-9, page 145 - Beevor, Antony, Berlin: The Downfall 1945, Viking, Penguin Books, 2005, ISBN 0-670-88695-5 - Buske, Norbert (Hg.): Das Kriegsende in Demmin 1945. Berichte Erinnerungen Dokumente (Landeszentrale für politische Bildung Mecklenburg-Vorpommern. Landeskundliche Hefte), Schwerin 1995 - Wolfgang Leonhard, Child of the Revolution ,Pathfinder Press, 1979, ISBN 0-906133-26-2 - Norman M. Naimark. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945-1949. Harvard University Press, 1995. ISBN 0-674-78405-7 - Wolfgang Leonhard, Child of the Revolution, Pathfinder Press, 1979, ISBN 0-906133-26-2. - Richard Overy, The Dictators Hitler's Germany, Stalin's Russia p.568–569 - (“Военно-исторический журнал” (“Military-Historical Magazine”), 1997, №5. page 32) - Земское В.Н. К вопросу о репатриации советских граждан. 1944-1951 годы // История СССР. 1990. № 4 (Zemskov V.N. On repatriation of Soviet citizens. Istoriya SSSR., 1990, No.4 - Walter Scott Dunn (1995). The Soviet Economy and the Red Army, 1930-1945. Greenwood. p. 34. - John Barber and Mark Harrison, The Soviet Home Front, 1941-1945: a social and economic history of the USSR in World War II (Longman, 1991), 77, 81, 85-6. - Barber and Harrison, The Soviet Home Front, 1941-1945 91-93. - Robert Forczyk (2009). Leningrad 1941-44: The epic siege. Osprey. - Barber and Harrison, The Soviet Home Front, 1941-1945 pp 86-7. - Richard Bidlack; Nikita Lomagin (26 June 2012). The Leningrad Blockade, 1941-1944: A New Documentary History from the Soviet Archives. Yale U.P. p. 406. - Bidlack, “Survival Strategies in Leningrad pp 90-94. - Bidlack, “Survival Strategies in Leningrad p 97. - Bidlack, “Survival Strategies in Leningrad p 98 - Brackman, Roman (2001), The Secret File of Joseph Stalin: A Hidden Life, Frank Cass Publishers, ISBN 0-7146-5050-1 - Brent, Jonathan; Naumov, Vladimir (2004), Stalin's Last Crime: The Plot Against the Jewish Doctors, 1948-1953, HarperCollins, ISBN 0-06-093310-0 - Henig, Ruth Beatrice (2005), The Origins of the Second World War, 1933-41, Routledge, ISBN 0-415-33262-1 - Lewkowicz Nicolas, The German Question and the Origins of the Cold War (IPOC, Milan) (2008) [ISBN 8895145275] - Merridale, Catherine (2007). Ivan's War: Life and Death in the Red Army, 1939-1945. Macmillan. ISBN 978-0-312-42652-1. - Murphy, David E. 
(2006), What Stalin Knew: The Enigma of Barbarossa, Yale University Press, ISBN 0-300-11981-X - Nekrich, Aleksandr Moiseevich; Ulam, Adam Bruno; Freeze, Gregory L. (1997), Pariahs, Partners, Predators: German-Soviet Relations, 1922-1941, Columbia University Press, ISBN 0-231-10676-9 - Roberts, Geoffrey (2006), Stalin's Wars: From World War to Cold War, 1939–1953, Yale University Press, ISBN 0-300-11204-1 - Roberts, Geoffrey (2002), Stalin, the Pact with Nazi Germany, and the Origins of Postwar Soviet Diplomatic Historiography 4 (4) - Roberts, Geoffrey (1992), "The Soviet Decision for a Pact with Nazi Germany", Soviet Studies 55 (2) - Soviet Information Bureau (1948), Falsifiers of History (Historical Survey), Moscow: Foreign Languages Publishing House, 272848 - Department of State (1948), Nazi-Soviet Relations, 1939–1941: Documents from the Archives of The German Foreign Office, Department of State - Taubert, Fritz (2003), The Myth of Munich, Oldenbourg Wissenschaftsverlag, ISBN 3-486-56673-3 - Watson, Derek (2000), "Molotov's Apprenticeship in Foreign Policy: The Triple Alliance Negotiations in 1939", Europe-Asia Studies 52 (4) - Wettig, Gerhard (2008), Stalin and the Cold War in Europe, Rowman & Littlefield, ISBN 0-7425-5542-9 - Abramov, Vladimir K. "Mordovia During the Second World War," Journal of Slavic Military Studies (2008) 21#2 pp 291-363. - Annaorazov, Jumadurdy. "Turkmenistan during the Second World War," Journal of Slavic Military Studies (2012) 25#1 pp 53-64. - Barber, John, and Mark Harrison. The Soviet Home Front: A Social and Economic History of the USSR in World War II, Longman, 1991. - Berkhoff, Karel C. Harvest of Despair: Life and Death in Ukraine Under Nazi Rule. Harvard U. Press, 2004. 448 pp. - Braithwaite, Rodric. Moscow 1941: A City and Its People at War (2006) - Thurston, Robert W., and Bernd Bonwetsch (Eds). The People's War: Responses to World War II in the Soviet Union (2000) - Dallin, Alexander. Odessa, 1941-1944: A Case Study of Soviet Territory under Foreign Rule. Portland: Int. Specialized Book Service, 1998. 296 pp. - Ellmana, Michael, and S. Maksudovb. "Soviet deaths in the great patriotic war: A note," Europe-Asia Studies (1994) 46#4 pp 671-680 DOI: 10.1080/09668139408412190 - Glantz, David M. (2001). The Siege of Leningrad, 1941-1944: 900 Days of Terror. Zenith. ISBN 978-0-7603-0941-4. - Hill, Alexander. "British Lend-Lease Aid and the Soviet War Effort, June 1941-June 1942," Journal of Military History (2007) 71#3 pp 773-808. - Overy, Richard. Russia's War: A History of the Soviet Effort: 1941-1945 (1998) 432pp excerpt and txt search - Reese, Roger R. "Motivations to Serve: The Soviet Soldier in the Second World War," Journal of Slavic Military Studies (2007) 10#2 pp 263-282. - Thurston, Robert W. and Bernd Bonwetsch (2000). The People's War: Responses to World War II in the Soviet Union. U. of Illinois Press. ISBN 978-0-252-02600-3. - Vallin, Jacques; Meslé, France; Adamets, Serguei; and Pyrozhkov, Serhii. "A New Estimate of Ukrainian Population Losses During the Crises of the 1930s and 1940s." Population Studies (2002) 56(3): 249-264. in JSTOR Reports life expectancy at birth fell to a level as low as ten years for females and seven for males in 1933 and plateaued around 25 for females and 15 for males in the period 1941-44. Primary sources - Bidlack, Richard, and Nikita Lomagin, eds. The Leningrad Blockade, 1941-1944: A New Documentary History from the Soviet Archives. Yale U.P. - Hill, Alexander, ed. 
The Great Patriotic War of the Soviet Union, 1941-45: A Documentary Reader (2011) 368pp
http://en.wikipedia.org/wiki/Soviet_Union_in_World_War_II
Isaac Newton first published his three laws of motion in 1687 in his monumental Mathematical Principles of Natural Philosophy. In these three simple laws, Newton sums up everything there is to know about dynamics. This achievement is just one of the many reasons why he is considered one of the greatest physicists in history. While a multiple-choice exam can't ask you to write down each law in turn, there is a good chance you will encounter a problem where you are asked to choose which of Newton's laws best explains a given physical process. You will also be expected to make simple calculations based on your knowledge of these laws. But by far the most important reason for mastering Newton's laws is that, without them, thinking about dynamics is impossible. For that reason, we will dwell at some length on describing how these laws work qualitatively.

Newton's First Law

Newton's First Law describes how forces relate to motion:

An object at rest remains at rest, unless acted upon by a net force. An object in motion remains in motion, unless acted upon by a net force.

A soccer ball standing still on the grass does not move until someone kicks it. An ice hockey puck will continue to move with the same velocity until it hits the boards, or someone else hits it. Any change in the velocity of an object is evidence of a net force acting on that object. A world without forces would be much like the images we see of the insides of spaceships, where astronauts, pens, and food float eerily about.

Remember, since velocity is a vector quantity, a change in velocity can be a change either in the magnitude or the direction of the velocity vector. A moving object upon which no net force is acting doesn't just maintain a constant speed—it also moves in a straight line.

But what does Newton mean by a net force? The net force is the sum of the forces acting on a body. Newton is careful to use the phrase "net force," because an object at rest will stay at rest if acted upon by forces with a sum of zero. Likewise, an object in motion will retain a constant velocity if acted upon by forces with a sum of zero. Consider our previous example of you and your evil roommate pushing with equal but opposite forces on a box. Clearly, force is being applied to the box, but the two forces on the box cancel each other out exactly: F + –F = 0. Thus the net force on the box is zero, and the box does not move. Yet if your other, good roommate comes along and pushes alongside you with a force R, then the tie will be broken and the box will move. The net force is equal to:

F + (–F) + R = R

Note that the acceleration, a, and the velocity of the box, v, are in the same direction as the net force.

The First Law is sometimes called the law of inertia. We define inertia as the tendency of an object to remain at a constant velocity, or its resistance to being accelerated. Inertia is a fundamental property of all matter and is important to the definition of mass.

Newton's Second Law

To understand Newton's Second Law, you must understand the concept of mass. Mass is an intrinsic scalar quantity: it has no direction and is a property of an object, not of the object's location. Mass is a measurement of a body's inertia, or its resistance to being accelerated. The words mass and matter are related: a handy way of thinking about mass is as a measure of how much matter there is in an object, how much "stuff" it's made out of. Although in everyday language we use the words mass and weight interchangeably, they refer to two different, but related, quantities in physics.
We will expand upon the relation between mass and weight later in this chapter, after we have finished our discussion of Newton's laws. We already have some intuition from everyday experience as to how mass, force, and acceleration relate. For example, we know that the more force we exert on a bowling ball, the faster it will roll. We also know that if the same force were exerted on a basketball, the basketball would move faster than the bowling ball because the basketball has less mass. This intuition is quantified in Newton's Second Law:

F = ma

Stated verbally, Newton's Second Law says that the net force, F, acting on an object causes the object to accelerate, a. Since F = ma can be rewritten as a = F/m, you can see that the magnitude of the acceleration is directly proportional to the net force and inversely proportional to the mass, m. Both force and acceleration are vector quantities, and the acceleration of an object will always be in the same direction as the net force. The unit of force is defined, quite appropriately, as a newton (N). Because acceleration is given in units of m/s² and mass is given in units of kg, Newton's Second Law implies that 1 N = 1 kg · m/s². In other words, one newton is the force required to accelerate a one-kilogram body by one meter per second, each second.

Newton's Second Law in Two Dimensions

With a problem that deals with forces acting in two dimensions, the best thing to do is to break each force vector into its x- and y-components. This will give you two equations instead of one:

F_x = ma_x and F_y = ma_y

The component form of Newton's Second Law tells us that the component of the net force in the x direction is directly proportional to the resulting component of the acceleration in the x direction, and likewise for the y direction.

Newton's Third Law

Newton's Third Law has become a cliché. The Third Law tells us that:

To every action, there is an equal and opposite reaction.

What this tells us in physics is that every push or pull produces not one, but two forces. In any exertion of force, there will always be two objects: the object exerting the force and the object on which the force is exerted. Newton's Third Law tells us that when object A exerts a force F on object B, object B will exert a force –F on object A. When you push a box forward, you also feel the box pushing back on your hand. If Newton's Third Law did not exist, your hand would feel nothing as it pushed on the box, because there would be no reaction force acting on it. Anyone who has ever played around on skates knows that when you push forward on the wall of a skating rink, you recoil backward. Newton's Third Law tells us that the force the skater exerts on the wall is exactly equal in magnitude and opposite in direction to the force the wall exerts on the skater. The harder the skater pushes on the wall, the harder the wall will push back, sending the skater sliding backward.

Newton's Third Law at Work

Here are three other examples of Newton's Third Law at work, variations of which often pop up on SAT II Physics:
- You push down with your hand on a desk, and the desk pushes upward with a force equal in magnitude to your push.
- A brick is in free fall. The brick pulls the Earth upward with the same force that the Earth pulls the brick downward.
- When you walk, your feet push the Earth backward. In response, the Earth pushes your feet forward, which is the force that moves you on your way.

The second example may seem odd: the Earth doesn't move upward when you drop a brick.
But recall Newton’s Second Law: the acceleration of an object is inversely proportional to its mass (a = F/m). The Earth is about 1024 times as massive as a brick, so the brick’s downward acceleration of –9.8 m/s2 is about 1024 times as great as the Earth’s upward acceleration. The brick exerts a force on the Earth, but the effect of that force is insignificant.
http://www.sparknotes.com/testprep/books/sat2/physics/chapter6section2.rhtml
Circles are simple shapes of Euclidean geometry. A circle consists of those points in a plane which are at a constant distance, called the radius, from a fixed point, called the center. A chord of a circle is a line segment both of whose endpoints lie on the circle. A diameter is a chord passing through the center. The length of a diameter is twice the radius, and a diameter is the longest chord in a circle. Circles are simple closed curves which divide the plane into an interior and an exterior. The circumference of a circle is the perimeter of the circle, and the interior of the circle is called a disk. An arc is any connected part of a circle. A circle is a special ellipse in which the two foci are coincident. Circles are conic sections attained when a right circular cone is intersected with a plane perpendicular to the axis of the cone.

Many students find circles difficult. They feel overwhelmed with circles homework, tests, and projects. And it is not always easy to find a circles tutor who is both good and affordable. Now finding circles help is easy. For your circles homework, circles tests, circles projects, and circles tutoring needs, TuLyn is a one-stop solution. You can master hundreds of math topics by using TuLyn. At TuLyn, we have over 2000 math video tutorial clips including circles videos, circles practice word problems, circles questions and answers, and circles worksheets.

Our circles videos replace text-based tutorials and give you better step-by-step explanations of circles. Watch each video repeatedly until you understand how to approach circles problems and how to solve them.

- Hundreds of video tutorials on circles make it easy for you to better understand the concept.
- Hundreds of word problems on circles give you all the practice you need.
- Hundreds of printable worksheets on circles let you practice what you have learned by watching the video tutorials.

How to do better on circles: TuLyn makes circles easy. Do you need help with Radius, Diameter, Circumference, Circle Equation, or Area of Circles in your Geometry class?

Geometry: Circles Videos

Area Of a Circle When Diameter is Given
Video Clip Length: 5 minutes 9 seconds
Video Clip Views:
This tutorial will teach you how to find the area of a circle when given the diameter. You will also learn how to use the given information, the diameter, to figure out what the radius will be in order to solve for the area. You also learn to multiply decimals with various decimal places.

Geometry: Circles Word Problems

A circle with radius 5
A circle with radius 5 has its center at (4, 1). The line x − 2y + 4 = 0 intersects the circle. Find the intersections.

The volume of the soda can
The volume of the soda can is fixed at 400 cubic centimeters. Use the volume with each radius to find the possible heights of different sized soda cans. Once the height column is completed, calculate the surface areas. The results for radius are already given. Round the height to the nearest tenth and the surface area to the nearest whole number. The given radii are: 1 cm, 2 cm, 3 cm, 4 cm, 5 cm, 6 cm, 7 ...

Geometry: Circles Practice Questions

What is the circumference and area if the radius of a circle is 23 cm?
What is the radius? Area = 28 in. ...
What is the circumference of a circle with a diameter of 18 cm?
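The soda-can problem reduces to two formulas: from V = πr²h the height is h = V/(πr²), and the surface area of a closed cylinder is SA = 2πr² + 2πrh. A short Python sketch (the variable names are ours, not TuLyn's) tabulates the requested values and also answers the practice questions:

```python
import math

V = 400.0  # fixed volume of the can, in cubic centimeters

print(f"{'r (cm)':>6} {'h (cm)':>8} {'SA (cm^2)':>10}")
for r in range(1, 8):                                # the given radii: 1 cm .. 7 cm
    h = V / (math.pi * r ** 2)                       # height from V = pi r^2 h
    sa = 2 * math.pi * r ** 2 + 2 * math.pi * r * h  # closed cylinder: two disks + side
    print(f"{r:>6} {h:>8.1f} {sa:>10.0f}")

# The practice questions use the same basic formulas:
r = 23.0
print(f"r = 23 cm: circumference = {2 * math.pi * r:.1f} cm, area = {math.pi * r ** 2:.1f} cm^2")
d = 18.0
print(f"d = 18 cm: circumference = {math.pi * d:.1f} cm")
```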
http://www.tulyn.com/geometry/circles
In geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles are usually presumed to be in a Euclidean plane or in Euclidean space, but are also defined in non-Euclidean geometries. In particular, in spherical geometry, the spherical angles are defined using arcs of great circles instead of rays.

Angle is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation.

The word angle comes from the Latin word angulus, meaning "a corner". The word angulus is a diminutive, of which the primitive form, angus, does not occur in Latin. Cognate words are the Greek ἀγκύλος (ankylos), meaning "crooked, curved," and the English word "ankle". Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow".

Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. According to Proclus an angle must be either a quality or a quantity, or a relationship. The first concept was used by Eudemus, who regarded an angle as a deviation from a straight line; the second by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third concept, although his definitions of right, acute, and obtuse angles are certainly quantitative.

Measuring angles

The size of a geometric angle is usually characterized by the magnitude of the smallest rotation that maps one of the rays into the other. Angles that have the same size are sometimes called congruent angles.

In some contexts, such as identifying a point on a circle or describing the orientation of an object in two dimensions relative to a reference orientation, angles that differ by an exact multiple of a full turn are effectively equivalent. In other contexts, such as identifying a point on a spiral curve or describing the cumulative rotation of an object in two dimensions relative to a reference orientation, angles that differ by a non-zero multiple of a full turn are not equivalent.

In order to measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g. with a pair of compasses. The length of the arc s is then divided by the radius of the arc r, and possibly multiplied by a scaling constant k (which depends on the units of measurement that are chosen):

θ = k · (s/r)

The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed then the arc length changes in the same proportion, so the ratio s/r is unaltered.

Units used to represent angles are listed below in descending magnitude order. Of these units, the degree and the radian are by far the most commonly used. Angles expressed in radians are dimensionless for the purposes of dimensional analysis. Most units of angular measurement are defined such that one turn (i.e. one full circle) is equal to n units, for some whole number n. The two exceptions are the radian and the diameter part. For example, in the case of degrees, n = 360. A turn of n units is obtained by setting k = n/(2π) in the formula above. (Proof. The formula above can be rewritten as k = θr/s.
One turn, for which θ = n units, corresponds to an arc equal in length to the circle's circumference, which is 2πr, so s = 2πr. Substituting n for θ and 2πr for s in the formula results in k = nr/(2πr) = n/(2π).)

- The cycle (or turn, full circle, revolution, or rotation) is one full circle. A turn can be subdivided into centiturns and milliturns. A turn is abbreviated rev or rot depending on the application, but just r in rpm (revolutions per minute). 1 turn = 360° = 2π rad = 400 grad = 4 right angles.
- The quadrant is 1/4 of a turn, i.e. a right angle. It is the unit used in Euclid's Elements. 1 quad. = 90° = π/2 rad = 1/4 turn = 100 grad. In German the symbol ∟ has been used to denote a quadrant.
- The sextant (angle of the equilateral triangle) is 1/6 of a turn. It was the unit used by the Babylonians, and is especially easy to construct with ruler and compasses. The degree, minute of arc and second of arc are sexagesimal subunits of the Babylonian unit. 1 Babylonian unit = 60° = π/3 rad ≈ 1.047197551 rad.
- The radian is the angle subtended by an arc of a circle that has the same length as the circle's radius (k = 1 in the formula given earlier). One turn is 2π radians, and one radian is 180/π degrees, or about 57.2958 degrees. The radian is abbreviated rad, though this symbol is often omitted in mathematical texts, where radians are assumed unless specified otherwise. When radians are used, angles are considered dimensionless. The radian is used in virtually all mathematical work beyond simple practical geometry, due, for example, to the pleasing and "natural" properties that the trigonometric functions display when their arguments are in radians. The radian is the (derived) unit of angular measurement in the SI system.
- The diameter part (occasionally used in Islamic mathematics) is 1/60 radian. One "diameter part" is approximately 0.95493°.
- The astronomical hour angle is 1/24 of a turn. Since this system is amenable to measuring objects that cycle once per day (such as the relative position of stars), the sexagesimal subunits are called minute of time and second of time. Note that these are distinct from, and 15 times larger than, minutes and seconds of arc. 1 hour = 15° = π/12 rad = 1/6 quad. = 1/24 turn ≈ 16.667 grad.
- The point, used in navigation, is 1/32 of a turn. 1 point = 1/8 of a right angle = 11.25° = 12.5 grad. Each point is subdivided into four quarter-points so that 1 turn equals 128 quarter-points.
- The binary degree, also known as the binary radian (or brad), is 1/256 of a turn. The binary degree is used in computing so that an angle can be efficiently represented in a single byte (albeit to limited precision). Other measures of angle used in computing may be based on dividing one whole turn into 2^n equal parts for other values of n.
- The degree, denoted by a small superscript circle (°), is 1/360 of a turn, so one turn is 360°. One advantage of this old sexagesimal subunit is that many angles common in simple geometry are measured as a whole number of degrees. Fractions of a degree may be written in normal decimal notation (e.g. 3.5° for three and a half degrees), but the "minute" and "second" sexagesimal subunits of the "degree-minute-second" system are also in use, especially for geographical coordinates and in astronomy and ballistics.
- The grad, also called grade, gradian, or gon, is 1/400 of a turn, so a right angle is 100 grads. It is a decimal subunit of the quadrant.
A kilometre was historically defined as a centi-grad of arc along a great circle of the Earth, so the kilometre is the decimal analog to the sexagesimal nautical mile. The grad is used mostly in triangulation.

- The mil is approximately equal to a milliradian. There are several definitions ranging from 0.05625 to 0.06 degrees (3.375 to 3.6 minutes), with the milliradian being approximately 0.05729578 degrees (3.43775 minutes). In NATO countries, it is defined as 1/6400th of a circle. Its value is approximately equal to the angle subtended by a width of 1 metre as seen from 1 km away (2π/6400 = 0.0009817… ≈ 1/1000).
- The minute of arc (or MOA, arcminute, or just minute) is 1/60 of a degree = 1/21600 turn. It is denoted by a single prime ( ′ ). For example, 3° 30′ is equal to 3 + 30/60 degrees, or 3.5 degrees. A mixed format with decimal fractions is also sometimes used, e.g. 3° 5.72′ = 3 + 5.72/60 degrees. A nautical mile was historically defined as a minute of arc along a great circle of the Earth.
- The second of arc (or arcsecond, or just second) is 1/60 of a minute of arc and 1/3600 of a degree. It is denoted by a double prime ( ″ ). For example, 3° 7′ 30″ is equal to 3 + 7/60 + 30/3600 degrees, or 3.125 degrees.
- The hexacontade is a unit of 6° that Eratosthenes used, so that a whole turn was divided into 60 units.
- The Babylonians sometimes used the unit pechus of about 2° or 2½°.

Positive and negative angles

Although the definition of the measurement of an angle does not support the concept of a negative angle, it is frequently useful to impose a convention that allows positive and negative angular values to represent orientations and/or rotations in opposite directions relative to some reference.

In a two-dimensional Cartesian coordinate system, an angle is typically defined by its two sides, with its vertex at the origin. The initial side is on the positive x-axis, while the other side, or terminal side, is defined by the measure from the initial side in radians, degrees, or turns, with positive angles representing rotations toward the positive y-axis and negative angles representing rotations toward the negative y-axis. When Cartesian coordinates are represented by standard position, defined by the x-axis rightward and the y-axis upward, positive rotations are anticlockwise and negative rotations are clockwise.

In many contexts, an angle of −θ is effectively equivalent to an angle of "one full turn minus θ". For example, an orientation represented as −45° is effectively equivalent to an orientation represented as 360° − 45°, or 315°. However, a rotation of −45° would not be the same as a rotation of 315°.

In three-dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined relative to some reference, which is typically a vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie.

In navigation, bearings are measured relative to north. By convention, viewed from above, bearing angles are positive clockwise, so a bearing of 45° corresponds to a north-east orientation. Negative bearings are not used in navigation, so a north-west orientation corresponds to a bearing of 315°.

Alternative ways of measuring the size of an angle

There are several alternatives to measuring the size of an angle by the corresponding angle of rotation. The grade of a slope, or gradient, is equal to the tangent of the angle, or sometimes the sine; a short numeric sketch follows below.
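As a quick sanity check on these definitions, the sketch below (a minimal illustration; the unit table and function name are ours, assembled from the list above) applies the k = n/(2π) scaling from the formula above to a right angle, and evaluates the grade of a small slope:

```python
import math

# Units per full turn (the n in k = n / (2*pi)), taken from the list above.
UNITS_PER_TURN = {
    "turns": 1, "quadrants": 4, "sextants": 6, "hours": 24,
    "points": 32, "degrees": 360, "grads": 400, "NATO mils": 6400,
}

def measure(theta_rad, unit):
    """Express an angle given in radians in another unit via theta = k * s/r."""
    k = UNITS_PER_TURN[unit] / (2 * math.pi)
    return k * theta_rad

right_angle = math.pi / 2
for unit in UNITS_PER_TURN:
    print(f"right angle = {measure(right_angle, unit):g} {unit}")

# Grade of a slope: tan(angle), usually quoted as a percentage; for small
# angles it is close to the angle in radians (see the text that follows).
angle = math.radians(2.0)
print(f"2-degree slope: {math.tan(angle) * 100:.1f}% grade, {angle:.4f} rad")
```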
Gradients are often expressed as a percentage. For very small values (less than 5%), the grade of a slope is approximately the measure of the angle in radians.

In rational geometry the spread between two lines is defined as the square of the sine of the angle between the lines. Since the sine of an angle and the sine of its supplementary angle are the same, any angle of rotation that maps one of the lines into the other leads to the same value of the spread between the lines.

Astronomical approximations

Astronomers measure angular separation of objects in degrees from their point of observation.

- 1° is approximately the width of a little finger at arm's length.
- 10° is approximately the width of a closed fist at arm's length.
- 20° is approximately the width of a handspan at arm's length.

These measurements clearly depend on the individual subject, and the above should be treated as rough approximations only.

Identifying angles

In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, ...) to serve as variables standing for the size of some angle. (To avoid confusion with its other meaning, the symbol π is typically not used for this purpose.) Lower case roman letters (a, b, c, ...) are also used. See the figures in this article for examples.

In geometric figures, angles may also be identified by the labels attached to the three points that define them. For example, the angle at vertex A enclosed by the rays AB and AC (i.e. the lines from point A to point B and point A to point C) is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex ("angle A").

Potentially, an angle denoted, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C, the anticlockwise angle from B to C, the clockwise angle from C to B, or the anticlockwise angle from C to B, where the direction in which the angle is measured determines its sign (see Positive and negative angles). However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant, and no ambiguity arises. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise (positive) angle from B to C, and ∠CAB to the anticlockwise (positive) angle from C to B.

Types of angles

Individual angles

- Angles smaller than a right angle (less than 90°) are called acute angles ("acute" meaning "sharp").
- An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle. Two lines that form a right angle are said to be normal, orthogonal, or perpendicular.
- Angles larger than a right angle and smaller than a straight angle (between 90° and 180°) are called obtuse angles ("obtuse" meaning "blunt").
- Angles equal to 1/2 turn (180° or π radians) are called straight angles.
- Angles larger than a straight angle but less than 1 turn (between 180° and 360°) are called reflex angles.
- Angles equal to 1 turn (360° or 2π radians) are called full angles or a perigon.
- Angles that are not right angles or a multiple of a right angle are called oblique angles.

Equivalence angle pairs

- Angles that have the same measure (i.e. the same magnitude) are said to be equal or congruent. An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g. all right angles are congruent).
- Two angles that share a terminal side but differ in size by an integer multiple of a turn are called coterminal angles.
- A reference angle is the acute version of any angle, determined by repeatedly subtracting or adding 180 degrees, and subtracting the result from 180 degrees if necessary, until a value between 0 degrees and 90 degrees is obtained. For example, an angle of 30 degrees has a reference angle of 30 degrees, and an angle of 150 degrees also has a reference angle of 30 degrees (180 − 150). An angle of 750 degrees has a reference angle of 30 degrees (750 − 720).

Intersecting angle pairs

- Two angles opposite each other, formed by two intersecting straight lines that form an "X"-like shape, are called vertical angles or opposite angles or vertically opposite angles. These angles are equal in measure.
- Angles that share a common vertex and edge but do not share any interior points are called adjacent angles.
- Alternate angles, corresponding angles, interior angles, and exterior angles are associated with a transversal, where a pair of lines is crossed by a third.

Combining angle pairs

- Two angles that sum to one right angle (90°) are called complementary angles. The difference between an angle and a right angle is termed the complement of the angle.
- Two angles that sum to a straight angle (180°) are called supplementary angles. The difference between an angle and a straight angle (180°) is termed the supplement of the angle.
- Two angles that sum to one turn (360°) are called explementary angles or conjugate angles.
- An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. A concave simple polygon has at least one interior angle that exceeds 180°.
- In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, or 180°, or 1/2 turn; the measures of the interior angles of a simple quadrilateral add up to 2π radians, or 360°, or 1 turn. In general, the measures of the interior angles of a simple polygon with n sides add up to [(n − 2) × π] radians, or [(n − 2) × 180]°, or (2n − 4) right angles, or (n/2 − 1) turns.
- The angle supplementary to the interior angle is called the exterior angle. It measures the amount of rotation one has to make at this vertex to trace out the polygon. If the corresponding interior angle is a reflex angle, the exterior angle should be considered negative. Even in a non-simple polygon it may be possible to define the exterior angle, but one will have to pick an orientation of the plane (or surface) to decide the sign of the exterior angle measure.
- In Euclidean geometry, the sum of the exterior angles of a simple polygon will be one full turn (360°).
- Some authors use the name exterior angle of a simple polygon to simply mean the explementary angle (not the supplementary angle!) of the interior angle. This conflicts with the above usage.
- The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes.
- The angle between a plane and an intersecting straight line is equal to ninety degrees minus the angle between the intersecting line and the line that goes through the point of intersection and is normal to the plane.

Angles between curves

The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection.
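For example, y = x² and y = √x intersect at (1, 1), and the angle between the curves there is the angle between their tangent lines, computed from the derivatives. A minimal sketch (the curves are our example, not from the text):

```python
import math

# Angle between two curves at an intersection point = angle between tangents.
# Example curves: y = x^2 and y = sqrt(x), which meet at (1, 1).
slope1 = 2.0   # derivative of x^2 at x = 1
slope2 = 0.5   # derivative of sqrt(x) at x = 1, i.e. 1 / (2 * sqrt(1))

# Each tangent makes an angle atan(slope) with the x-axis; take the difference.
theta = abs(math.atan(slope1) - math.atan(slope2))
print(f"angle between the curves: {math.degrees(theta):.2f} degrees")  # ~36.87
```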
Various names (now rarely, if ever, used) have been given to particular cases: amphicyrtic (Gr. ἀμφί, on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave.

Dot product and generalisation

To define angles in an abstract real inner product space, we replace the Euclidean dot product ( · ) by the inner product ⟨·,·⟩, i.e.

cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖)

In a complex inner product space, the expression for the cosine above may give non-real values, so it is replaced with

cos θ = Re⟨u, v⟩ / (‖u‖ ‖v‖)

or, more commonly, using the absolute value, with

cos θ = |⟨u, v⟩| / (‖u‖ ‖v‖)

The latter definition ignores the direction of the vectors and thus describes the angle between the one-dimensional subspaces spanned by the vectors u and v correspondingly.

Angles between subspaces

The definition of the angle between one-dimensional subspaces span(u) and span(v) given by |⟨u, v⟩| = |cos θ| ‖u‖ ‖v‖ in a Hilbert space can be extended to subspaces of any finite dimension.

Angles in Riemannian geometry

In Riemannian geometry, the metric tensor is used to define the angle between two tangent vectors.

Angles in geography and astronomy

In geography, the location of any point on the Earth can be identified using a geographic coordinate system. This system specifies the latitude and longitude of any location in terms of angles subtended at the centre of the Earth, using the equator and (usually) the Greenwich meridian as references.

In astronomy, a given point on the celestial sphere (that is, the apparent position of an astronomical object) can be identified using any of several astronomical coordinate systems, where the references vary according to the particular system. Astronomers measure the angular separation of two stars by imagining two lines through the centre of the Earth, each intersecting one of the stars. The angle between those lines can be measured, and is the angular separation between the two stars.

Astronomers also measure the apparent size of objects as an angular diameter. For example, the full moon has an angular diameter of approximately 0.5°, when viewed from Earth. One could say, "The Moon's diameter subtends an angle of half a degree." The small-angle formula can be used to convert such an angular measurement into a distance/size ratio.

See also

- Angle bisector
- Angular velocity
- Argument (complex analysis)
- Astrological aspect
- Central angle
- Clock angle problem
- Complementary angles
- Great circle distance
- Hyperbolic angle
- Inscribed angle
- Irrational angle
- Solid angle, for a concept of angle in three dimensions
- Spherical angle
- Supplementary angles

References

- Sidorov, L. A. (2001), "Angle", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Slocum, Jonathan (2007), Preliminary Indo-European lexicon — Pokorny PIE data, University of Texas research department: linguistics research center, retrieved 2 February 2010
- Chisholm 1911; Heiberg 1908, pp. 177–178
- J. H. Jeans (1947), The Growth of Physical Science, p. 7; Francis Dominic Murnaghan (1946), Analytic Geometry, p. 2
- ooPIC Programmer's Guide (archived), www.oopic.com
- "Angles, integers, and modulo arithmetic", Shawn Hargreaves, blogs.msdn.com
- Weisstein, Eric W., "Exterior Angle", MathWorld
- Chisholm 1911; Heiberg 1908, p. 178
- Heiberg, Johan Ludvig (1908). Heath, T. L., ed. Euclid. The Thirteen Books of Euclid's Elements 1. Cambridge University Press.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Angle". Encyclopædia Britannica (11th ed.). Cambridge University Press.
External links

- Angle Bisectors in a Quadrilateral at cut-the-knot
- Constructing a triangle from its angle bisectors at cut-the-knot
- Angle Estimation – for basic astronomy
- Angle definition pages with interactive applets
- Various angle constructions with compass and straightedge
http://en.wikipedia.org/wiki/Angle
Precession of the Equinoxes In astronomy, axial precession is a gravity-induced, slow and continuous change in the orientation of an astronomical body's rotational axis. In particular, it refers to the gradual shift in the orientation of Earth's axis of rotation, which, like a wobbling top, traces out a pair of cones joined at their apices in a cycle of approximately 26,000 years (called a Great or Platonic Year in astrology). The term "precession" typically refers only to this largest secular motion; other changes in the alignment of Earth's axis - nutation and polar motion - are much smaller in magnitude. Earth's precession was historically called precession of the equinoxes because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the motion of the Sun along the ecliptic. This term is still used in non-technical discussions, that is, when detailed mathematics are absent. Historically, Hipparchus is credited with discovering precession of the equinoxes. The exact dates of his life are not known, but astronomical observations attributed to him by Ptolemy date from 147 BC to 127 BC. With improvements in the ability to calculate the gravitational force between planets during the first half of the 19th century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession instead of precession of the equinoxes. Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (actually an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times larger than planetary precession. In addition to the Moon and Sun, the other planets also cause a small movement of Earth's axis in inertial space, making the contrast in the terms lunisolar versus planetary misleading, so in 2006 the International Astronomical Union recommended that the dominant component be renamed the precession of the equator and the minor component be renamed precession of the ecliptic, but their combination is still named general precession. The precession of the equinoxes is caused by the gravitational forces of the Sun and the Moon, and to a lesser extent other bodies, on the Earth. It was first explained by Sir Isaac Newton. Axial precession is similar to the precession of a spinning top. In both cases, the applied force is due to gravity. For a spinning top, this force tends to be almost parallel to the rotation axis. For the Earth, however, the applied forces of the Sun and the Moon are nearly perpendicular to the axis of rotation. The Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 kilometers larger than its polar diameter. Because of the Earth's axial tilt, during most of the year the half of this bulge that is closest to the Sun is off-center, either to the north or to the south, and the far half is off-center on the opposite side. The gravitational pull on the closer half is stronger, since gravity decreases with distance, so this creates a small torque on the Earth as the Sun pulls harder on one side of the Earth than the other. 
The axis of this torque is roughly perpendicular to the axis of the Earth's rotation, so the axis of rotation precesses. If the Earth were a perfect sphere, there would be no precession. This average torque is perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. The magnitude of the torque from the Sun (or the Moon) varies with the gravitational object's alignment with the Earth's spin axis and approaches zero when it is orthogonal.

Although the above explanation involved the Sun, the same explanation holds true for any object moving around the Earth, along or close to the ecliptic, notably the Moon. The combined action of the Sun and the Moon is called the lunisolar precession. In addition to the steady progressive motion (resulting in a full circle in about 25,700 years) the Sun and Moon also cause small periodic variations, due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as the nutation. The most important term has a period of 18.6 years and an amplitude of less than 20 seconds of arc.

In addition to lunisolar precession, the actions of the other planets of the solar system cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.

The precession of the Earth's axis has a number of observable effects. First, the positions of the south and north celestial poles appear to move in circles against the space-fixed backdrop of stars, completing one circuit in 25,772 Julian years (2000 rate). Thus, while today the star Polaris lies approximately at the north celestial pole, this will change over time, and other stars will become the "north star". The south celestial pole currently lacks a bright star to mark its position, but over time precession will also cause bright stars to become south stars. As the celestial poles shift, there is a corresponding gradual shift in the apparent orientation of the whole star field, as viewed from a particular position on Earth.

Secondly, the position of the Earth in its orbit around the Sun at the solstices, equinoxes, or other time defined relative to the seasons, slowly changes. For example, suppose that the Earth's orbital position is marked at the summer solstice, when the Earth's axial tilt is pointing directly towards the Sun. One full orbit later, when the Sun has returned to the same apparent position relative to the background stars, the Earth's axial tilt is not now directly towards the Sun: because of the effects of precession, it is a little way "beyond" this. In other words, the solstice occurred a little earlier in the orbit. Thus, the tropical year, measuring the cycle of seasons (for example, the time from solstice to solstice, or equinox to equinox), is about 20 minutes shorter than the sidereal year, which is measured by the Sun's apparent position relative to the stars. Note that 20 minutes per year is approximately equivalent to one year per 25,772 years, so after one full cycle of 25,772 years the positions of the seasons relative to the orbit are "back where they started".
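These figures are easy to check against one another; here is a minimal Python sketch (the tropical and sidereal year lengths are standard reference values, not from the text):

```python
PERIOD_YEARS = 25_772                        # one full precession cycle, as quoted above

rate_arcsec = 360 * 3600 / PERIOD_YEARS      # precession rate in arcseconds per year
years_per_degree = PERIOD_YEARS / 360        # years needed to precess one degree

# Tropical vs. sidereal year, standard reference values in days:
tropical = 365.24219
sidereal = 365.25636
shift_minutes = (sidereal - tropical) * 24 * 60

print(f"rate: {rate_arcsec:.1f} arcsec/yr")                 # ~50.3
print(f"one degree every {years_per_degree:.1f} years")     # ~71.6
print(f"seasonal drift: about {shift_minutes:.1f} min/yr")  # ~20.4
```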
In actuality, other effects also slowly change the shape and orientation of the Earth's orbit, and these, in combination with precession, create various cycles of differing periods; see also Milankovitch cycles. The magnitude of the Earth's tilt, as opposed to merely its orientation, also changes slowly over time, but this effect is not attributed directly to precession.

For identical reasons, the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time, say the vernal equinox, slowly regresses a full 360° through all twelve traditional constellations of the zodiac, at the rate of about 50.3 seconds of arc per year (approximately 360 degrees divided by 25,772), or 1 degree every 71.6 years.

Age of Aquarius

We are allegedly in the Age of Aquarius. According to astrological mysticism, there will be unusual harmony and understanding in the world. Those who follow that belief system see it as a turning point in human consciousness in which balance is restored by consciously moving beyond the physical body. The Aquarius symbol is metaphoric in content, meaning "closure in water". Water represents the collective unconsciousness or consciousness hologram which creates the grid programs of our physical reality. Many connect the Age of Aquarius with the return of the goddess, priestess, or feminine energies, those that vibrate above/faster than physical frequency. This is the return to higher consciousness, the awakening of higher mind and thought in the alchemy of time. The Age of Aquarius is the polar opposite of the Age of Leo, in the bipolar reality in which we experience the physical experiment of time and illusion through the consciousness projection of the eye or All Seeing Eye.

Though there is still-controversial evidence that Aristarchus of Samos possessed distinct values for the sidereal and tropical years as early as c. 280 BC, the discovery of precession is usually attributed to Hipparchus (190-120 BC) of Rhodes or Nicaea, a Greek astronomer. According to Ptolemy's Almagest, Hipparchus measured the longitude of Spica and other bright stars. Comparing his measurements with data from his predecessors, Timocharis (320-260 BC) and Aristillus (~280 BC), he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century, in other words completing a full cycle in no more than 36000 years.

Virtually all Hipparchus' writings are lost, including his work on precession. They are mentioned by Ptolemy, who explains precession as the rotation of the celestial sphere around a motionless Earth. It is reasonable to assume that Hipparchus, like Ptolemy, thought of precession in geocentric terms as a motion of the heavens.

The first astronomer known to have continued Hipparchus' work on precession is Ptolemy in the 2nd century. Ptolemy measured the longitudes of Regulus, Spica, and other bright stars with a variation of Hipparchus' lunar method that did not require eclipses. Before sunset, he measured the longitudinal arc separating the Moon from the Sun. Then, after sunset, he measured the arc from the Moon to the star.
He used Hipparchus' model to calculate the Sun's longitude, and made corrections for the Moon's motion and its parallax (Evans 1998, pp. 251-255). Ptolemy compared his own observations with those made by Hipparchus, Menelaus of Alexandria, Timocharis, and Agrippa. He found that between Hipparchus' time and his own (about 265 years), the stars had moved 2°40', or 1° in 100 years (36" per year; the rate accepted today is about 50" per year or 1° in 72 years). He also confirmed that precession affected all fixed stars, not just those near the ecliptic, and his cycle had the same period of 36000 years as found by Hipparchus.

Most ancient authors did not mention precession and perhaps did not know of it. Besides Ptolemy, the list includes Proclus, who rejected precession, and Theon of Alexandria, a commentator on Ptolemy in the 4th century, who accepted Ptolemy's explanation. Theon also reports an alternate theory: "According to certain opinions ancient astrologers believe that from a certain epoch the solstitial signs have a motion of 8° in the order of the signs, after which they go back the same amount. . . ." (Dreyer 1958, p. 204) Instead of proceeding through the entire sequence of the zodiac, the equinoxes "trepidated" back and forth over an arc of 8°. The theory of trepidation is presented by Theon as an alternative to precession.

Various assertions have been made that other cultures discovered precession independently of Hipparchus. At one point it was suggested that the Babylonians may have known about precession. According to Al-Battani, the Chaldean astronomers had distinguished the tropical and sidereal year (the value of precession is equivalent to the difference between the tropical and sidereal years). He stated that they had, around 330 BC, an estimate of the length of the sidereal year of 365 days 6 hours 11 min (= 365.258 days) with an error of about 2 min. It was claimed by P. Schnabel in 1923 that Kidinnu theorized about precession in 315 BC. Otto Neugebauer's work on this issue in the 1950s superseded Schnabel's (and earlier, Kugler's) theory of a Babylonian discoverer of precession.

In recent decades, the hypothesis was revived and amplified in de Santillana and von Dechend's Hamlet's Mill (Harvard University Press, 1969). In an application of extreme Panbabylonism to archaeoastronomy, they proposed that a Babylonian mythological account of the precession gave rise via diffusion to similar myths around the world, even as far away as China, Polynesia, and North America. While their theory has not been widely accepted in academia, it anticipated the recent popular revival of interest in precessional archaeoastronomy.

Similar claims have been made that precession was known in Ancient Egypt prior to the time of Hipparchus, but these remain controversial. Some buildings in the Karnak temple complex, for instance, were allegedly oriented towards the point on the horizon where certain stars rose or set at key times of the year. A few centuries later, when precession made the orientations obsolete, the temples would be rebuilt. However, the observation that a stellar alignment has grown wrong does not mean that the Egyptians understood that the stars moved across the sky at the rate of about one degree per 72 years. Nonetheless, they kept accurate calendars, and if they recorded the date of the temple reconstructions it would be a fairly simple matter to plot the rough precession rate.
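The historical rates quoted above can be reproduced with a few lines of arithmetic; a minimal sketch in Python:

```python
# Ptolemy: the stars moved 2 deg 40 min in about 265 years.
shift_deg = 2 + 40 / 60                  # 2 deg 40 min as a decimal
rate = shift_deg * 3600 / 265            # arcseconds per year
print(f"Ptolemy's implied rate: {rate:.1f} arcsec/yr")       # ~36.2

# Hipparchus: "not less than 1 degree in a century" caps the cycle length.
print(f"Hipparchus' maximum period: {360 * 100} years")      # 36000

# Modern comparison: ~50.3 arcsec/yr gives the ~25,772-year cycle.
print(f"cycle at 50.3 arcsec/yr: {360 * 3600 / 50.3:,.0f} years")
```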
The Dendera Zodiac, a star-map from the Hathor temple at Dendera from a late (Ptolemaic) age, supposedly records precession of the equinoxes (Tompkins 1971). In any case, if the ancient Egyptians knew of precession, their knowledge is not recorded in surviving astronomical texts.

Michael Rice wrote in his Egypt's Legacy, "Whether or not the ancients knew of the mechanics of the Precession before its definition by Hipparchos the Bithynian in the second century BC is uncertain, but as dedicated watchers of the night sky they could not fail to be aware of its effects." (p. 128) Rice believes that "the Precession is fundamental to an understanding of what powered the development of Egypt" (p. 10), to the extent that "in a sense Egypt as a nation-state and the king of Egypt as a living god are the products of the realization by the Egyptians of the astronomical changes effected by the immense apparent movement of the heavenly bodies which the Precession implies." (p. 56) Following Carl Jung, Rice says that "the evidence that the most refined astronomical observation was practiced in Egypt in the third millennium BC (and probably even before that date) is clear from the precision with which the Pyramids at Giza are aligned to the cardinal points, a precision which could only have been achieved by their alignment with the stars. This fact alone makes Jung's belief in the Egyptians' knowledge of the Precession a good deal less speculative than once it seemed." (p. 31) The Egyptians also, says Rice, were "to alter the orientation of a temple when the star on whose position it had originally been set moved its position as a consequence of the Precession, something which seems to have happened several times during the New Kingdom." (p. 170)

The notion that an ancient Egyptian priestly elite tracked the precessional cycle over many thousands of years plays a central role in the theories expounded by Robert Bauval and Graham Hancock in their 1996 book Keeper of Genesis. The authors claim that the ancient Egyptians' monumental building projects functioned as a map of the heavens, and that associated rituals were an elaborate earthly acting-out of celestial events. In particular, the rituals symbolized the "turning back" of the precessional cycle to a remote ancestral time known as Zep Tepi ("first time") which, the authors calculate, dates to around 10,500 BC.

There has been speculation that the Mesoamerican Long Count calendar is somehow calibrated against the precession, but this view is not held by professional scholars of Mayan civilization.

A 12th-century text by Bhaskara II says: "sampat revolves negatively 30000 times in a Kalpa of 4320 million years according to Suryasiddhanta, while Munjala and others say ayana moves forward 199669 in a Kalpa, and one should combine the two, before ascertaining declension, ascensional difference, etc." Lancelot Wilkinson translated the last of these three verses too concisely to convey the full meaning, and skipped the portion "combine the two" which the modern Hindi commentary has brought to the fore. According to the Hindi commentary, the final value of the period of precession should be obtained by combining +199669 revolutions of ayana with −30000 revolutions of sampat to get +169669 per Kalpa, i.e. one revolution in 25461 years, which is near the modern value of 25771 years. Moreover, Munjala's value gives a period of 21636 years for ayana's motion, which is the modern value of precession when anomalistic precession is also taken into account.
The latter has a period of 136000 years now, but Bhaskara II gives its value as 144000 years (30000 in a Kalpa), calling it sampat. Bhaskara II did not give any name for the final term after combining the negative sampat with the positive ayana. But the value he gave indicates that by ayana he meant precession on account of the combined influence of orbital and anomalistic precessions, and by sampat he meant the anomalistic period, but defined it as equinox. His language is a bit confused, which he clarified in his own Vasanabhashya commentary on the Siddhanta Shiromani by saying that Suryasiddhanta was not available and he was writing on the basis of hearsay. Bhaskara II did not give his own opinion, he merely cited Suryasiddhanta, Munjala, and unnamed "others".

Yu Xi (4th century CE) was the first Chinese astronomer to mention precession. He estimated the rate of precession as 1° in 50 years (Pannekoek 1961, p. 92).

Middle Ages and Renaissance

In medieval Islamic astronomy, the Zij-i Ilkhani compiled at the Maragheh observatory set the precession of the equinoxes at 51 arc seconds per annum, which is very close to the modern value of 50.2 arc seconds. In the Middle Ages, Islamic and Latin Christian astronomers treated "trepidation" as a motion of the fixed stars to be added to precession. This theory is commonly attributed to the Arab astronomer Thabit ibn Qurra, but the attribution has been contested in modern times. Nicolaus Copernicus published a different account of trepidation in De revolutionibus orbium coelestium (1543). This work makes the first definite reference to precession as the result of a motion of the Earth's axis. Copernicus characterized precession as the third motion of the Earth. Over a century later, precession was explained in Isaac Newton's Philosophiae Naturalis Principia Mathematica (1687) to be a consequence of gravitation (Evans 1998, p. 246). However, Newton's original precession equations did not work and were revised considerably by Jean le Rond d'Alembert and subsequent scientists.

Hipparchus gave an account of his discovery in On the Displacement of the Solsticial and Equinoctial Points (described in Almagest III.1 and VII.2). He measured the ecliptic longitude of the star Spica during lunar eclipses and found that it was about 6° west of the autumnal equinox. By comparing his own measurements with those of Timocharis of Alexandria (a contemporary of Euclid who worked with Aristillus early in the 3rd century BC), he found that Spica's longitude had decreased by about 2° in about 150 years. He also noticed this motion in other stars. He speculated that only the stars near the zodiac shifted over time. Ptolemy called this his "first hypothesis" (Almagest VII.1), but did not report any later hypothesis Hipparchus might have devised. Hipparchus apparently limited his speculations because he had only a few older observations, which were not very reliable.

Why did Hipparchus need a lunar eclipse to measure the position of a star? The equinoctial points are not marked in the sky, so he needed the Moon as a reference point. Hipparchus had already developed a way to calculate the longitude of the Sun at any moment. A lunar eclipse happens during a full moon, when the Moon is in opposition. At the midpoint of the eclipse, the Moon is precisely 180° from the Sun. Hipparchus is thought to have measured the longitudinal arc separating Spica from the Moon. To this value, he added the calculated longitude of the Sun, plus 180° for the longitude of the Moon.
He did the same procedure with Timocharis' data (Evans 1998, p. 251). Observations like these eclipses, incidentally, are the main source of data about when Hipparchus worked, since other biographical information about him is minimal. The lunar eclipses he observed, for instance, took place on April 21, 146 BC, and March 21, 135 BC (Toomer 1984, p. 135 n. 14).

Hipparchus also studied precession in On the Length of the Year. Two kinds of year are relevant to understanding his work. The tropical year is the length of time that the Sun, as viewed from the Earth, takes to return to the same position along the ecliptic (its path among the stars on the celestial sphere). The sidereal year is the length of time that the Sun takes to return to the same position with respect to the stars of the celestial sphere. Precession causes the stars to change their longitude slightly each year, so the sidereal year is longer than the tropical year. Using observations of the equinoxes and solstices, Hipparchus found that the length of the tropical year was 365+1/4−1/300 days, or 365.24667 days (Evans 1998, p. 209). Comparing this with the length of the sidereal year, he calculated that the rate of precession was not less than 1° in a century. From this information, it is possible to calculate that his value for the sidereal year was 365+1/4+1/144 days (Toomer 1978, p. 218). By giving a minimum rate he may have been allowing for errors in observation.

To approximate his tropical year, Hipparchus created his own lunisolar calendar by modifying those of Meton and Callippus in On Intercalary Months and Days (now lost), as described by Ptolemy in the Almagest III.1 (Toomer 1984, p. 139). The Babylonian calendar used a cycle of 235 lunar months in 19 years since 499 BC (with only three exceptions before 380 BC), but it did not use a specified number of days. The Metonic cycle (432 BC) assigned 6,940 days to these 19 years, producing an average year of 365+1/4+1/76 or 365.26316 days. The Callippic cycle (330 BC) dropped one day from four Metonic cycles (76 years) for an average year of 365+1/4 or 365.25 days. Hipparchus dropped one more day from four Callippic cycles (304 years), creating the Hipparchic cycle with an average year of 365+1/4−1/304 or 365.24671 days, which was close to his tropical year of 365+1/4−1/300 or 365.24667 days. The three Greek cycles were never used to regulate any civil calendar; they only appear in the Almagest in an astronomical context.

We find Hipparchus' mathematical signatures in the Antikythera Mechanism, an ancient astronomical computer of the 2nd century BC. The mechanism is based on a solar year, on the Metonic cycle, which is the period after which the Moon reappears at the same star in the sky with the same phase (a full Moon appears at the same position in the sky approximately every 19 years), on the Callippic cycle (which is four Metonic cycles, and more accurate), on the Saros cycle, and on the Exeligmos cycle (three Saros cycles, for accurate eclipse prediction). The study of the Antikythera Mechanism proves that the ancients had been using very accurate calendars based on all the aspects of solar and lunar motion in the sky. In fact, the Lunar Mechanism which is part of the Antikythera Mechanism depicts the motion of the Moon and its phase, for a given time, using a train of four gears with a pin and slot device which gives a variable lunar velocity that is very close to the second law of Kepler, i.e. it takes into account the fast motion of the Moon at perigee and slower motion at apogee.
This discovery proves that Hipparchus' mathematics was much more advanced than Ptolemy describes in his books, as it is evident that he developed a good approximation of Kepler's second law.

Mithraism was a mystery religion or school based on the worship of the god Mithras. Many underground temples were built in the Roman Empire from about the 1st century BC to the 5th century AD. Understanding Mithraism has been made difficult by the near-total lack of written descriptions or scripture; the teachings must be reconstructed from iconography found in mithraea (a mithraeum was a cave or underground meeting place that often contained bas-reliefs of Mithras, the zodiac, and associated symbols). Until the 1970s most scholars followed Franz Cumont in identifying Mithras with the Persian god Mithra. Cumont's thesis was re-examined in 1971, and Mithras is now believed to be a syncretic deity only slightly influenced by Persian religion.

Mithraism is recognized as having pronounced astrological elements, but the details are debated. One scholar of Mithraism, David Ulansey, has interpreted Mithras (Mithras Sol Invictus, the unconquerable sun) as a second sun or star that is responsible for precession. He suggests the cult may have been inspired by Hipparchus' discovery of precession. Part of his analysis is based on the tauroctony, an image of Mithras sacrificing a bull, found in most of the temples. According to Ulansey, the tauroctony is a star chart. Mithras is a second sun or hyper-cosmic sun and/or the constellation Perseus, and the bull is Taurus, a constellation of the zodiac. In an earlier astrological age, the vernal equinox had taken place when the Sun was in Taurus. The tauroctony, by this reasoning, commemorated Mithras-Perseus ending the "Age of Taurus" (about 2000 BC based on the vernal equinox, or about 11,500 BC based on the autumnal equinox).

The iconography also contains two torch-bearing boys (Cautes and Cautopates) on each side of the zodiac. Ulansey, and Walter Cruttenden in his book Lost Star of Myth and Time, interpret these to mean ages of growth and decay, or enlightenment and darkness; primal elements of the cosmic progression. Thus Mithraism is thought to have something to do with the changing ages within the precession cycle or Great Year (Plato's term for one complete precession of the equinox).

Changing Pole Stars

A consequence of the precession is a changing pole star. Currently Polaris is extremely well-suited to mark the position of the north celestial pole, as it is about half a degree away from it and is a moderately bright star (visual magnitude 2.1, variable). On the other hand, Thuban in the constellation Draco, which was the pole star in 3000 BC, is much less conspicuous at magnitude 3.67 (one-fifth as bright as Polaris); today it is all but invisible in light-polluted urban skies. The brilliant Vega in the constellation Lyra is often touted as the best north star, but it only fulfilled that role around 12000 BC and will do so again around the year AD 14000; in reality it never comes closer than 5° to the pole. When Polaris becomes the north star again around AD 27800, it will, due to its proper motion, be farther away from the pole than it is now, while in 23600 BC it came closer to the pole. Finding the south celestial pole in the sky is harder at this moment, as that area is a particularly bland portion of the sky, and the nominal south pole star is Sigma Octantis, which with magnitude 5.5 is barely visible even under a properly dark sky.
However, that will change in the 80th to 90th centuries, when the south celestial pole travels through the False Cross. It is also seen from the star map that the south pole, nicely pointed to by the Southern Cross for the last 2000 years or so, is moving towards that constellation. As a consequence, the Southern Cross is now no longer visible from subtropical northern latitudes, as it was in the time of the ancient Greeks. Still pictures like these, found in many astronomy books, are only first approximations, as they do not take into account the variable speed of the precession, the variable obliquity of the ecliptic, the planetary precession (which makes the centre not the ecliptic pole itself, but a circle about 6° away from it) and the proper motions of the stars.

Polar Shift and Equinoxes Shift

The rotation axis of the Earth describes, over a period of about 25800 years, a small circle (blue) among the stars, centred around the ecliptic north pole (blue E) and with an angular radius of about 23.4°, the angle known as the obliquity of the ecliptic. The orange axis was the Earth's rotation axis 5000 years ago, when it pointed to the star Thuban. The yellow axis, pointing to Polaris, is the situation now. Note that when the celestial sphere is seen from outside, constellations appear in mirror image. Also note that the daily rotation of the Earth around its axis is opposite to the precessional rotation.

When the polar axis precesses from one direction to another, the equatorial plane of the Earth (indicated with the circular grid around the equator) and the associated celestial equator will move too. Where the celestial equator intersects the ecliptic (red line) there are the equinoxes.
This is how the equinoctial shift is a consequence of the precession of the rotation axis of the Earth, and vice versa. The second drawing shows the perspective from a near-Earth position as seen through a very wide-angle lens (hence the apparent distortion).

The precession of the equinoxes is caused by the differential gravitational forces of the Sun and Moon on the Earth. In popular science books this is often explained with the analogy of a spinning top. It is indeed the same physical effect; however, some crucial details differ. In a spinning top it is gravity that causes the top to wobble, which in turn causes precession; the applied force is thus, in the first instance, parallel to the rotation axis. For the Earth, however, the applied forces of the Sun and Moon are in the first instance perpendicular to it. How, then, can they cause precession? The answer is that the forces do not act on the rotation axis. Instead they act on the equatorial bulge: because of its own rotation, the Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 km larger than the polar one. If the Earth were a perfect sphere, there would be no precession.

The figure explains how this works. The Earth is drawn as a perfect sphere (so that all gravitational forces acting on it can be treated as one force acting on its centre), and the bulge is approximated as a torus of mass (blue) around its equator. Green arrows indicate the gravitational forces from the Sun on some extreme points. These forces are not parallel, as they all point towards the centre of the Sun. Therefore the forces acting on the northernmost and southernmost parts of the equatorial bulge have a component perpendicular to the ecliptic plane and directed towards it. We find these components (small cyan arrows) when the average gravitational force on the centre of the Earth is subtracted (because that force serves as the centripetal force for the Earth in its orbit around the Sun). In all cases there are also radial components in addition to these tangential ones, but they are not shown, as they do not contribute to the precession (they contribute to the tides). It is now clear how these tangential forces create a torque (orange), and how this torque, added to the rotation (magenta), shifts the rotation axis slightly to a new position (yellow). Repeat this again and again, and one sees how the axis precesses along the white circle, which is centred around the ecliptic pole. It is important to note that the torque is always in the same direction, perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. It is also important to note that the torque is everywhere the same, whatever the position of the Earth in its orbit around the Sun; the precession thus progresses steadily and does not change with the seasons. Although the above explanation involved the Sun, the same story holds true for any object moving around the Earth along (or close to) the ecliptic, in particular the Moon. The combined action of the Sun and the Moon is called the lunisolar precession.
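The headline numbers in this article are easy to sanity-check. Here is a minimal Python sketch; the cycle length of 25,772 years is an assumed round figure (the text itself quotes values between 25,700 and 25,800), and the year lengths are standard reference values:

```python
# Assumed figures for illustration only.
CYCLE_YEARS = 25_772          # one full precession cycle, in years
TROPICAL_YEAR_DAYS = 365.2422 # equinox to equinox
SIDEREAL_YEAR_DAYS = 365.2564 # same position against the stars

# Annual precession rate: 360 degrees spread over one cycle,
# expressed in arcseconds per year.
rate = 360 * 3600 / CYCLE_YEARS
print(f"precession rate: {rate:.1f} arcsec/year")        # ~50.3

# Difference between sidereal and tropical year, in minutes.
diff_minutes = (SIDEREAL_YEAR_DAYS - TROPICAL_YEAR_DAYS) * 24 * 60
print(f"sidereal - tropical: {diff_minutes:.1f} minutes") # ~20.4

# Years until the Sun's equinoctial position drifts by one full day.
print(f"one day of drift every {24 * 60 / diff_minutes:.0f} years")  # ~70
```

The last figure agrees, to rounding, with the "one day per 71 calendar years" quoted below.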
In addition to the steady progressive motion (resulting in a full circle in about 25700 years), the Sun and Moon also cause small periodic variations due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as nutation. The most important term has a period of 18.6 years and an amplitude of less than 20 arcseconds.

In addition to lunisolar precession, the actions of the other planets of the solar system cause the whole ecliptic to rotate slowly around an axis that has an ecliptic longitude of about 174°, measured on the instantaneous ecliptic. This planetary precession shift is only 0.47 arcseconds per year (roughly a hundred times smaller than lunisolar precession) and takes place along the instantaneous equator. The sum of the two precessions is known as the general precession.

Effects of Axial Precession on the Seasons

This figure illustrates the effects of axial precession on the seasons, relative to perihelion and aphelion. The precession of the equinoxes can cause periodic climate change (see Milankovitch cycles), because the hemisphere that experiences summer at perihelion and winter at aphelion (as the southern hemisphere does presently) is in principle prone to more severe seasons than the opposite hemisphere.

Hipparchus estimated the Earth's precession around 130 BC, adding his own observations to those of Babylonian and Chaldean astronomers of the preceding centuries. In particular, he measured the distance of stars like Spica from the Moon and Sun at the time of lunar eclipses, and because he could compute the distance of the Moon and Sun from the equinox at those moments, he noticed that Spica and other stars appeared to have moved over the centuries. Precession causes the cycle of seasons (the tropical year) to be about 20.4 minutes shorter than the period for the Earth to return to the same position with respect to the stars as one year previously (the sidereal year). This results in a slow change (one day per 71 calendar years) in the position of the Sun with respect to the stars at an equinox. It is significant for calendars and their leap year rules.

Source: Crystalinks / Wikipedia
http://www.cityofshamballa.net/profiles/blogs/way-21-12-2012-is-important-for-us
An expression, also called an operation, is a technique of combining two or more values or data fields, either to modify an existing value or to produce a new value. Accordingly, to create an expression or perform an operation, you need at least one value or field and one symbol. A value or field involved in an operation is called an operand. A symbol involved in an operation is called an operator. A unary operator is one that uses only one operand. An operator is referred to as binary if it operates on two operands.

A constant is a value that does not change. The constants you will be using in your databases have already been created and are built into Microsoft Access. Visual Basic for Applications (VBA), the version of Microsoft Visual Basic that ships with Microsoft Access, also provides many constants; however, even if you are aware of them, you will not be able to use those constants here, as Microsoft Access does not inherently "understand" them. For this reason, we will mention only the constants you can use when building regular expressions. The algebraic numbers you have been using all along are constants, because they never change. Examples of constant numbers are 12, 0, 1505, and 88146; any number you can think of is a constant. Every letter of the alphabet is a constant and is always the same; examples of constant letters are d, n, and c. Some characters on your keyboard represent symbols that are neither letters nor digits, such as &, |, @, and !. These are constants too. The names of people are also constants; in fact, any name you can think of is a constant.

To provide a value to an existing field, you can use an operator called assignment, whose symbol is "=". It uses the following syntax:

Field/Object = Value/Field/Object

The operand on the left side of the = operator is referred to as the left value. The operand on the right side of the operator is referred to as the right value; it can be a constant, a value, an expression, the name of a field, or an object. In some other cases, the assignment operator will be part of a longer expression. We will see examples as we move on.

An algebraic value is considered positive if it is greater than 0. By mathematical convention, when a value is positive, you do not need to express it with the + operator; writing the number without any symbol signifies that it is positive. Therefore, the numbers +4, +228, and +90335 can be, and usually are, written as 4, 228, and 90335. Because such a value does not display a sign, it is referred to as unsigned. A value is referred to as negative if it is less than 0. To express a negative value, it must be preceded by the - symbol; examples are -12, -448, and -32706. A value accompanied by - is referred to as negative. The - sign must be typed on the left side of the number it negates. Remember that if a number does not have a sign, it is considered positive; therefore, whenever a number is negative, it must carry a - sign. In the same way, if you want to change a value from positive to negative, you can simply add a - sign to its left, and if you want to negate the value of a field and assign it to another field, you can type the - operator on its left when assigning it. Besides a numeric value, the value of a field or an object can also be made negative by typing a - sign to its left. For example, -txtLength means that the value of the control named txtLength must be made negative.
Addition is used to add one value or expression to another. It is performed using the + symbol and its syntax is:

Value1 + Value2

Addition allows you to add two numbers such as 12 + 548 or 5004.25 + 7.63. After performing the addition, you get a result, which you can provide to another field of a form or report. This can be done using the assignment operator. The syntax would be:

= Value1 + Value2

To use the result of this type of operation, write it in the Control Source property of the field that should show the result.

Subtraction is performed by taking one value away from another. This is done using the - symbol. The syntax used is:

Value1 - Value2

The value of Value2 is subtracted from the value of Value1.

Multiplication adds one value to itself a certain number of times, set by the second value. The multiplication is performed with the * sign, which is typed with Shift + 8. Here is an example:

Value1 * Value2

During the operation, Value1 is repeatedly added to itself, Value2 times. The result can be assigned to the Control Source of a field as before.

Division is used to get the fraction of one number in terms of another number. Microsoft Access provides two types of result for the division operation. If you want the result of the operation to be a whole number (an integer), use the backslash "\" as the operator. Here is an example:

Value1 \ Value2

This operation can be performed on any two valid numbers, with or without decimal parts; after the operation, the result is a whole number. The second type of division produces a decimal number. It is performed with the forward slash "/". Its syntax is:

Value1 / Value2

After the operation is performed, the result is a decimal number.

Exponentiation is the ability to raise a number to the power of another number. This operation is performed using the ^ operator (Shift + 6). It follows the mathematical formula y^x, and in Microsoft Access it is written as y^x and means the same thing. Either or both y and x can be values or expressions, but they must carry valid values that can be evaluated. When the operation is performed, the value of y is raised to the power of x. You can display the result of such an operation in a field using the assignment operator, for example =Value1 ^ Value2.

The division operation gives a number with or without decimal values, which is fine in some circumstances. Sometimes, though, you will want the value remaining after a division produces a whole-number result. This remainder operation is performed with the keyword Mod. Its syntax is:

Value1 Mod Value2

The result of the operation can be used as you see fit, or you can display it in a control using the assignment operator as follows:

= Value1 Mod Value2

In previous lessons, we learned that a property is something that characterizes or describes an object. For example, users mainly use a text box either to read the text it contains or to change its content, by changing the existing text or by entering new text. Therefore, the text the user types in a text box is a property of the text box. To access a property of an object, type the name of the object, followed by a period, followed by the name of the property you need; the syntax is Object.Property. The property you are trying to use must be a valid property of the object. In Microsoft Access, to use a property of an object, you must know, either from experience or with certainty, that the property exists.
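For readers more comfortable outside Access, the behaviour of these operators can be mimicked in Python. This is only an illustrative analogy, not Access itself; note in particular that Access's \ rounds its operands to integers before truncating, while Python's // floors, so the two can differ on negative or fractional operands:

```python
value1, value2 = 17, 5

print(value1 + value2)   # addition: 22
print(value1 - value2)   # subtraction: 12 (value2 subtracted from value1)
print(value1 * value2)   # multiplication: 85
print(value1 / value2)   # decimal division: 3.4    (Access: 17 / 5)
print(value1 // value2)  # whole-number division: 3 (Access: 17 \ 5)
print(value1 % value2)   # remainder: 2             (Access: 17 Mod 5)
print(value1 ** value2)  # exponentiation: 1419857  (Access: 17 ^ 5)
```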
Even so, unfortunately, not all properties are available in Microsoft Access.

To name our objects so far, in some cases we used a name made of one word without spaces; in other cases, we used spaces or special characters in a name. This is possible because Microsoft Access allows a great deal of flexibility in the names used in a database. Unfortunately, when such names get involved in an expression, there can be an error, or the result can be unpredictable. To make sure Microsoft Access can recognize any name in an expression, you should include it between an opening square bracket "[" and a closing square bracket "]". Examples are [© Year], [Soc. Sec. #], or [Date of Birth]. In the same way, even if the name is one word, to be safe you should always include it in square brackets. Examples are [Country], [FirstName], or [SocialSecurityNumber]. Therefore, the =txtLength expression that we referred to can be written =[txtLength].

The objects used in Microsoft Access are grouped in categories called collections. For example, the forms belong to a collection of objects called Forms; consequently, all forms of your database project belong to the Forms collection. The reports belong to a collection of objects called Reports, and all reports of your database belong to the Reports collection. The data fields belong to a collection called Controls, and all controls of a form or report of your database belong to the Controls collection. To call a particular object in an expression, use the exclamation point operator "!": type the name of the collection, followed by the ! operator, followed by the name of the object you want to access. For example, on a form, if you have a text box called txtLength and you want to refer to it, you can type [Controls]![txtLength]. Therefore, the =txtLength expression that we referred to can be written =Controls!txtLength, and =[txtLength] can be written =Controls![txtLength] or =[Controls]![txtLength].

The name of the collection is used to perform what is referred to as qualification: the name of the collection "qualifies" the object. In other words, it helps the database engine locate the object by referring to its collection. This is useful when two objects of different categories are being referred to. In a database, Microsoft Access allows two objects to have the same name as long as they do not belong to the same category. For example, you cannot have two forms called Employees in the same database, and you cannot have two reports named Contracts in the same database; on the other hand, you can have a form named Employees and a report named Employees in the same database. For this reason, when creating expressions, you are strongly advised to qualify the object you are referring to, using its collection. Therefore, when an object named Employees is referred to in an expression, you should specify its collection using the ! operator. An example would be Forms!Employees, which means the Employees form of the Forms collection. If the name of the form is made of more than one word, or simply for safety, it is strongly suggested that you use square brackets to delimit the name of the form; the form would then be accessed as Forms![Employees]. To refer to a control placed on a form or report, type the Forms collection, followed by the ! operator, followed by the name of the form, followed by the ! operator, followed by the name of the control. An example would be Forms!People!LastName.
Using the assignment operator introduced earlier, if on a form named People you have a control named LastName and you want to assign its value to another control named FullName, then in the Control Source property of the FullName field you can enter any of the following expressions:

=LastName
=[LastName]
=Controls!LastName
=[Controls]![LastName]
=Forms!People!LastName
=[Forms]![People]![LastName]

These expressions all produce the same result.

Parentheses are used in two main circumstances: in expressions (or operations) and in functions. Parentheses in an expression help to create sections. This typically occurs when more than one operator is used in an operation. Consider the following operation:

8 + 3 * 5

The result of this operation depends on whether you want to add 8 to 3 and then multiply the result by 5, or multiply 3 by 5 and then add the result to 8. Parentheses allow you to specify which operation should be performed first in a multi-operator operation. In our example, if you want to add 8 to 3 first and then multiply the result by 5, you would write (8 + 3) * 5, which produces 55. On the other hand, if you want to multiply 3 by 5 first and then add the result to 8, you would write 8 + (3 * 5), which produces 23. As you can see, the results differ when parentheses are used in an operation that involves various operators. This behaviour is governed by a set of rules called operator precedence, which determines which operation executes before which; parentheses allow you to control the sequence of these operations.

A function is a task that must be performed to produce a result on a table, a form, or a report. It is like an operation or an expression, with the difference that someone else created it and you can simply use it. For example, instead of the addition operator "+", you could use a function to add two values. In practice, you cannot create a function in Microsoft Access; you can only use those that have already been created and exist in the product. These are referred to as built-in functions. If you had to create a function (remember that we cannot create a function in Microsoft Access; the following sections are only hypothetical, but illustrative of what a function is), a formula you might use is:

FunctionName()
    (body of the function)
End

This syntax is very simplistic, but it indicates that the minimum piece of information a function needs is a name. The name allows you to refer to this function in other parts of the database. The name of the function is followed by parentheses. As stated already, a function is meant to perform a task. That task is defined, or described, in the body of the function; in our simple syntax, the body starts just under the name and parentheses and stops just above the End word. The person who creates a function also decides what the function can do. Following our simple formula, if we wanted a function that can open Solitaire, it could appear as follows:

FunctionExample()
    Open Solitaire
End

Once a function has been created, it can be used; using a function is referred to as calling it. To call a simple function like the above FunctionExample, you would just type its name. The person who creates a function also decides what kind of value the function can return. For example, if you create a function that performs a calculation, the function may return a number. If you create another function that combines a first name and a last name, you can make the function return a string that represents a full name.
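The same precedence rules apply in most programming languages, so the example from this section is easy to try out; here it is in Python:

```python
print(8 + 3 * 5)    # 23 -- multiplication binds tighter than addition
print((8 + 3) * 5)  # 55 -- parentheses force the addition first
print(8 + (3 * 5))  # 23 -- these parentheses only make the default explicit
```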
When asked to perform its task, a function may need one or more values to work with. If a function needs a value, that value is called a parameter. The parameter is provided in the parentheses of the function. The formula used to create such a function would be:

ReturnValue FunctionName(Parameter)
End

Once again, the body of the function defines what the function does. For example, if you were writing a function that multiplies its parameter by 12.58, it would appear roughly as follows:

Decimal FunctionName(Parameter)
    Parameter * 12.58
End

While one function may need a single parameter, another may need many of them. The number and types of parameters of a function depend on its goal. When a function uses more than one parameter, a comma separates them in the parentheses. The syntax used is:

ReturnValue FunctionName(Parameter1, Parameter2, Parameter_n)
End

If you were creating a function that adds its two parameters, it would appear as follows:

NaturalNumber AddTwoNumbers(Parameter1, Parameter2)
    Parameter1 + Parameter2
End

Once a function has been created, it can be used in other parts of the database. Once again, using a function is referred to as calling it. If a function takes one or more parameters, calling it differs from calling a function that takes none. We have already seen how to call a function that takes no parameters and assign it to a field through its Control Source. If a function takes one parameter, then when calling it you must provide a value for that parameter; otherwise the function will not work (when you display the form or report, Microsoft Access will display an error). When you call a function that takes a parameter, the value supplied for the parameter is called an argument; we therefore say that the function takes one argument. In the same way, a function with more than one parameter must be called with the matching number of arguments.

To call a function that takes an argument, type the name of the function, followed by the opening parenthesis "(", followed by the value (or the field name) that will be the argument, followed by the closing parenthesis ")". The argument you pass can be a constant number, for example FunctionName(1250). The value passed as argument can also be the name of an existing field. The rule to respect is that, when Microsoft Access is asked to perform the task(s) of the function, the argument must provide, or be ready to provide, a valid value. As with an argument-less function, when calling this type of function you can assign it to a field by using the assignment operator in its Control Source property, for example =FunctionName([txtLength]).

If the function takes more than one argument, then to call it, type the values for the arguments in the exact order indicated, separated from each other by commas. As with the other functions, the call can be assigned to a field in its Control Source. All the arguments can be constant values, all of them can be names of fields or objects, or some arguments can be passed as constants and others as names of fields.

We have mentioned that, when calling a function that takes an argument, you must supply a value for the argument. There is an exception: depending on how the function was created, it may be configured to use its own value if you fail, forget, or choose not to provide one. This is known as the default argument.
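Since Access expressions cannot define functions of their own, here is what the hypothetical AddTwoNumbers, and the default-argument behaviour described just above, would look like in a language that can, Python (the names and the 12.58 factor mirror the pseudocode; everything else is illustrative):

```python
def add_two_numbers(parameter1, parameter2):
    """Return the sum of its two arguments."""
    return parameter1 + parameter2

def multiply(parameter, factor=12.58):
    """Multiply parameter by factor; factor has a default value,
    so the second argument is optional when calling."""
    return parameter * factor

print(add_two_numbers(240, 65))  # called with two arguments: 305
print(multiply(10))              # optional argument omitted: 125.8
print(multiply(10, 2))           # default overridden: 20
```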
Not all functions follow this rule, and you would know either by checking the documentation of the function or through experience. If a function that takes one argument has a default value for it, then you do not have to supply a value when calling that function; such an argument is considered optional. Whenever in doubt, you should provide your own value for the argument. That way, you are not only on the safe side, but you also know with certainty what value the function had to deal with. If a function takes more than one argument, some arguments may have default values while others do not; the arguments that have default values can be left out, and you do not have to supply them.

To assist you with writing expressions or calling a (built-in) function, and to reduce the likelihood of a mistake, Microsoft Access is equipped with a useful dialog box named the Expression Builder. The Expression Builder is used to create an expression, or call a function, that will be used as the Control Source of a field. To access the Expression Builder, open the Property Sheet for the control that will use the expression or function, and click its ellipsis button. This calls up the Expression Builder dialog box.

Like every regular dialog box, the Expression Builder starts at the top with its title bar, which displays its caption and its system Close button. Unlike a regular dialog box, the Expression Builder is resizable: you can enlarge, narrow, heighten, or shorten it, to a certain extent. Under the title bar there is a label followed by a link, Calculated Control; if you click that link, a Help window comes up. Under the link there is an example of an expression. The main upper area of the Expression Builder shows a rectangular text box with a white background, used to show the current expression once you have written it. If you already know what you want, you can directly type an expression, a function, or a combination of the two.

The right section of the Expression Builder displays a few buttons. After creating an expression, to submit it, you click OK. To abandon whatever you have done, you can click Cancel or press Esc. To get help while using the Expression Builder, you can click Help. To show a reduced height of the Expression Builder, click the << Less button; the button then changes to More >>. To show the whole dialog box again, click More >>.

Under the text box there are three boxes. The left list displays some categories of items. Some items in the left list appear with a + button. To access an object, expand its node collection by double-clicking it or clicking its + button. After you have expanded a node, a list appears; in some cases, such as the Forms node, another list of categories may appear. To access an object of a collection, click its node in the left list. This fills the middle list with items that depend, of course, on what was selected in the left list. The top node is the name of the form or report on which you are working; under that name is the Functions node. To access a function, first expand the Functions node. To use one of the Microsoft Access built-in functions, click Built-In Functions in the left list; the middle list then displays categories of functions. If you see the function you want to use, you can use it. If the right list is too long and you know the type of the function you are looking for, you can click its category in the middle list and locate the function in the right list.
Once you see the function you want in the right list, you can double-click it. If it is a parameter-less function, its name and parentheses are added to the expression area. If the function is configured to take arguments, its name and a placeholder for each argument are added to the expression area; you must then replace each placeholder with the appropriate value or expression. To assist you with functions, the Expression Builder shows, in its bottom section, the syntax of the function, including its name and the name(s) of its argument(s). To get more information about a function, click its link in the bottom section of the Expression Builder; a help window is displayed.

Besides the built-in functions, if you have created a function in the current database, click the name of the database in the left list and its function(s) will display in the middle list. Depending on the object that was clicked in the left list, the middle list can display the Windows controls that are part of, or are positioned on, the form or report. For example, if you click the name of a form in the left list, the middle list displays the names of all the controls on that form. To use one of the controls on the object, you can double-click its item in the middle list; when you do, the name of the control appears in the expression area. Some items in the middle list hold their own list of items. To show that list, you must click an item in the middle list. For example, to access the properties of a control positioned on a form, expand the Forms node in the left list and expand All Forms; then, in the left list, click the name of a form. This causes the middle list to display the controls of the selected form. To access the properties of a control, click its name in the middle list; the right list then shows its properties. As mentioned already, after creating the expression, if you are satisfied with it, click OK.
http://www.functionx.com/access/Lesson16.htm
Layer 1, the Physical Layer, defines the characteristics of the hardware necessary to carry the data transmission signal. Things such as voltage levels, and the number and locations of interface pins, are defined in this layer (RS232C, V.35, IEEE 802.3, ...). TCP/IP does not define physical standards; it makes use of existing ones. This layer describes the way data is actually transmitted on the network medium. The Physical Layer communicates directly with the communication medium and has two responsibilities: sending bits and receiving bits. A binary digit, or bit, is the basic unit of information in data communication. A bit can have only two values, 0 or 1, represented by different states on the communication medium. Other communication layers are responsible for collecting these bits into groups that represent message data.

Bits are represented by changes in signals on the network medium. Some wire media represent 0s and 1s with different voltages, some use distinct audio tones, and yet others use more sophisticated methods, such as state transitions. A wide variety of media are used for data communication, including electric cable, fibre optics, light waves, radio, and microwaves. The medium used can vary; a different medium simply necessitates a different set of physical layer protocols. Thus, the upper layers are completely independent of the particular process used to deliver bits through the network medium. The physical layer describes the bit patterns to be used, but does not define the medium; it describes how data are encoded into media signals and the characteristics of the media attachment interface.

Layer 2, the Data Link Layer, is responsible for delivering the data without errors to the next layer. It formats the message into data frames for transmission and defines the network frames. This layer synchronises the transmission and is responsible for error control at the frame level (a frame is a block of data with network-specific addressing information), including error correction, so that information received from the physical layer can be passed on reliably. CRC verification (a check for errors in the frame) is established in this layer. This layer carries the access methods for Ethernet and Token Ring, and it also provides the address information, placed at the front of the transmitted frame, for the physical layer.

Data Frame Format: As data is exchanged between computers, communication processes need to make decisions about various aspects of the exchange process. As the receiving computer listens to the wire to recover messages sent to it, it requires a mechanism by which it can tell whether to treat the signals it detects as data-carrying signals or to discard them as mere noise. If the detection mechanism determines that what is on the wire is indeed a data-carrying signal, the second decision the receiving end must make is whether the data was intended for itself, for some other computer on the network, or for a broadcast. If the receiving end engages in the process of recovering data from the wire, it needs to be able to tell where the data train intended for it ends; after this determination is made, the receiver should discard subsequent signals unless it can determine that they belong to a new, impending transmission. When data reception is complete, another concern arises: establishing that the recovered data withstood corruption from noise and electromagnetic interference.
In the event of detecting corruption, the receiver must have the capability of dealing with it. As can be concluded from the points made earlier, in addition to user data, computers must be able to exchange additional information about the progress of the physical communication process. To accommodate these decision-making requirements, network designers decided to deliver data on the wire in well-defined packages called data frames. It is important to realise that the primary concern of the receive process is the reliable recovery of the information embedded in the information field, with no attention paid to the nature of the actual contents of that field. Instead, processing the data in the information field is delegated to another process, while the receive process reverts to listening mode to take care of future transmissions.

The reliable delivery of data across the underlying physical network is handled by the Data Link Layer. TCP/IP rarely creates protocols in this layer; most RFCs that relate to this layer talk about how IP can make use of existing data link protocols. This layer defines how streams of bits are put together into manageable chunks of data. Devices that can communicate on a network are frequently called nodes, stations, or devices. The data link layer is responsible for providing node-to-node communication on a single, local network. To provide this service, the data link layer must perform two functions: it must provide an address mechanism that enables messages to be delivered to the correct nodes, and it must translate messages from upper layers into bits that the physical layer can transmit.

When the data link layer receives a message to transmit, it formats the message into a data frame (packet). The sections of a frame are called fields. Figure 31 shows an example of a data frame. The fields in Figure 31 are as follows:

Start Indicator: A specific bit pattern that indicates the start of a data frame.
Source Address: The address of the sending node, so that replies to messages can be addressed properly.
Destination Address: The address of the receiving node, so that each node can identify messages it should receive.
Control: Additional control information.
Data: All data that was forwarded to the data link layer from upper protocol layers.
Error Control: Information that enables the receiving node to determine whether an error occurred during transmission.

Frame delivery on a local network is extremely simple: a sending node simply transmits the frame. Each node on the network sees every frame and examines the destination address. When the destination address of a frame matches the node's address, the data link layer at that node receives the frame and sends it up the protocol stack. Data units at the data link layer are most commonly called frames, although the term packet is used with some protocols. Figure 32 shows how simple delivering a frame on a local network can be. In Figure 32, the source node simply builds a frame that includes the recipient's destination address. The sender's responsibility ends when the addressed frame is placed on the network. On LANs, each node examines each frame that is sent on the network, looking for frames with a destination address that matches its own MAC address. Frames that match are received; frames that don't match are discarded by Ethernet networks or forwarded to the next node by Token Ring networks.

Frames and Network Interfaces: The data link layer defines the format of data on the network.
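The frame layout of Figure 31, the address-matching rule, and the error-control check can be modelled in a few lines. This is a toy sketch: the field names follow the figure, the byte-sum "checksum" stands in for a real CRC, and the broadcast address anticipates the discussion a little further on:

```python
from dataclasses import dataclass

BROADCAST = "ff:ff:ff:ff:ff:ff"

@dataclass
class Frame:
    source: str         # address of the sending node
    destination: str    # address of the intended receiver (or broadcast)
    control: int        # additional control information
    data: bytes         # payload handed down from the upper layers
    error_control: int  # toy checksum standing in for a real CRC

def build_frame(source: str, destination: str, data: bytes) -> Frame:
    return Frame(source, destination, 0, data, sum(data) % 256)

def receive(frame: Frame, my_address: str):
    """Accept a frame only if it is addressed to us (or broadcast) and intact."""
    if frame.destination not in (my_address, BROADCAST):
        return None                      # discard: not for this node
    if sum(frame.data) % 256 != frame.error_control:
        return None                      # discard: corrupted in transit
    return frame.data                    # hand the payload up the stack

f = build_frame("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", b"hello")
print(receive(f, "00:aa:bb:cc:dd:ee"))  # b'hello'
print(receive(f, "00:11:22:33:44:55"))  # None -- address does not match
```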
A series of bits with a definite beginning and end constitutes a network frame, commonly called a packet. A proper data link layer packet has a checksum and network-specific addressing information in it, so that each host on the network can recognise it as a valid or invalid frame and determine whether the packet is addressed to it. The largest packet that can be sent through the data link layer defines the Maximum Transmission Unit (MTU) of the network. All hosts have at least one network interface, although any host connected to an Ethernet has at least two: the Ethernet interface and the loopback interface. The Ethernet interface handles the physical and logical connection to the outside world, while the loopback interface allows a host to send packets to itself. If a packet's destination is the local host, the data link layer chooses to send it via the loopback interface rather than the Ethernet interface. The loopback device simply turns the packet around and enqueues it at the bottom of the protocol stack as if it had just been received from the Ethernet.

Associated with the data link layer is a method for addressing hosts on the network. Every machine on an Ethernet has a unique, 48-bit address called its Ethernet address or Media Access Control (MAC) address. Vendors making network-ready equipment ensure that every machine in the world has a unique MAC address: 24-bit prefixes for MAC addresses are assigned to hardware vendors, and each vendor is responsible for the uniqueness of the lower 24 bits. MAC addresses are usually represented as colon-separated pairs of hex digits. Note that MAC addresses identify a host, and a host with multiple network interfaces may (or should) use the same MAC address on each. The packet's source and destination MAC addresses are part of the data link layer's protocol-specific header. Each protocol layer supports the notion of a broadcast, which is a packet or set of packets that must be sent to all hosts on the network. The broadcast MAC address is ff:ff:ff:ff:ff:ff. All network interfaces recognise this wildcard MAC address as a broadcast address and pass the packet up to a higher-level protocol handler.

Layer 3, the Network Layer, transmits the data and decides which route the data must follow through the internetwork. The network layer receives data packets from the layer above it at the transmitter and carries them across as many links and subsystems as needed to reach their destination. It defines the network packets and controls the routing and switching of data through the network, that is, the transmission of packets between stations. On the basis of certain information, this layer transmits the data sequentially from one station to another by the most economical route, both logically and physically. This layer also permits data units to be transmitted to other networks through special equipment called routers; routers are defined in this layer.

The Network Layer manages connections across the network and isolates the upper layer protocols from the details of the underlying network. The Internet Protocol (IP), which isolates the upper layers from the underlying network and handles the addressing and delivery of data, is usually described as TCP/IP's Network Layer; the best known protocol in this layer is IP. The network layer is the boundary of the communication subnet: above this layer the level of abstraction increases dramatically. For layer 3 and below, there is usually an upper limit on the size of packets.
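In caricature, the network layer's job at an end node reduces to a single delivery decision, anticipating the fuller routing discussion below. A toy sketch (the "network.node" address format and the single default router are invented for illustration):

```python
# Toy routing decision for an end node: addresses are "network.node" strings.
MY_NETWORK = "net1"
DEFAULT_ROUTER = "net1.router"

def next_hop(destination: str) -> str:
    """Deliver directly on the local network, otherwise hand off to a router."""
    network, _, _node = destination.partition(".")
    if network == MY_NETWORK:
        return destination     # the data link layer can deliver this directly
    return DEFAULT_ROUTER      # the router forwards it toward the right network

print(next_hop("net1.alice"))  # net1.alice -- same network, direct delivery
print(next_hop("net3.bob"))    # net1.router -- must be routed
```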
In broadcast networks, routing is very simple, so the network layer is thin or even non-existent. This is one reason the transport layer protocol TCP is so often combined with IP as TCP/IP. Only the smallest networks consist of a single, local network; the majority of networks must be subdivided. A network that consists of several network segments is frequently called an internetwork, or an internet, not to be confused with the Internet. These subdivisions may be planned to reduce traffic on network segments or to isolate remote networks connected by slower communication media. When networks are subdivided, it can no longer be assumed that messages will be delivered on the local network; a mechanism must be put in place to route messages from one network to another. Figure 33 shows the schematic of a single, local network. Figure 34 shows the schematic of a bridged network. Figure 35 shows the schematic of a subnetted network.

To deliver messages on an internetwork, each network must be uniquely identified by a network address. When it receives a message from the upper layers, the network layer adds a header to the message that includes the source and destination network addresses. This combination of data plus the network layer header is called a packet. The network address information is used to deliver a message to the correct network; after the message arrives on the correct network, the data link layer can use the node address to deliver the message to a specific node. Forwarding packets to the correct network is called routing, and the devices that route packets are called routers. An internetwork has two types of nodes:

End nodes: Provide user services. End nodes do use a network layer to add network address information to packets, but they do not perform routing. End nodes are sometimes called end systems or hosts.
Routers: Incorporate special mechanisms that perform routing. Because routing is a complex task, routers usually are dedicated devices that do not provide services to end users. Routers are sometimes called intermediate systems or gateways.

The network layer operates independently of the physical medium, which is a concern of the physical layer. Because routers are network layer devices, they can be used to forward packets between physically different networks; for example, a router can join an Ethernet to a Token Ring network. Routers are also often used to connect a local area network, such as Ethernet, to a wide area network, such as the Internet. Figure 36 shows a schematic of a router that joins an Ethernet to a Token Ring network.

Layer 4, the Transport Layer, guarantees that the receiver gets the data exactly as it was sent. In TCP/IP this function is performed by the Transmission Control Protocol (TCP). However, TCP/IP offers a second transport layer service, the User Datagram Protocol (UDP), which does not perform the end-to-end reliability checks. All network technologies set a maximum size for frames that can be sent on the network; Ethernet, for example, limits the size of the data field to 1500 bytes. This limit is necessary for two reasons. First, small frames improve network efficiency when many devices must share the network: if devices could transmit frames of unlimited size, they might monopolise the network for an excessive period of time, whereas with small frames, devices take turns at shorter intervals and are more likely to have ready access to the network. Second, with small frames, less data must be retransmitted to correct an error.
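The transport layer copes with this frame-size limit by fragmenting messages and reassembling them using sequence numbers, as the next paragraphs describe. A toy sketch of that mechanism, assuming an MTU-style limit and lossless (but possibly out-of-order) delivery; real transport protocols such as TCP add far more machinery:

```python
MTU = 1500  # maximum payload per fragment, mirroring Ethernet's data field

def fragment(message: bytes, mtu: int = MTU) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, chunk) fragments."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(fragments: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the original message even if fragments arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

msg = b"x" * 4000
frags = fragment(msg)
frags.reverse()                 # simulate out-of-order arrival
assert reassemble(frags) == msg
print(len(frags), "fragments")  # 3 fragments
```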
One responsibility of the transport layer is to divide messages into fragments that fit within the size limitations established by the network. At the receiving end, the transport layer reassembles the fragments to recover the original message. When messages are divided into multiple fragments, the possibility increases that fragments will not be received in the order sent. When the packets are received, the transport layer must therefore reassemble the message fragments in the correct order; to enable this, the transport layer includes a message sequence number in its header.

The transport layer is responsible for delivering messages from a specific process on one computer to the corresponding process on the destination computer. The transport layer assigns a Service Access Point (SAP) ID to each packet. The SAP ID is an address that identifies the process that originated the message, and it enables the transport layer of the receiving node to route the message to the appropriate process. Identifying messages from several processes so that they can be transmitted through the same network medium is called multiplexing; the procedure of recovering messages and directing them to the correct processes is called demultiplexing. Multiplexing is a common occurrence on networks, which are designed to enable many dialogues to share the same network medium. Because multiple protocols may be supported at any given layer, multiplexing and demultiplexing can occur at many layers.

Although the data link and network layers can be assigned responsibility for detecting errors in transmitted data, that responsibility is generally dedicated to the transport layer. Two general categories of error detection can be performed by the transport layer:

Reliable delivery: Does not mean that errors cannot occur, only that errors are detected if they do occur. Recovery from a detected error can take the form of simply notifying upper layer processes that the error occurred; often, however, the transport layer can request the retransmission of a packet for which an error was detected.
Unreliable delivery: Does not mean that errors are likely to occur, but rather that the transport layer does not check for errors. Because error checking takes time and reduces network performance, unreliable delivery is often preferred when a network is known to be highly reliable, which is the case with the majority of local area networks. Unreliable delivery generally is used when each packet contains a complete message, whereas reliable delivery is preferred when messages consist of a large number of packets. Unreliable delivery is often called datagram delivery, and independent packets transmitted in this way are frequently called datagrams.

Assuming that reliable delivery is always preferable is a common mistake. Unreliable delivery actually is preferable in at least two cases: when the network is fairly reliable and performance must be optimised, and when entire messages are contained in individual packets, so that the loss of a packet is not a critical problem.

Layer 5, the Session Layer, manages the sessions (connections) between co-operating applications. In TCP/IP, this function largely occurs in the transport layer, and the term session is not used; instead, the terms socket and port are used to describe the path over which co-operating applications communicate. This layer is not identifiable as a separate layer in the TCP/IP protocol hierarchy.
The Session Layer is responsible for dialogue control between nodes. A dialogue is a formal conversation in which two nodes agree to exchange data. Communication can take place in three dialogue modes:

Simplex: One node transmits exclusively, while another exclusively receives.
Half-duplex: Only one node may send at a given time, and nodes take turns transmitting.
Full-duplex: Nodes may transmit and receive simultaneously.

Sessions enable nodes to communicate in an organised manner. Each session has three phases:

Connection establishment: The nodes establish contact and negotiate the rules of communication, including the protocol to be used and the communication parameters.
Data transfer: The nodes engage in a dialogue to exchange data.
Connection release: When the nodes no longer need to communicate, they engage in an orderly release of the session.

Connection establishment and connection release represent extra overhead for the communication process. When devices are managed on a network, they send out periodic status reports that generally consist of single-frame messages. If all such messages were sent as part of a formal session, the connection establishment and release phases would transfer far more data than the message itself. In such situations, communicating using a connectionless approach is common: the sending node simply transmits its data and assumes the availability of the desired receiver.

A connection-oriented session approach is desirable for complex communication. Consider transmitting a large amount of data to another node. Without formal controls, a single error at any time during the transfer would require resending the entire file. After establishing a session, the sending and receiving nodes can agree on a checkpoint procedure; if an error occurs, the sending node must retransmit only the data sent since the previous checkpoint. The process of managing a complex activity like this is called activity management.

Layer 6, the Presentation Layer, exists because co-operating applications, in order to exchange data, must agree about how that data is represented. This layer is handled within the applications in TCP/IP. The Presentation Layer is responsible for presenting data to the application layer. In some cases, the presentation layer directly translates data from one format to another. For example, IBM mainframes encode characters in EBCDIC, whereas virtually all other computers use the ASCII encoding scheme; if data is being transmitted from an EBCDIC computer to an ASCII computer, the presentation layer might be responsible for translating between the different character sets. Numeric data is also represented quite differently on different computer architectures and must be converted when transferred between different machine types. A common technique used to improve data transfer is to convert all data to a standard format before transmitting it. This standard format probably is not the native data format of any computer; all computers can, however, be configured to retrieve standard-format data and convert it into their native data forms. Other functions that may correspond to the presentation layer are data encryption/decryption and compression/decompression.

Layer 7, the Application Layer, is the level of the protocol hierarchy where user-accessed network processes reside. A TCP/IP application is any network process that occurs above the transport layer. This includes all the processes that users directly interact with, as well as other processes at this level that users are not necessarily aware of.
The Application Layer provides the services that user applications need to communicate through the network. Here are several examples of application layer services:

Electronic mail transport.
Remote file access.
Remote job execution.
http://www.citap.com/documents/tcp-ip/tcpip006.htm
Apothems and Area

From Math Images

[Figure: the shortest distance from the center to the midpoint of one side in various regular polygons]

What is an Apothem?

Basic Description

An apothem extends from the center of a regular polygon to the midpoint of one of its sides. If you know the lengths of the apothem and one side of a regular polygon, you can easily find its area. If the regular polygon (see the hexagon in Figure 1) is divided into triangles, the triangles can be unrolled to form half of a rectangle. Let's start with a simple example. We will use the apothem of a hexagon to find its area, as shown in Figure 2.

- Start with two hexagons.
- Assume that each side is x units long. The perimeter of each hexagon is 6x units.
- Divide each hexagon into six equilateral triangles (illustrated below). Each equilateral triangle has sides of length x and a height equal to the apothem. Let's call this a.
- "Unwrap" each hexagon to get two rows of congruent equilateral triangles. These two rows fit together nicely in a rectangle.

The base of the rectangle is equal to 6x, so the area of the rectangle is equal to 6xa. But what is a? To find the area of the rectangle, and therefore the hexagon, we need the rectangle's height, which is equal to the apothem. Since the apothem cuts each equilateral triangle into two right triangles, we can use trigonometry to solve for a.

A More Mathematical Explanation

Note: understanding of this explanation requires: Geometry, Basic Algebra

General Formula for Finding Area

Consider a regular polygon with n sides that are each x units long. Let's find a formula for the apothem (a) that will help us to find the area. Since the figure is a regular polygon, the measure of one of its interior angles is (n - 2)·180°/n. How do we know this? Let's derive the general formula by looking at a regular hexagon. Draw a line from one vertex of the hexagon to the other 4, non-adjacent vertices. We see that this forms 4 triangles. Each triangle's angles must total 180 degrees, so the total of the internal angles of the triangles must be 4 · 180° = 720°. And since the hexagon has 6 equal angles, each angle must be equal to 720°/6 = 120°. We can generalize this method over all n-gons, divided into n - 2 triangles, to say that the measure of each angle in an n-sided polygon is equal to (n - 2)·180°/n.

Refer to Figure 5. We've connected center A to vertices B and C to create a triangle. We then dropped an altitude, creating line segment AD, whose length is the apothem, a. So we want to know as much about triangle ABD as possible. The first thing to get is the measure of angle ABC in terms of x, n, or a. We can do this because we already have angle MBC's measurement from the formula above: ABC is half of MBC, so we just multiply that expression by 1/2 to get (n - 2)·180°/(2n). So now we have the measure of angle ABC, expressed in terms of n. What else do we know about triangle ABD? Well, as line segment BD is half of BC, and BC's length is x, BD's length is 0.5x. We also know that angle ADB is a right angle, thus making ABD a right triangle. This is important because right triangles have special properties like the tangent ratio. We can (and will!) use this ratio with our other two pieces of information. The tangent of angle ABD is the ratio of the opposite side over the adjacent side, in this case a over BD. We have the measure of BD already, so we can set up an equation and solve for a! Look to Figure 6.
By using the equations in the red box (which summarize what we know about the triangle), we can substitute the values we know, in terms of x and n, for the parts of the triangle they represent. We now have a formula for the apothem:

a = (x/2) · tan((n - 2)·180° / (2n))

How can we check this formula? With a polygon whose apothem we can find in other ways. A regular hexagon can be formed from six triangles that are not only isosceles but also equilateral; this will be important later. For now, let's try it out. If we plug the values for a hexagon with side length 1 (for simplicity) into our formula, we get a = (1/2)·tan(60°) = √3/2, which simplifies to roughly 0.866. Let's return to the image above. Because we're using a hexagon, triangle ABC is equilateral, so triangle ABD is a 30-60-90 triangle. In a 30-60-90 triangle, if the shorter leg (BD) is s, the hypotenuse (AB) is 2s and the longer leg (a) is √3 times s. Our hexagon has side length 1, so 0.5x is just 0.5. If we multiply this by √3, we get around 0.866. Our formula works!

Using the Apothem to Solve the Wire Problem

A common optimization problem encountered in calculus classes involves cutting a wire in two and morphing each piece into a certain shape. The wire problem asks students to cut the wire so that the sum of the enclosed areas is maximized or minimized. Students are often asked to model the following scenarios:

- a circle and a square
- a circle and an equilateral triangle
- an equilateral triangle and a square

We will go through a minimization example of the wire problem below. Say we have a 2-foot piece of wire that will be bent into a square and an equilateral triangle, and we want to minimize the sum of the enclosed areas. Where should we cut the wire? We will let the wire for the square be x feet and the wire for the triangle be 2 - x feet. Each side of the square will be x/4 and each side of the triangle will be (1/3)(2 - x). The height of the triangle will be (√3/6)(2 - x), so the total area of both objects is

A = x²/16 + (√3/36)(2 - x)²

If you are unfamiliar with the wire problem, example 4 on this webpage provides a more detailed explanation of the above problem.

Pat Cade and Russell A. Gordon of Whitman College have demonstrated a unique way to solve the wire problem in their paper, An Apothem Apparently Appears. They found that in each minimization solution, one shape is inscribed in the other:

- the circle will be inscribed in the square
- the circle will be inscribed in the equilateral triangle
- the equilateral triangle and square can be inscribed in the same circle

While scenario 3 is slightly different, in all three scenarios the two shapes have the same apothem! This enables us to take some shortcuts in solving the wire problem. How did they do this? First, the perimeter P and area A of a regular n-gon with apothem r are given by

P = 2nr·tan(π/n) and A = nr²·tan(π/n) = (1/2)Pr

Say we have two shapes, one p-gon and one q-gon (it is possible that p = q), with apothems x and y respectively. The sum of the perimeters of the two shapes will be equal to L and the sum of the areas will be equal to Q, so

L = 2px·tan(π/p) + 2qy·tan(π/q)

If we take the derivative with respect to x (L is a fixed length), we get the following:

0 = 2p·tan(π/p) + 2q·tan(π/q)·(dy/dx)

Simplifying the above equation gives us:

dy/dx = -(p·tan(π/p)) / (q·tan(π/q))

Now let's look at the sum of the areas:

Q = px²·tan(π/p) + qy²·tan(π/q)

We want to take the derivative of the above equation for Q with respect to x:

dQ/dx = 2px·tan(π/p) + 2qy·tan(π/q)·(dy/dx)

Simplifying this, using the expression for dy/dx, gives us:

dQ/dx = 2p·tan(π/p)·(x - y)

Once again, we have a critical point. This time it is at x = y, that is, when the shapes have the same apothem. Knowing that the two apothems must be equal in order to minimize the area eliminates calculus from solving basic scenarios 1, 2, and 3.
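Both the interior-angle formula and the apothem formula are easy to verify numerically. A short Python sketch (the hexagon check reproduces the ≈0.866 value above, and the area function uses the half-perimeter-times-apothem identity):

```python
import math

def interior_angle(n: int) -> float:
    """Measure of one interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180 / n

def apothem(n: int, x: float) -> float:
    """a = (x/2) * tan of half the interior angle."""
    return (x / 2) * math.tan(math.radians(interior_angle(n) / 2))

def area(n: int, x: float) -> float:
    """Area = (1/2) * perimeter * apothem."""
    return 0.5 * n * x * apothem(n, x)

print(interior_angle(6))  # 120.0 degrees
print(apothem(6, 1))      # 0.8660... = sqrt(3)/2, matching the text
print(area(6, 1))         # 2.598..., the area of a unit hexagon
```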
The apothem of the circle is equal to its radius; let's call this r. The apothem of the square will also be equal to r, so each side of the square will be equal to 2r.

Circumference of circle = \( 2\pi r \)
Perimeter of square = \( 8r \)

The two perimeters must add up to the length of the wire, so \( 2\pi r + 8r = L \), which gives \( r = \frac{L}{2\pi + 8} \). Once we know r, we can use this value to calculate the circumference of the circle, and use that value to find the appropriate cutting point on the wire.

What if the wire problem involves a triangle? We can still eliminate calculus from the problem, but it is slightly trickier. Note that the apothem of the triangle (labeled in Figure 11 as r1) is now equal to 1/3 the height. The centroid of an equilateral triangle is the intersection of its medians, and in an equilateral triangle each median is also a height; the centroid lies one third of the way up from each side, so the apothem is one third of the height. Because the height divides the original triangle into two right triangles, we can use trigonometry to find the value of x, and then easily find the perimeter of the triangle. Next we need to set the sum of the perimeters equal to L1. All we need to know is the length of the wire (L1) and we can solve for r and find the optimum place to cut the wire (just like the problem above).
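To tie the formulas above together, here is a small computational sketch in Python (the function names are ours, not from the Math Images page or the Cade-Gordon paper). It computes the apothem of a regular n-gon, checks the hexagon value derived earlier, and applies the equal-apothem shortcut to the circle-and-square cut:

```python
import math

def apothem(n, side):
    # a = side / (2 * tan(pi/n)); algebraically the same as
    # (side/2) * tan((n-2) * 180 / (2n) degrees) from the derivation above.
    return side / (2.0 * math.tan(math.pi / n))

def polygon_area(n, side):
    # Area of a regular n-gon = (1/2) * perimeter * apothem.
    return 0.5 * (n * side) * apothem(n, side)

print(round(apothem(6, 1.0), 3))   # 0.866 -- matches the 30-60-90 check

# Wire problem, scenario 1 (circle + square), via the equal-apothem shortcut:
# both shapes share apothem r, so L = 2*pi*r + 8*r and r = L / (2*pi + 8).
L = 2.0                            # the 2-foot wire from the example
r = L / (2.0 * math.pi + 8.0)
circle_piece = 2.0 * math.pi * r   # cut this much wire for the circle
print(round(circle_piece, 4))      # ~0.8798 ft; the square gets the rest
```

For what it's worth, this matches the answer the standard calculus route gives for a 2-foot wire, \( x = \pi L/(\pi + 4) \approx 0.8798 \) feet for the circle.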
Circulation is the amount of force that pushes along a closed boundary or path. It's the total "push" you get when going along a path, such as a circle. A vector field is usually the source of the circulation. If you had a paper boat in a whirlpool, the circulation would be the amount of force that pushed it along as it went in a circle. The more circulation, the more pushing force you have.

Curl is simply the circulation per unit area, circulation density, or rate of rotation (amount of twisting at a single point). Imagine shrinking your whirlpool down smaller and smaller while keeping the force the same: you'll have a lot of power in a small area, so you'll have a large curl. If you widen the whirlpool while keeping the force the same as before, then you'll have a smaller curl. And of course, zero circulation means zero curl.

Circulation is the amount of "pushing" force along a path. Curl is the amount of pushing, twisting, or turning force when you shrink the path down to a single point.

Let's use water as an example. Suppose we have a flow of water and we want to determine if it has curl or not: is there any twisting or pushing force? To test this, we put a paddle wheel into the water and notice if it turns (the paddle is vertical, sticking out of the water like a revolving door -- not like a paddlewheel boat). If the paddle does turn, it means this field has curl at that point. If it doesn't turn, then there's no curl.

What does it really mean if the paddle turns? Well, it means the water is pushing harder on one side than the other, making it twist. The larger the difference, the more forceful the twist and the bigger the curl. Also, a turning paddle wheel indicates that the field is "uneven" and not symmetric; if the field were even, then it would push on all sides equally and the paddle wouldn't turn at all.

The fact that there is a "twist" means the field is not conservative (this has nothing to do with its political views). A conservative field is "fair" in the sense that the work needed to move from point A to point B, along any path, is the same. For example, consider a river: its field is conservative. Sure, you can get a free ride downstream, but then you have to do work to get back to your starting point. Or, you can do work to move upstream, and get a free ride back. Either way, the amount of work you "put in" is the same as what you get back.

However, in a field with curl (like a whirlpool), you can get a free ride by moving in the direction of the twist. In a whirlpool, you can get a free trip by moving with the current in a circle. If you fight the current and go the wrong way, you have to use energy with no free ride at all.

Conservative fields have zero curl: there are no free twists to push you along. Alternatively, if a field has curl, it is not conservative.

Gravity is another example of a conservative field. Technically, if you lift a rock and then let it fall, the energy you get from falling is the same as what you put in to lift the rock. Theoretically speaking, no energy was gained or lost in this transaction.

To be technical, curl is a vector, which means it has both a magnitude and a direction. The magnitude is simply the amount of twisting force at a point. The direction is a little more tricky: it's the orientation of the axis of your paddlewheel in order to get maximum rotation. In other words, it is the direction which will give you the most "free work" from the field. Imagine putting your paddlewheel sideways in the whirlpool - it wouldn't turn at all.
If you put it in the proper direction, it begins turning. But wait a minute -- aren't there two directions to get a twisting motion? Couldn't you just turn the paddlewheel "upside down" and get the maximum curl as well? Yep, you're right. By convention alone, if the paddle wheel is rotating counterclockwise, its curl vector points out of the page. This is a type of right-hand rule: make a fist with your right hand and stick out your thumb. If the circulation/pushing force follows the twisting of your fingers (counterclockwise), then the curl vector will be in the direction of your thumb.

Circulation is the integral of a vector field along a path -- you are adding up how much the field "pushes" you along a path. How do we find this? Well, we should expect some type of dot product, because we want to know the amount that one vector (the force) is pushing in the direction of another (the path). So, the two vectors we need are (1) the path vector and (2) the field vector at every point along the path.

If we have a function that defines the position at any time, \( r(t) \), we can take the time derivative to get the velocity at that position. The velocity vector is always in the direction of movement -- if you are moving from A to B, the velocity vector will be an arrow from A to B, i.e. your change in position or your direction of movement. So, we can use the velocity to get our direction. It's important to understand why we aren't using the position vector itself -- it tells us where we are, but not where we're going. We need to know our direction to see how much "push" we are getting: knowing your position in a river isn't important -- are you going upstream or downstream, and at what angle?

The force vector (2) is defined by the field we are in. No derivatives or other changes are necessary -- every point in the field has some force acting on it. So, our formula for circulation is:

Force at position \( r \): \( F(r) \)
Direction of travel at position \( r \): \( dr \)
Total pushing force = Circulation = \( \oint F(r) \cdot dr \)

Remember, velocity is simply the derivative of position \( r \), so \( dr \) is a vector giving us our direction. We integrate along the entire path and use the dot product to see how much pushing force is applied. We then sum up these "pushes" to get the total circulation.

Since curl is the circulation per unit area, we can take the circulation for a small area (letting the area shrink to 0). However, since curl is a vector, we need to give it a direction -- the direction is normal (perpendicular) to the surface with the vector field. The magnitude is the same as before: circulation/area. Recall that by convention (a bunch of people agreeing), counterclockwise circulation will give a curl pointing out of the page. Using these facts, we can create the formula for curl:

\( \text{curl} = \frac{\text{circulation}}{\text{area}} = \frac{\oint F \cdot dr}{|S|} \)

where \( S \) is the surface we are considering; the direction of the curl is the normal to the surface. You'll see fancier equations for curl where the surface shrinks to zero (such as in wikipedia), but recognize the basic intuition -- curl is the circulation per unit area.

You'll often see the curl of a field \( F \) written like this:

\( \nabla \times F \)

which is a cross-product of the gradient and the field \( F \). This has to do with how curl is actually computed, which will be material for another article (and probably in your textbook already -- see wikipedia for details).

If I have been successful, you should understand intuitively what circulation and curl mean, and how we got the formulae above.
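As a quick sanity check on these formulas, here is a short numeric sketch (Python with numpy; the whirlpool field chosen here is our own example, not one from the article). It integrates the field \( F(x, y) = (-y, x) \) around the unit circle and then divides by the enclosed area:

```python
import numpy as np

# The whirlpool field F(x, y) = (-y, x), integrated around the unit circle.
N = 10_000
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dt = 2.0 * np.pi / N

x, y = np.cos(t), np.sin(t)            # position on the path
dxdt, dydt = -np.sin(t), np.cos(t)     # velocity: the direction of travel

Fx, Fy = -y, x                         # the field evaluated on the path

# Circulation = closed integral of F . dr, done as a Riemann sum over t.
circulation = np.sum(Fx * dxdt + Fy * dydt) * dt

area = np.pi                           # area enclosed by the unit circle
print(round(circulation, 4))           # 6.2832  (= 2*pi)
print(round(circulation / area, 4))    # 2.0     (circulation per unit area)
```

The circulation comes out to \( 2\pi \), and circulation per unit area comes out to 2 -- which is exactly the curl of this field, matching the "circulation/area" picture.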
These formulas spring up naturally from our definition of circulation as "pushing force along a path" and curl as "pushing force/area". Math should be a tool for clearly stating what we already know. Understand the intuition and then tackle the complicated formulas. Happy math.

PS. Have some fun and check out this video of a famous whirlpool. Imagine the circulation on this (go on, imagine):

Other Posts In This Series
- Vector Calculus: Understanding Flux
- Vector Calculus: Understanding Divergence
- Vector Calculus: Understanding Circulation and Curl (This post)
- Vector Calculus: Understanding the Gradient
- Understanding Pythagorean Distance and the Gradient
- Vector Calculus: Understanding the Dot Product
Hydrocarbons are compounds of carbon and hydrogen only. Hydrocarbons are obtained mainly from petroleum, natural gas and coal. Examples of hydrocarbons are methane, ethane, acetylene and benzene. The important fuels like petrol, kerosene, coal gas, oil gas, CNG (compressed natural gas), LPG (liquefied petroleum gas) etc. are all hydrocarbons or their mixtures. The hydrocarbons are divided into two main categories, aliphatic and aromatic. The aliphatic hydrocarbons are further classified into saturated (alkanes), unsaturated (alkenes and alkynes) and alicyclic (cycloalkanes) hydrocarbons.

ALKANES AND CYCLOALKANES

They are also known as paraffins, derived from the Latin words 'parum' (little) and 'affinis' (affinity), that is, with little affinity. The name is justified as, under normal conditions, alkanes are inert towards reagents like acids, bases, oxidising and reducing agents. However, under drastic conditions, i.e., at high temperature and pressure, alkanes undergo different types of reactions like halogenation, nitration, sulphonation, pyrolysis etc.

Alkanes have a tetrahedral structure around each carbon atom, with average C–C and C–H bond lengths of 154 pm and 112 pm respectively. Alkanes form a homologous series with the general formula CnH2n+2, where n is the number of carbon atoms in the molecule. Methane, CH4, with n = 1 and ethane, C2H6, with n = 2 are the first two members of the series. Cycloalkanes are cyclic hydrocarbons which form a homologous series with the general formula CnH2n, whose first member is cyclopropane (with n = 3). The nomenclature was discussed in Unit 14.

1. Write the IUPAC names for the following structures.
2. Write the structures for the compounds having the given IUPAC names.
3. Write structures of the different chain isomers of alkanes corresponding to molecular formula C6H14. Also write their IUPAC names.
4. Write structures of the different isomeric alkyl groups corresponding to the molecular formula C5H11. Write IUPAC names of the alcohols obtained by attachment of –OH groups at different carbons of the chain.
5. Write the IUPAC names of the following compounds: (i) (CH3)3CCH2C(CH3)3 (ii) (CH3)2C(C2H5)2
6. Write the structural formulae of the following compounds: (i) 3,4,4,5-tetramethylheptane
7. Write the structures for each of the following compounds. Why are the given names incorrect? Write the correct IUPAC names.
8. Write IUPAC names of the following compounds.
9. For the following compounds, write structural formulae and IUPAC names for all possible isomers having the number of double or triple bonds as indicated: (a) C4H8 (one double bond) (b) C5H8 (one triple bond)

PREPARATION OF ALKANES

Alkanes can be prepared on a small scale in the laboratory by the following methods.

1. Hydrogenation of alkenes and alkynes

Hydrocarbons containing double or triple bonds can be hydrogenated to the corresponding alkanes. The addition of hydrogen to unsaturated compounds is called hydrogenation. Alkenes and alkynes are hydrogenated to alkanes by Sabatier and Senderens' reaction: the addition of hydrogen in the presence of a nickel catalyst at 570 K. For example, ethylene (C2H4), when mixed with hydrogen and passed over a nickel catalyst at 570 K, gives ethane (C2H6); acetylene (C2H2) on hydrogenation likewise yields ethane.

2. From alkyl halides

Alkyl halides are halogen derivatives of alkanes with the general formula R–X. They can be used to prepare alkanes by the following methods.
(a) Wurtz reaction: It involves the chemical reaction between an alkyl halide (usually bromides and iodides) and metallic sodium in the presence of dry ether. The product is the symmetrical alkane containing twice the number of carbon atoms present in the alkyl halide. In case two different alkyl halides are taken, in order to prepare an alkane with an odd number of carbon atoms, a mixture of three alkanes is produced. Therefore, this method is not suited to preparing alkanes containing an odd number of carbon atoms. Methane, moreover, cannot be prepared by this method.

(b) By the reduction of alkyl halides: It involves the reduction of alkyl halides (usually bromides or iodides) by suitable reducing agents such as H2/Pd, or nascent hydrogen obtained from Zn and HCl or from a Zn–Cu couple and ethanol. Reduction of an iodo-derivative can be carried out by the use of hydriodic acid (HI) in the presence of red phosphorus. Red phosphorus removes the iodine formed and pushes the reaction in the forward direction. If not removed, the iodine would convert ethane back into ethyl iodide.

(c) By the use of Grignard reagent: Alkyl halides, especially bromides and iodides, react with magnesium metal in diethyl ether to form alkyl magnesium halides (RMgX). Alkyl magnesium halides are known as Grignard reagents. In the Grignard reagent, the carbon–magnesium bond is highly polar, with the carbon atom being relatively more electronegative than magnesium. It reacts with water or with other compounds having active hydrogen (an H atom attached to N, O or F, or to a triple-bonded carbon, is known as an active hydrogen) to give an alkane. In these reactions the alkyl group of the alkyl magnesium halide gets converted into the alkane.

3. From fatty acids (monocarboxylic acids)

(i) By heating with soda lime: Sodium salts of carboxylic acids on heating with soda lime give alkanes. Soda lime is prepared by soaking quick lime (CaO) in a solution of sodium hydroxide (NaOH). For example, sodium acetate, which is the sodium salt of acetic acid (CH3COOH), on heating with soda lime gives methane. Sodium propanoate gives ethane. In this reaction CaO does not participate, but helps in the fusion of the reaction mixture.

(ii) Electrolytic method (Kolbe's method): An alkane is obtained when an aqueous solution of the sodium or potassium salt of a carboxylic acid is electrolysed. The reaction is known as a decarboxylation reaction (Kolbe's reaction).

2 RCOONa + 2 H2O → R–R + 2 CO2 + 2 NaOH + H2

For example, ethane is obtained when a solution of sodium acetate is electrolysed.

2 CH3COONa + 2 H2O → CH3CH3 + 2 CO2 + 2 NaOH + H2

At the anode the carboxylate ion (RCOO−) gives up one electron to produce the free radical RCOO·, which decomposes to give the alkyl radical and carbon dioxide. Two such radicals combine to yield the higher alkane. Normally these methods are useful for preparing alkanes containing an even number of carbon atoms.

10. How do you account for the formation of ethane during chlorination of methane?

PROPERTIES OF ALKANES

The following are the physical properties of alkanes.

State: The lower members in the alkane series are gases (methane, ethane, propane and butane). Alkanes containing 5 to 17 carbon atoms are liquids, while higher members are waxy solids. Thus alkanes change from the gaseous state to the solid state with an increase in molecular mass.

Non-polar nature: Alkanes are non-polar in nature. Therefore alkanes are soluble in non-polar solvents such as benzene, ether, chloroform, carbon tetrachloride etc.
Liquid alkanes themselves are good solvents for other non-polar substances.

Boiling points: Lower alkanes have lower boiling points. The boiling point gradually increases with increase in molecular mass; the increase is by 20° to 30° for each –CH2– unit added to the chain. The reason for the increase in boiling point is that alkane molecules, being non-polar, are attracted to each other only by weak van der Waals forces. In the case of alkanes with higher molecular masses, these forces are greater because of the larger surface area.

[Figure: Variation in boiling points of n-alkanes with increase in number of carbon atoms per molecule.]

Moreover, straight-chain alkanes have higher boiling points than the corresponding branched-chain hydrocarbons (isomers). The greater the branching in the chain, the lower the boiling point. This is due to the fact that branching of the chain makes the molecule compact and brings it closer to a sphere. This decreases the surface area, and the smaller magnitude of the inter-particle van der Waals forces leads to a decrease in boiling point. The isomeric pentanes illustrate this trend.

Melting points: The intermolecular forces in a crystal depend not only on the size of the molecules but also on how they are packed into the crystal. The rise in melting point with increase in the number of carbon atoms is therefore not as regular as it is for the boiling points of the liquids. On plotting the melting points of n-alkanes against the number of carbon atoms in the chain, a sawtooth pattern is obtained.

[Figure: Variation in melting points of n-alkanes with increase in number of carbon atoms.]

The increase in melting point is much greater in moving from an alkane having an odd number of carbon atoms to the next higher alkane than in moving from an alkane having an even number of carbon atoms to the next higher alkane. This implies that the molecules with an odd number of carbons do not fit well into the crystal lattice. The carbon chains in alkanes are zig-zag rather than straight, and in n-alkanes the terminal methyl groups lie on the same side when the number of carbon atoms is odd and on opposite sides when the number is even.

[Figure: Representation of alkanes with even and odd numbers of carbon atoms.]

The positions of the CH3 groups in odd- and even-carbon compounds appreciably affect the forces of interaction and hence the melting temperature. The energy required to break the crystal structure, and thus melt the alkane, is smaller in the case of alkanes having an odd number of carbon atoms, because their molecules do not fit well into the crystal lattice.

Colour: Alkanes are colourless gases, liquids or solids.

Density: The density of alkanes increases with increase in chain length.

Solubility: As 'like dissolves like', alkanes, being non-polar in character, are more soluble in non-polar solvents such as ether, carbon tetrachloride, etc. They are insoluble in water and other polar solvents.

Alkanes are quite unreactive; they do not react with the usual reagents. This inertness is due to the presence of strong carbon–carbon and carbon–hydrogen bonds, which are difficult to break. However, hydrogen can be substituted by other atoms or radicals under drastic reaction conditions such as high temperature or the presence of ultraviolet light or a catalyst.
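The regularities above (the general formula CnH2n+2, the rise in molecular mass along the series, and the gas/liquid/solid ranges) are easy to tabulate. Below is a minimal Python sketch; the atomic masses are standard values, and the state rule simply encodes the carbon-number ranges quoted in the text.

```python
# Sketch of the alkane homologous series C_nH_(2n+2).
ATOMIC_MASS = {"C": 12.011, "H": 1.008}

def alkane(n):
    h = 2 * n + 2                                  # hydrogens from the general formula
    mass = n * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"]
    # State ranges from the text: C1-C4 gases, C5-C17 liquids, C18+ waxy solids.
    state = "gas" if n <= 4 else ("liquid" if n <= 17 else "solid")
    return f"C{n}H{h}", round(mass, 2), state

for n in (1, 4, 5, 17, 18):
    print(alkane(n))
# e.g. ('C1H4', 16.04, 'gas') ... ('C18H38', 254.5, 'solid')
```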
Alkanes undergo few reactions, the most important being halogenation, oxidation and thermal cracking.

Alkanes react with chlorine or bromine in the presence of sunlight or ultraviolet light to give halogen-substituted products, i.e., alkyl halides containing one or more halogen atoms. Halogenation of alkanes is a substitution reaction: hydrogen atoms of the alkane are replaced by halogen atoms. Alkanes also undergo halogenation if a mixture of the alkane and the halogen is heated to 600 K. For example, when a mixture of methane and chlorine is exposed to diffused sunlight or ultraviolet light, or heated to 600 K, the hydrogen atoms are replaced by chlorine atoms, one after another. The nature of the products formed depends upon the amount of chlorine. If excess chlorine is used, the product contains larger amounts of carbon tetrachloride; smaller amounts of chlorine give more of the less chlorinated products (such as methyl chloride). In case the reaction is allowed to take place for a shorter duration of time, the less chlorinated products are formed in larger amounts.

The order of reactivity of halogens with alkanes is F2 > Cl2 > Br2 > I2. Fluorine reacts with a violent explosion. The reaction with chlorine is less vigorous than with fluorine, and with bromine less vigorous than with chlorine. The reaction with iodine is slow and reversible. Therefore iodination is carried out in the presence of an oxidising agent like iodic acid (HIO3) or nitric acid, which converts HI to I2 and thus pushes the reaction in the forward direction.

5 HI + HIO3 → 3 H2O + 3 I2
2 HNO3 + 2 HI → 2 H2O + 2 NO2 + I2

Fluorination takes place with almost explosive violence to produce fluorinated compounds. It also involves rupture of C–C bonds in the case of higher alkanes. The reaction can be made less violent by dilution of the fluorine with nitrogen.

The ease of substitution of a hydrogen atom by a halogen atom is: tertiary > secondary > primary.

Mechanism of Halogenation

Halogenation of alkanes is a free-radical chain substitution. The generally accepted mechanism for the chlorination of methane is given below.

In the first step, the Cl2 molecule breaks homolytically to give two Cl· free radicals. The step in which Cl–Cl bond homolysis occurs is called the initiation step. Each chlorine atom formed in the initiation step has seven valence electrons and is very reactive. Once formed, a chlorine atom abstracts a hydrogen atom from methane. Hydrogen chloride, one of the isolated products of the overall reaction, is formed in this step; a methyl radical is also formed.

The methyl radical formed in step 2 attacks a molecule of Cl2. Attack of the methyl radical on Cl2 gives chloromethane, the other product of the overall reaction, along with a chlorine atom, which then cycles back to step 2, repeating the process. Steps 2 and 3 are called the propagation steps of the reaction and, when added together, give the overall reaction. Since one initiation step can result in a large number of propagation steps, the overall process is called a free-radical chain reaction.

In actual practice, some side reactions take place in addition to the propagation steps, and these reduce the efficiency of the propagation steps. The chain sequence is interrupted whenever two odd-electron species (free radicals) combine to form an even-electron species. Reactions of this type are called chain terminating steps.
Some commonly observed chain terminating steps in the chlorination of methane are given below. It may be noted that termination steps are, in general, less likely to occur than the propagation steps: each termination step requires two very reactive free radicals to encounter each other in a medium that contains far greater quantities of other materials, such as methane and chlorine molecules, with which they can react. Thus, while some monochloromethane undoubtedly gets formed via direct combination of methyl radicals with chlorine atoms, most of it is formed in steps 2 and 3.

Nitration

Nitration is a substitution in which a hydrogen atom of an alkane is replaced by a nitro (–NO2) group. When alkanes having six or more carbon atoms are boiled with fuming nitric acid, a hydrogen atom is replaced by the nitro group. Lower alkanes cannot be nitrated by this method. However, when a mixture of such an alkane and nitric acid vapours is heated to a temperature of about 723-773 K in a sealed tube, one hydrogen atom of the alkane is substituted by a nitro group. The process is called vapour phase nitration and the compounds obtained are called nitroalkanes. Since the reaction is carried out at higher temperatures, rupture of carbon–carbon bonds occurs during the process. As a result, the reaction yields a mixture of different products. For example, nitration of propane results in the formation of a mixture of four nitro compounds, and nitration of ethane yields a mixture of nitroethane and nitromethane.

Sulphonation

When alkanes having six or more carbon atoms are heated with fuming sulphuric acid (H2S2O7), a hydrogen atom is replaced by the sulphonic acid group (–SO3H). The compound (RSO3H) thus formed is known as an alkane sulphonic acid. Lower alkanes, except those having a tertiary hydrogen, cannot be sulphonated.

Oxidation of alkanes gives different products under different conditions.

(a) COMBUSTION OR COMPLETE OXIDATION

Alkanes readily burn with a non-luminous flame in excess air or oxygen to form carbon dioxide and water, with the evolution of a large quantity of heat. This forms the basis of the use of alkanes as fuels.

(i) Burning in excess of oxygen: Alkanes burn with a blue flame in air or oxygen. In this process alkanes are oxidised to carbon dioxide and water.

CH4 + 2 O2 → CO2 + 2 H2O ; ΔH° = −890.4 kJ/mol
2 C2H6 + 7 O2 → 4 CO2 + 6 H2O ; ΔH° = −1580.0 kJ/mol

The cooking gas, often called LPG (liquefied petroleum gas), is a mixture of propane and butane.

(ii) Burning in a limited supply of oxygen: Alkanes burn with a sooty (smoky) flame. Incomplete oxidation of alkanes gives carbon black (a variety of carbon used in the manufacture of printing inks and tyres).

CH4 + O2 → C + 2 H2O

Catalytic oxidation: On controlled oxidation, alkanes give alcohols, which are further oxidised to aldehydes (or ketones), acids, and finally carbon dioxide and water.

CH4 → CH3OH → HCHO → HCOOH → CO2 + H2O
(methane → methanol → formaldehyde → formic acid)

Alkanes also undergo oxidation under special conditions to yield a variety of products. For example, alkanes having a tertiary hydrogen can be oxidised to alcohols in the presence of potassium permanganate.

FRAGMENTATION OF ALKANES

When higher members of the alkane family are heated to high temperatures (700-800 K), or to a slightly lower temperature in the presence of catalysts like alumina or silica, they break down to give alkanes and alkenes with smaller numbers of carbon atoms. The fragmentation of alkanes is also called pyrolysis or cracking.
The chemical reactions taking place in cracking are mostly free-radical reactions involving rupture of carbon–carbon and carbon–hydrogen bonds.

Alkanes isomerise to branched-chain alkanes when heated with anhydrous aluminium chloride and hydrogen chloride. Alkanes containing 6 or more carbon atoms, when heated at about 773 K under high pressure (10-20 atm) in the presence of catalysts like oxides of chromium, vanadium or molybdenum supported over alumina, get converted into aromatic hydrocarbons. This process is called aromatisation. Under these conditions, n-heptane yields toluene.

REACTION WITH STEAM

Methane reacts with steam at 1273 K in the presence of a nickel catalyst, forming carbon monoxide and hydrogen: CH4 + H2O → CO + 3 H2. The method is used for the industrial preparation of hydrogen.

CONFORMATIONS IN HYDROCARBONS

A single covalent bond is formed between two atoms by the axial overlap of half-filled atomic orbitals. In alkanes, the C–C bond is formed by axial overlapping of sp3 orbitals of adjacent carbon atoms. The electron distribution in the molecular orbital of an sp3-sp3 sigma bond is cylindrical around the internuclear axis. The single covalent bond therefore allows freedom of rotation about it because of its axial symmetry. As a result of rotation about a C–C bond, the molecule can have different spatial arrangements of the atoms attached to the carbon atoms. Such different spatial arrangements of atoms, which arise due to rotation around a single bond, are called conformers or rotational isomers (rotamers). The molecular geometry corresponding to a conformer is known as a conformation.

The rotation around a sigma bond is not completely free; it is in fact hindered by an energy barrier of 1 to 20 kJ mol-1. There is a possibility of weak repulsive interactions between bonds or between electron pairs of the bonds on adjacent carbon atoms. This type of repulsive interaction is referred to as torsional strain.

CONFORMATIONS OF ETHANE

In ethane (CH3CH3), the two carbon atoms are bonded by a single covalent bond and each carbon atom is further linked to three hydrogen atoms. If one of the carbon atoms is held still and the other is allowed to rotate around the C–C bond, a large number of different spatial arrangements of the hydrogen atoms of one carbon atom with respect to the other can be obtained. However, the basic structure of the molecule, including the C–C bond length and the H–C–H and H–C–C bond angles, will not change due to rotation.

Out of the infinite number of possible conformations of ethane, two represent the extremes. These are called the staggered conformation (a) and the eclipsed conformation (b). In the staggered conformation, the hydrogen atoms of the two carbon atoms are oriented so that they lie as far apart from one another as possible; they are staggered with respect to one another. In the eclipsed conformation, the hydrogen atoms of one carbon atom lie directly behind those of the other; hydrogen atoms of one carbon eclipse the hydrogen atoms of the other.

Conformations can be represented by sawhorse and Newman projections.

1. Sawhorse projection: In this projection the molecule is viewed along the axis of the model from above and to the right. The central C–C bond is drawn as a straight line, slightly tilted to the right for the sake of clarity, and somewhat elongated. The front carbon is shown as the lower left-hand carbon, whereas the rear carbon is shown as the upper right-hand carbon.
Each carbon has three lines corresponding to the three atoms/groups attached to it (H in the case of ethane).

[Figure: Sawhorse projections of ethane, staggered and eclipsed.]

2. Newman projection: Newman proposed simpler formulae for representing conformations, called Newman projection formulae. In a Newman projection, the two carbon atoms forming the σ-bond are represented by two circles, one behind the other, so that only the front carbon is seen. The hydrogen atoms attached to the front carbon are represented by C–H bonds drawn from the centre of the circle; the C–H bonds of the back carbon are drawn from the circumference of the circle.

[Figure: Newman projections of ethane, staggered and eclipsed.]

It may be noted that one conformation of ethane can be converted into the other by a rotation of 60° about the C–C bond. The infinite other conformations of ethane lying between the two extremes are called skew conformations.

Relative stabilities of the conformers of ethane

The conformers of ethane do not have the same stability. In the eclipsed conformation the hydrogen atoms eclipse each other, resulting in crowding, whereas in the staggered conformation the hydrogen atoms are as far apart as possible. The staggered conformation of ethane is more stable than the eclipsed conformation because, with the H atoms far apart, the magnitude of repulsion is smaller. Hence the order of stability follows the sequence: staggered > gauche (skew) > eclipsed.

The repulsive interaction between electron clouds, which affects the stability of a conformation, is called torsional strain. The magnitude of torsional strain depends upon the angle of rotation about the C–C bond; this angle is also called the dihedral angle or torsional angle. Of all the conformations of ethane, the staggered form has the least torsional strain and the eclipsed form the maximum. Thus it may be inferred that rotation around the C–C bond in ethane is not completely free. The difference in energy content of the staggered and eclipsed conformations is 12.5 kJ mol-1.

[Figure: Changes in energy during rotation about the C–C bond in ethane.]

The difference in the energy of the various conformations constitutes an energy barrier to rotation, but this barrier is not large enough to prevent rotation. Even at ordinary temperature the molecule possesses sufficient thermal and kinetic energy to overcome the barrier through molecular collisions. Thus conformations keep changing from one form to another very rapidly and cannot be isolated as separate conformers.

CONFORMATIONS OF PROPANE AND BUTANE

The Newman conformations of propane and butane are shown below.

[Figures: Newman projections of propane; Newman projections of butane (changing with dihedral angle).]

11. How many eclipsed conformations are possible in butane?

ALKENES

The unsaturated hydrocarbons containing a C=C bond and having the general formula CnH2n are called alkenes. The simplest member of the alkene family is ethene, C2H4, which contains 5 σ bonds and 1 π bond. The carbon-carbon double bond is made up of one σ bond and one π bond. The bond enthalpy of the C=C bond is 681 kJ mol-1, while the carbon-carbon bond enthalpy of ethane is 348 kJ mol-1.
As a result, the C=C bond length in ethene (134 pm) is shorter than the C–C bond length in ethane (154 pm). The presence of the pi (π) bond makes alkenes behave as sources of loosely held, mobile electrons. Therefore alkenes are attacked by reagents or compounds which are in search of electrons; such reagents are called electrophilic reagents. The presence of the weaker π-bond makes alkenes less stable than alkanes, and alkenes can be converted into single-bond compounds by combining with electrophilic reagents. The strength of the double bond as a whole (681 kJ mol-1) is nevertheless greater than that of the carbon-carbon single bond in ethane (348 kJ mol-1).

[Figures: Orbital picture of ethene depicting σ-bonds only; orbital picture of ethene showing formation of (a) the π-bond, (b) the π-cloud and (c) bond angles and bond lengths.]

ISOMERISM IN ALKENES

Alkenes generally show the types of isomerism given below:
· Position isomerism
· Chain isomerism
· Geometrical isomerism

1. Position isomerism: The first two members of the alkenes (ethene and propene) do not show this type of isomerism. However, butene exhibits position isomerism, as but-1-ene and but-2-ene differ in the position of the double bond.

2. Chain isomerism: Butene also shows chain isomerism, as isobutene has a branched chain.

3. Geometrical isomerism: Alkenes exhibit geometrical isomerism due to the restricted rotation around the C=C bond. This results in two possible arrangements of the groups attached to the two doubly bonded carbon atoms, as shown below for but-2-ene. This isomerism is called cis-trans isomerism. When the groups of similar nature are on the same side of the double bond, the isomer is designated cis; when they are on opposite sides, trans. The necessary and sufficient condition for geometrical isomerism is that the two groups attached to the same doubly bonded carbon must be different, i.e., olefins of the type abC=Cab, abC=Cax or abC=Cbx show geometrical isomerism.

When the two groups of highest priority are on the same side of the double bond, the isomer is designated Z (Zusammen, German for together), and when these groups are on opposite sides, the isomer is designated E (Entgegen, German for opposite). The priority of the groups is decided by the sequence rules given by Cahn, Ingold and Prelog. According to these rules, the atom having the higher atomic number gets higher priority. Thus, between carbon (atomic number 6) and oxygen (atomic number 8), oxygen gets priority over carbon. If the relative priority of two groups cannot be decided, i.e., their bonded atoms are the same, then the next atoms in the groups (and so on) are compared. Thus, between the groups methyl (–CH3) and ethyl (–CH2CH3), the latter gets priority, since the next atom in methyl is H while in ethyl it is C (carbon has a higher atomic number than hydrogen), as shown below.

There are two systems for naming alkenes.

1. The common system: The common names of the first few alkenes are derived from the corresponding alkanes by replacement of 'ane' by 'ylene', e.g., H2C=CH2 is ethylene.

2. IUPAC system: In this most commonly used system, the ending 'ane' of the corresponding alkane is replaced by 'ene'. The parent chain is taken to be the longest chain of carbon atoms containing the double bond; a somewhat shorter chain may be selected provided it contains the maximum number of double bonds present. The chain is then numbered, starting from the end nearest to the double bond.
The position of the double bond is indicated by the number of the carbon atom preceding the double bond. If there are two or more double bonds, the ending 'ane' is replaced by 'adiene', 'atriene' etc. The remaining rules are the same as those applicable to alkanes. The locant for the double bond may be written preceding the name of the parent alkene, following it, or within the name immediately before the suffix ene, diene or triene: all three names, i.e., 1,3-butadiene, butadiene-1,3 and buta-1,3-diene, are correct for butadiene.

12. Which of the following compounds will show cis-trans isomerism? (i) (CH3)2C=CHCH3 (ii) CH2=CCl2 (iii) C6H5CH=CHCH3 (iv) CH3CH=CBr(CH3)
13. Classify the following as Z or E isomers.
14. Write IUPAC names of the following compounds.
15. Calculate the number of σ and π bonds in the above structures (i - iv).
16. Draw cis and trans isomers of the following compounds. Also write their IUPAC names. (i) CHCl=CHCl (ii) C2H5C(CH3)=C(CH3)C2H5
17. Which of the following compounds will show cis-trans isomerism?

PREPARATION OF ALKENES

Some of the general methods of preparation of alkenes are:

1. From alkyl halides: Alkenes can be prepared from alkyl halides (preferably bromides or iodides) by treatment with an alcoholic solution of caustic potash (KOH) at about 353-363 K. The reaction is known as dehydrohalogenation of alkyl halides.

CH3CH2Br + KOH (alc.) → H2C=CH2 + KBr + H2O

Similarly, treatment of n-propyl bromide with an ethanolic solution of potassium hydroxide produces propene. This is an elimination reaction. The ease of dehydrohalogenation for the different halides is iodide > bromide > chloride, while for the carbon bearing the halogen it is tertiary > secondary > primary; i.e., a tertiary alkyl iodide is the most reactive.

2. From alcohols: Alkenes can be prepared by dehydration (removal of a water molecule) of alcohols. The two common methods for carrying out dehydration are to heat the alcohol with either alumina or a mineral acid such as phosphoric acid or concentrated sulphuric acid. In the dehydration reaction the OH group is lost from the α-carbon while an H atom is lost from the β-carbon, creating a double bond between the α- and β-carbons. This is an elimination reaction, i.e., a molecule of water is eliminated.

3. From dihalogen derivatives: Dihalogen derivatives are derivatives of alkanes containing two halogen atoms. Alkenes can be prepared from vicinal dihalogen derivatives (having halogen atoms on adjacent carbon atoms) by the action of zinc. The process is called dehalogenation of vicinal dihalides.

4. From alkynes: Alkynes on partial reduction with a calculated amount of dihydrogen, in the presence of palladised charcoal partially deactivated with poisons like sulphur compounds or quinoline, give alkenes. Partially deactivated palladised charcoal is known as Lindlar's catalyst. Alkenes thus obtained have cis geometry. However, alkynes on reduction with sodium in liquid ammonia form trans alkenes.

Physical properties of alkenes

Alkenes as a class have physical properties similar to those of alkanes.
(i) The first three members are gases, the next fourteen are liquids and the higher ones are solids.
(ii) They are lighter than water and insoluble in water, but soluble in organic solvents like benzene, ether etc.
(iii) Their boiling points (b.p.) increase with increase in the number of carbon atoms; for each added –CH2 group the b.p. rises by 20° to 30°. Their b.p.s are comparable to those of alkanes with the corresponding carbon skeleton.
(iv) Alkenes are weakly polar.
The π electrons of the double bond can easily be polarised; therefore their dipole moments are higher than those of alkanes.

(v) The dipole moments, melting points and boiling points of alkenes depend on the positions of the groups bonded to the two doubly bonded carbons. Thus cis-but-2-ene, with two methyl groups on the same side, has a small resultant dipole, while in trans-but-2-ene the bond moments cancel out. Being more symmetrical, the trans-isomer fits well into the crystalline lattice and therefore generally has a higher m.p. than the more polar cis-isomer. However, this is not a general rule, and there may be exceptions. The measurement of dipole moment is one method of assigning configuration to geometrical isomers.

Chemical properties of alkenes

Alkenes contain two types of bonds, viz. sigma and pi bonds. The pi electrons are loosely bound between the carbon atoms and are quite mobile. Hence electron-deficient reagents are attracted by the pi electrons and readily react with alkenes to give addition products. The presence of mobile (loosely held) pi electrons is responsible for the reactive nature of alkenes. The most important reactions of alkenes are electrophilic additions and free-radical additions. Other important common types of reactions of alkenes are oxidation and polymerisation.

A. Electrophilic addition

1. Addition of hydrogen halides: Hydrogen halides (HCl, HBr, HI) readily add to alkenes, forming alkyl halides. The order of reactivity of the hydrogen halides in this reaction is HI > HBr > HCl.

CH2=CH2 + HX → CH3CH2X (X = I, Br, Cl)

It is an electrophilic addition reaction and proceeds through the formation of carbocations. Addition of HX takes place in two steps: in step (i), addition of the proton (the electrophile) to the double bond is slow, while step (ii), addition of the nucleophile (X−) to the carbocation, is fast. Any addition in which the electrophile adds first is an electrophilic addition.

In the case of unsymmetrical alkenes the addition takes place according to Markovnikov's rule: the more electronegative part of the addendum adds to the carbon which carries the smaller number of hydrogen atoms. Thus a molecule of HBr adds to propene in such a way that the bromine adds to the central carbon atom, which has the smaller number of hydrogen atoms.

CH3CH=CH2 + HBr → CH3CHBrCH3

If we look at the structure of propene, it has one methyl group attached to one of the doubly bonded carbons. In this case there are two possibilities for the formation of the carbocation in step (i): the proton either adds to the terminal carbon (path 1) or to the central carbon (path 2). The two carbocations are (CH3)2C+H, a secondary (2°) carbocation, and H2C+CH2CH3, a primary (1°) carbocation. The secondary carbocation (CH3)2C+H is more stable than the primary carbocation H2C+CH2CH3; therefore the cation formed in step (i) will be CH3C+HCH3, and the nucleophile (negatively charged X−) will add to the central carbon. A tertiary (3°) carbonium ion is even more stable than secondary (2°) and primary (1°) carbocations. Markovnikov's rule can thus also be interpreted in terms of electronic interactions: "electrophilic addition to a carbon-carbon double bond involves the intermediate formation of the more stable carbocation".

The decreasing order of reactivity of alkenes towards electrophilic addition is:

R2C=CR'2 > R2C=CHR' > R2C=CH2 ≥ RCH=CHR > RCH=CH2 > CH2=CH2 > CH2=CHX

where R and R' are alkyl groups and X is a halogen.
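Markovnikov's rule, as stated above, is essentially a decision procedure: the halogen goes to the doubly bonded carbon that carries fewer hydrogens, i.e., the one that would give the more stable carbocation. The toy Python function below encodes just that bookkeeping; it is an illustration of the rule's logic, not real cheminformatics, and the carbon labels are our own.

```python
def markovnikov_site(h_on_c1, h_on_c2):
    """Toy model: given the H counts on the two sp2 carbons of a C=C bond,
    report which carbon the halide X ends up on (H goes to the other one)."""
    if h_on_c1 == h_on_c2:
        return "symmetrical alkene: both additions give the same product"
    return "X adds to C1" if h_on_c1 < h_on_c2 else "X adds to C2"

# Propene CH3-CH=CH2: the central carbon (call it C1) bears 1 H,
# the terminal carbon (C2) bears 2 H.
print(markovnikov_site(1, 2))   # X adds to C1 -> e.g. 2-bromopropane with HBr
```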
Free radical addition - peroxide effect

Markovnikov's rule is general, but not universal. In 1933 M. S. Kharasch discovered that the addition of HBr to unsymmetrical alkenes in the presence of an organic peroxide (R-O-O-R) takes a course opposite to that suggested by Markovnikov. This phenomenon of anti-Markovnikov addition of HBr in the presence of a peroxide is known as the peroxide effect. For example, when propylene reacts with HBr in the presence of a peroxide, the major product is n-propyl bromide, whereas in the absence of a peroxide the major product is isopropyl bromide. Remember that HCl and HI do not give anti-Markovnikov products in the presence of peroxides.

2. Addition of sulphuric acid: Alkenes react with sulphuric acid to produce alkyl hydrogen sulphates. Markovnikov's rule is followed in the case of unsymmetrical alkenes. Alkyl hydrogen sulphates on hydrolysis yield alcohols; the overall result is, in effect, Markovnikov addition of H2O (hydration) to the double bond. This method is used for the industrial preparation of alcohols.

3. Addition of halogens: Halogens (chlorine and bromine only) add at ordinary temperature, without exposure to UV light, to alkenes to give vicinal dihalides. The order of reactivity is fluorine > chlorine > bromine > iodine.

CH2=CH2 + Br2 → CH2BrCH2Br
(ethene + bromine (red) → 1,2-dibromoethane (colourless))

This is a test for unsaturation, since the bromine solution is decolourised.

4. Hydroboration-oxidation: Diborane adds to alkenes to give trialkylboranes, which on oxidation give alcohols. The net addition is that of a water molecule, but it follows anti-Markovnikov addition.

Alkenes can be oxidised by different reagents to give different products.

(a) Burning in excess of air or oxygen gives carbon dioxide and water. The reaction is exothermic.

CH2=CH2 + 3 O2 → 2 CO2 + 2 H2O

(b) Baeyer's reagent: Baeyer's reagent is a cold, dilute and weakly alkaline solution of KMnO4. It oxidises alkenes to diols: two hydroxyl groups are introduced where there is a double bond in the alkene. This reaction is called hydroxylation of the alkene.

(c) Hot alkaline KMnO4: Hot alkaline KMnO4 oxidises alkenes to carbonyl compounds. In case a hydrogen atom is attached to a carbon of the double bond, that hydrogen atom is replaced by a hydroxyl group, giving a carboxylic acid on oxidation with hot alkaline KMnO4. Thus, by identifying the products formed on oxidation, it is possible to fix the location of the double bond in the alkene molecule.

Ozonolysis is a method of locating unsaturation in a hydrocarbon. Ozone reacts with alkenes (and also with alkynes) to form ozonides. On reduction with zinc/water, the products formed are aldehydes or ketones or both. By identifying the products, it is possible to fix the position of the double bond in an alkene. If a hydrogen is attached to a carbon atom forming the double bond, an aldehyde results; otherwise ketones are formed.

18. What are the products obtained when the following molecules are subjected to ozonolysis? (i) 1-pentene (ii) 2-pentene (iii) 2-methyl-2-butene (iv) ethene
19. Write IUPAC names of the products obtained by addition of HBr to hex-1-ene (i) in the absence of peroxide and (ii) in the presence of peroxide.
20. Write IUPAC names of the products obtained by the ozonolysis of the following compounds: (i) pent-2-ene (ii) 3,4-dimethylhept-3-ene (iii) 2-ethylbut-1-ene (iv) 1-phenylbut-1-ene
21. An alkene 'A' on ozonolysis gives a mixture of ethanal and pentan-3-one.
Write the structure and IUPAC name of 'A'.

22. An alkene 'A' contains three C–C σ bonds, eight C–H σ bonds and one C–C π bond. 'A' on ozonolysis gives two moles of an aldehyde of molecular mass 44 u. Write the IUPAC name of 'A'.
23. Propanal and pentan-3-one are the ozonolysis products of an alkene. What is the structural formula of the alkene?

The addition of hydrogen is called hydrogenation. Alkenes add hydrogen to give alkanes when heated under pressure in the presence of a suitable catalyst such as finely divided nickel, platinum or palladium. In the absence of a suitable catalyst, the hydrogenation reaction is extremely slow.

Polymerisation is the process in which a large number of simple molecules combine, under suitable conditions, to form a large molecule known as a macromolecule or polymer. The simple molecules are known as monomers. Alkenes undergo polymerisation in the presence of catalysts. In this process one alkene molecule links to another, as represented by the following reaction of ethene:

2 CH2=CH2 → –CH2–CH2–CH2–CH2– or (–CH2–CH2–)2

When ethene is heated in the presence of traces of oxygen at about 500-675 K under pressure, 'n' molecules of ethene participate in the reaction:

n CH2=CH2 → (–CH2–CH2–)n
(ethene → polyethylene, or polythene)

A variety of polymers are obtained by using substituted ethenes in place of ethene.

Uses of various polymers: Polythene is used in making electrical insulators and laboratory articles like funnels, burettes, beakers etc. Polyvinyl chloride (PVC) is used in making plastic bottles, plastic syringes, raincoats, pipes etc. Polystyrene is used in making household goods, toys and models. Polytetrafluoroethylene, also known as Teflon, is inert towards the action of chemicals; it is used in making chemically resistant pipes and some surgical tubes. The high thermal stability and chemical inertness of Teflon make it advantageous in the manufacture of non-stick cooking utensils, where a thin layer of Teflon is coated on the interior of the vessel.

24. An alkene with molecular formula C7H14 gives propanone and butanal on ozonolysis. Write down its structural formula.

ALKYNES

The hydrocarbons having a carbon-carbon triple bond (C≡C) and the general formula CnH2n-2 are called alkynes. The simplest member of this class is ethyne, C2H2. Ethyne has a total of 3 σ bonds and 2 π bonds; the triple bond is made up of one σ and two π bonds. C2H2 is a linear molecule in which the C≡C bond has a strength of 823 kJ mol-1. It is stronger than the C=C bond of ethene (681 kJ mol-1) and the C–C bond of ethane (348 kJ mol-1).

In the IUPAC system the suffix -ane of the corresponding alkane is replaced by -yne. The remaining rules of nomenclature are the same as in the case of alkanes and alkenes. Ethyne and propyne have only one structure each, but there are two possible structures for butyne: (i) but-1-yne and (ii) but-2-yne. Since these two compounds differ in their structures due to the position of the triple bond, they are known as position isomers.

25. Write structures of the different isomers corresponding to the 5th member of the alkyne series. What type of isomerism is exhibited by the different pairs of isomers?

Structure of the triple bond

Ethyne is the simplest molecule of the alkyne series.

[Figure: Orbital picture of ethyne showing (a) sigma overlaps, (b) pi overlaps.]

Each carbon atom of ethyne has two sp hybridised orbitals. The carbon-carbon sigma (σ) bond is obtained by the head-on overlapping of two sp hybridised orbitals of the two carbon atoms.
The remaining sp hybridised orbital of each carbon atom undergoes overlapping along the internuclear axis with the 1s orbital of one of the two hydrogen atoms, forming two C–H sigma bonds. The H–C–C bond angle is 180°. Each carbon has two unhybridised p-orbitals, which are perpendicular to each other as well as to the plane of the C–C sigma bond. The 2p-orbitals of one carbon atom are parallel to the 2p-orbitals of the other; they undergo lateral or sideways overlapping to form two pi (π) bonds between the two carbon atoms. Thus the ethyne molecule consists of one C–C σ-bond, two C–H σ-bonds and two C–C π-bonds. The strength of the C≡C bond (bond enthalpy 823 kJ mol-1) is greater than those of the C=C bond (681 kJ mol-1) and the C–C bond (348 kJ mol-1), and the C≡C bond (120 pm) is shorter than the C–C bond (154 pm). The electron cloud between the two carbon atoms is cylindrically symmetrical about the internuclear axis. Thus ethyne is a linear molecule.

PREPARATION OF ALKYNES

1. From vicinal dihalides: Acetylene and its higher homologues can be prepared by treatment of vicinal dihalides with alcoholic alkali.

2. By dehalogenation of tetrahalides: Alkynes can be prepared from tetrahalides by the action of zinc.

3. Synthesis from carbon and hydrogen: Acetylene can be prepared by passing a stream of hydrogen through an electric arc struck between carbon electrodes.

4. By electrolysis of the potassium salt of fumaric acid.

5. Industrial preparation: Acetylene is obtained by the action of water on calcium carbide (CaC2). Calcium carbide is prepared by heating quick lime (CaO) with carbon at high temperature.

CaO + 3 C → CaC2 + CO
CaC2 + 2 H2O → H–C≡C–H + Ca(OH)2

6. Formation of higher alkynes: Higher alkynes can be prepared by the action of alkyl halides on sodium acetylide. Sodium acetylide can be obtained from acetylene by the action of sodamide.

H–C≡C–H + NaNH2 → H–C≡C–Na + NH3 (ethyne → sodium acetylide)
H–C≡C–Na + BrCH2CH3 → H–C≡C–CH2–CH3 + NaBr

Physical properties of alkynes

Alkynes have the following general properties.
1. State: The lower members of the alkyne series are gases, while the higher members are liquids and solids.
2. Colour: Alkynes are colourless.
3. Non-polar nature: Alkynes are non-polar in nature. Therefore alkynes are soluble in non-polar (organic) solvents such as benzene.

Alkynes are unsaturated compounds and therefore, like alkenes, they are quite reactive. The most common type of reaction of alkynes is addition; on addition, alkynes ultimately give saturated compounds. Alkynes also undergo oxidation and polymerisation.

1. Hydrogenation: In the presence of a catalyst, hydrogen adds to alkynes to ultimately give alkanes.

2. Addition of halogen acids: Halogen acids add to alkynes to give dihalides. The addition follows Markovnikov's rule.

3. Addition of halogens: Halogens add to alkynes to give halogen-substituted alkanes.

4. Hydration: Alkynes react with water in the presence of mercuric sulphate and sulphuric acid to form an aldehyde or a ketone. When acetylene is bubbled through 40% sulphuric acid in the presence of mercuric sulphate (HgSO4), acetaldehyde is obtained. The reaction can be considered as the addition of water to acetylene.

5. Reaction with alcohols, hydrogen cyanide and carboxylic acids (vinylation): Ethyne adds a molecule of alcohol in the presence of alkali to give a vinyl ether. With HCN, ethyne gives vinyl cyanide. Similarly, alkynes add acids in the presence of a Lewis acid catalyst or Hg2+ ions to give vinyl esters.
6. Oxidation: Alkynes can be oxidised to different products using different reagents and conditions of oxidation.

(i) Burning in air: Acetylene burns in air with a sooty flame, emitting a yellow light. For this reason it is used for illumination.

(ii) Burning in excess of air: Acetylene burns with a blue flame when burnt in excess of air or oxygen. A very high temperature (3000 K) is obtained by this method. Therefore, in the form of the oxy-acetylene flame, it is used for welding and cutting metals.

2 H–C≡C–H + 5 O2 → 4 CO2 + 2 H2O + heat

(iii) Degradation with KMnO4: The oxidation of alkynes with strongly alkaline potassium permanganate gives carboxylic acids, generally containing a smaller number of carbon atoms.

(iv) Ozonolysis: Alkynes react with ozone to give ozonides, which are decomposed by water to form diketones and hydrogen peroxide. The diketones are then oxidised by the hydrogen peroxide to carboxylic acids by cleavage of the carbon-carbon bonds.

7. Self-addition or polymerisation: Alkynes polymerise to give linear or cyclic compounds depending upon the temperature and the catalyst used. However, these polymers are different from the polymers of alkenes, as they are usually low molecular weight polymers. When acetylene is passed through a red-hot tube of iron or quartz, it trimerises to benzene. Similarly, propyne trimerises to form mesitylene. Polymerisation of acetylene produces the linear polymer polyacetylene. It is a high molecular weight conjugated polymer containing the repeating unit (–CH=CH–CH=CH–)n. Under proper conditions this material conducts electricity. Thin films of polyacetylene can be used as electrodes in batteries; such films are good conductors, and are lighter and cheaper than metal conductors.

26. How will you convert ethanoic acid into benzene?

ACIDIC NATURE OF ACETYLENE

Acetylene forms salt-like compounds because its hydrogen atoms are slightly acidic. Hydrogen atoms directly attached to the carbon atoms linked by the triple bond can be replaced by highly electropositive metals such as sodium, silver, copper etc. The salts of acetylenes are known as acetylides.

(i) Sodium acetylide is formed when acetylene is passed over molten sodium.

H–C≡C–H + Na → H–C≡C–Na + ½ H2 (ethyne → sodium acetylide)

Sodium acetylide is also formed when acetylene is treated with sodamide (NaNH2). Sodamide is obtained by dissolving sodium metal in liquid ammonia.

Na + NH3 → NaNH2 + ½ H2
NaNH2 + H–C≡C–Na → Na–C≡C–Na + NH3

Other alkynes react in a similar manner.

NaNH2 + CH3–C≡C–H → CH3–C≡C–Na + NH3

(ii) Silver acetylide is obtained as a white precipitate when acetylene is passed through ammoniacal silver nitrate solution (Tollens' reagent).

H–C≡C–H + 2 AgNO3 + 2 NH4OH → Ag–C≡C–Ag + 2 NH4NO3 + 2 H2O (silver acetylide, white ppt)

(iii) Copper acetylide is obtained as a red precipitate when acetylene is passed through an ammoniacal solution of cuprous chloride.

H–C≡C–H + Cu2Cl2 + 2 NH4OH → Cu–C≡C–Cu + 2 NH4Cl + 2 H2O (copper acetylide, red ppt)

Alkynes having an acidic hydrogen form insoluble silver acetylides when passed through ammoniacal silver nitrate (Tollens' reagent).

R–C≡C–H + Ag+ → R–C≡C–Ag + H+

An alkyne having no hydrogen atom attached to a triply bonded carbon does not form an acetylide, e.g., CH3–C≡C–CH3 (2-butyne), as it lacks an acidic (acetylenic) hydrogen. Thus salt formation is shown by those alkynes which contain a triple bond at the end of the molecule (i.e., when the triple bond is a terminal group).
Alkynes having a non-terminal triple bond do not yield acetylides. Therefore, this reaction is used for distinguishing terminal alkynes from non-terminal alkynes.
Cause of acidic character
Hydrogen attached to carbon through an sp-hybrid orbital is slightly acidic. The reason is that an sp-hybrid orbital has greater s-character than sp² or sp³ hybrid orbitals: the s-character in sp³, sp² and sp hybrid orbitals is 25%, 33.3% and 50% respectively. An s-orbital tends to keep electrons closer to the nucleus than a p-orbital, so the greater the share of the s-orbital in the hybrid orbital, the nearer the shared pair of electrons lies to the nucleus of the carbon atom. This makes sp-hybridised carbon more electronegative than sp²- or sp³-hybridised carbon. Thus the hydrogen attached through an sp-hybrid orbital acquires a slight positive charge and can be replaced by highly electropositive metals (such as sodium, copper and silver). The acidic character of hydrocarbons varies as: alkane < alkene < alkyne. However, alkynes are extremely weak acids: compared with carboxylic acids such as acetic acid, ethyne is about 10²⁰ times less acidic, and ethene is about 10⁴⁰ times less acidic.
Tests for alkanes, alkenes and alkynes
Alkanes, alkenes and alkynes can be distinguished from one another by the following tests. Alkanes do not decolourise bromine water, and Baeyer's reagent (alkaline KMnO4) remains unchanged on treatment with alkanes. Alkenes and alkynes decolourise bromine water; this is used to distinguish alkanes from unsaturated compounds such as alkenes and alkynes. Alkenes and alkynes also decolourise Baeyer's reagent, so this test too distinguishes unsaturated hydrocarbons from alkanes.
27. How will you separate propene from propyne ?
ALKADIENES
The name alkadiene is often shortened to diene. These are unsaturated hydrocarbons having two carbon–carbon double bonds per molecule. The general formula of alkadienes, CnH2n−2, is the same as that of alkynes; hence dienes are isomeric with alkynes.
Classification of dienes
The dienes are classified into three types depending upon the relative positions of the two double bonds.
1. Isolated dienes : The double bonds are separated by more than one single bond, e.g., penta-1,4-diene and hexa-1,5-diene.
2. Cumulated dienes : Double bonds between successive carbon atoms are called cumulated double bonds, e.g., propa-1,2-diene (allene) and buta-1,2-diene (methylallene).
3. Conjugated dienes : Dienes in which double bonds alternate with single bonds are called conjugated dienes, e.g., CH2=CH–CH=CH2 and CH3–CH=CH–CH=CH–CH3.
Relative stabilities of dienes
A conjugated diene is more stable than non-conjugated dienes. The relative order of stability is: conjugated diene > isolated diene > cumulated diene. The exceptional stability of conjugated dienes can be explained in terms of orbital structure. Consider buta-1,3-diene as an example of a conjugated diene. All four carbon atoms in buta-1,3-diene are sp² hybridised. The hybrid orbitals of each carbon atom are used for the formation of σ bonds, and each carbon atom also has an unhybridised p-orbital which is used for π bonding. The p-orbital of C-2 can overlap equally with the p-orbitals of C-1 and C-3; similarly, the p-orbital of C-3 can overlap equally with the p-orbitals of C-2 and C-4, as shown in the figure. As a result, the π-electrons in conjugated dienes are delocalised.
Delocalisation of π-electrons in conjugated dienes
The delocalisation of π-electrons in conjugated dienes makes them more stable, because the π-electrons now feel the simultaneous attraction of all four carbon nuclei. This type of delocalisation is not possible in isolated or cumulated dienes.
Non-conjugated dienes behave exactly like simple alkenes, except that the attacking reagent is consumed in twice the amount required for one double bond. But owing to the mutual interaction of the double bonds, i.e., the delocalisation of electrons, the properties of conjugated dienes are entirely different. 1,3-Butadiene and HBr taken in equimolar amounts yield two products, 3-bromobut-1-ene and 1-bromobut-2-ene, through 1,2- and 1,4-addition respectively. The intermediate carbocation, formed after addition of the electrophile H⁺, is resonance-stabilised, and the anion can add to it in two alternative ways (path a or path b) to yield the 1,2- and 1,4-addition products.
27. A conjugated alkadiene having molecular formula C13H22 on ozonolysis yielded ethyl methyl ketone and cyclohexanal. Identify the diene, write its structural formula and give its IUPAC name.
The term aromatic (from the Greek aroma, meaning fragrance) was first used for compounds having a pleasant odour, although their structures were not known. Now the term is used for a class of compounds having a characteristic stability despite being unsaturated. These may have one or more benzene rings (benzenoid) or may lack a benzene ring (non-benzenoid). Benzenoid compounds include benzene and its derivatives having aliphatic side chains (arenes), as well as polynuclear hydrocarbons, e.g., naphthalene and anthracene. The nomenclature of aromatic compounds was discussed in Unit 12.
AROMATIC HYDROCARBONS (ARENES)
Aromatic hydrocarbons or arenes are compounds of carbon and hydrogen with at least one benzene-type ring (a hexagonal ring of carbons) in their molecules; hydrocarbons of the benzene series contain one or more such rings.
STRUCTURE OF BENZENE
The molecular formula of benzene is C6H6. It contains eight hydrogen atoms fewer than the corresponding saturated parent hydrocarbon, hexane (C6H14). It took several years to assign a structural formula to benzene because of its peculiar properties. In 1865 Kekulé suggested the first structure of benzene. In this structure there is a hexagonal ring of carbon atoms arranged in a symmetrical manner, with each carbon atom carrying one hydrogen atom. The fourth valence of each carbon atom is satisfied by an alternating system of single and double bonds, as shown. This formula had several drawbacks, described below:
(i) The presence of three double bonds should make the molecule highly reactive towards addition reactions. Contrary to this, benzene behaves like a saturated hydrocarbon.
(ii) The carbon–carbon bond lengths in benzene should be 154 pm (C–C) and 134 pm (C=C), implying that the ring should not be a regular hexagon; but all carbon–carbon distances in benzene have been found to be 139 pm.
(iii) Finally, two isomers should result for a 1,2-disubstituted benzene, as shown in the figure. While the Kekulé formula could not explain the differences in properties between benzene and alkenes, Kekulé explained the absence of such isomers by postulating a rapid interchange in the position of the double bonds, as shown below.
Orbital picture of benzene
According to the orbital picture, each carbon atom in benzene is sp² hybridised, with three hybrid orbitals lying in one plane and oriented at angles of 120°. One hybrid orbital of each carbon overlaps axially with the 1s orbital of a hydrogen atom to form a C–H sigma bond; the other two hybrid orbitals overlap axially with similar orbitals of the two adjacent carbon atoms on either side to form C–C sigma bonds. The axial overlapping of hybrid orbitals to form C–H and C–C bonds is shown in the figure (sigma-bond formation in benzene). As is evident, the framework of carbon and hydrogen atoms is coplanar, with H–C–C and C–C–C bond angles of 120°. In addition, each carbon atom has one unhybridised p-orbital, its two lobes perpendicular to the plane of the hybrid orbitals. The unhybridised p-orbital on each carbon atom overlaps to a small extent with the p-orbitals of the two adjacent carbon atoms on either side to constitute pi bonds (side-wise overlapping of p-orbitals). The molecular orbitals containing the pi-electrons spread over the entire carbon skeleton (orbital picture of benzene). This spreading of pi-electrons, in the form of a ring of pi-electron density above and below the plane of the carbon atoms, is called delocalisation of pi-electrons. Delocalisation of the pi-electrons results in a decrease in energy and hence accounts for the stability of the benzene molecule. The delocalised structure of benzene accounts for the X-ray data (all C–C bond lengths equal) and for the absence of the type of isomerism shown in the figure above. Furthermore, molecular orbital theory predicts that cyclic molecules with alternating single and double bonds and (4n + 2) (where n = 0, 1, 2, …) electrons in the delocalised π-cloud are particularly stable and have chemical properties different from those of other unsaturated hydrocarbons.
Resonance structure of benzene
The structure of benzene can also be explained on the basis of resonance. Benzene may be assigned two structures, A and B, which have the same arrangement of atoms and differ only in the arrangement of electrons. Neither structure alone can explain all the properties of benzene. According to these structures, there should be three single bonds (bond length 154 pm) and three double bonds (bond length 134 pm) between carbon atoms in the benzene molecule; but in fact all the carbon–carbon bonds in benzene are equivalent, with a bond length of 139 pm. Structures A and B are known as resonating or canonical structures of benzene. The actual structure of benzene lies somewhere between A and B and may be represented as C, referred to as the resonance hybrid. To indicate two structures which are resonance forms of the same compound, a double-headed arrow is used, as shown in the figure above. The resonance hybrid is more stable than any of the contributing (canonical) structures. The difference between the energy of the most stable contributing structure and the energy of the resonance hybrid is known as the resonance energy. In the case of benzene, the resonance hybrid (the actual molecule) has 147 kJ/mol less energy than either A or B; thus the resonance energy of benzene is 147 kJ/mol.
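The size of the resonance energy can be estimated from enthalpies of hydrogenation. The sketch below uses approximate literature values (they are our own illustrative inputs, not data from this text), which is why the result comes out near 150 kJ/mol rather than exactly the 147 kJ/mol quoted above.

```python
# Approximate literature enthalpies of hydrogenation, in kJ/mol
# (illustrative values, not data from this text).
dH_cyclohexene = -120.0   # cyclohexene + H2 -> cyclohexane
dH_benzene = -208.0       # benzene + 3 H2 -> cyclohexane (measured)

# A hypothetical "cyclohexatriene" with three isolated C=C bonds
# would release three times the cyclohexene value on hydrogenation.
hypothetical_triene = 3 * dH_cyclohexene
resonance_energy = dH_benzene - hypothetical_triene
print(f"resonance stabilisation ~ {resonance_energy:.0f} kJ/mol")  # ~ 152
```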
It is this stabilisation due to resonance which is responsible for the aromatic character of benzene.
STRUCTURAL ISOMERISM IN ARENES
Benzene forms a number of mono-, di- or poly-substituted derivatives by replacement of one, two or more hydrogen atoms of the ring by other monovalent atoms or groups. In many cases these derivatives exist in two or more isomeric forms, as discussed below. The benzene molecule is symmetrical, and the six carbon atoms, as well as the six hydrogen atoms, occupy equivalent positions. If one hydrogen atom is substituted by a monovalent group or radical (say a methyl group), the resulting mono-substitution product exists in one form only: because the six hydrogen positions are equivalent, the position assigned to the substituent does not matter. Thus there is only one compound of formula C6H5X, where X is a monovalent group. The various positions in a monosubstituted derivative, however, are not equivalent with respect to the position already occupied by the substituent. Taking the position of the substituent as number 1, a close examination of the structures reveals that:
(i) positions 2 and 6 are equivalent and are called ortho (o-) with respect to position 1;
(ii) positions 3 and 5 are equivalent and are called meta (m-) with respect to position 1;
(iii) position 4 is called para (p-) with respect to position 1.
For example, in the case of dimethylbenzene, (CH3)2C6H4, commonly known as xylene, there can be three xylenes depending upon the positions of the methyl groups, as shown. Besides the three possible dimethylbenzenes, a fourth isomer, ethylbenzene, is also known. In the case of naphthalene, even monosubstituted compounds display positional isomerism, as in 1-methyl- and 2-methylnaphthalene.
AROMATICITY OR AROMATIC CHARACTER
The term aromatic was first used for a group of compounds having a pleasant odour. These compounds have properties quite different from those of aliphatic compounds; the set of these properties is called aromatic character or aromaticity. Some typical properties of aromatic compounds are:
· They are highly unsaturated compounds, but do not give addition reactions easily.
· They give electrophilic substitution reactions very easily.
· They are cyclic compounds containing five-, six- or seven-membered rings.
· Their molecules are flat (planar).
· They are quite stable compounds.
The aromaticity of benzene is considered to be due to the presence of six delocalised π-electrons. The modern theory of aromaticity was given by Erich Hückel in 1931. According to this theory, for a compound to exhibit aromaticity it must have:
· delocalisation of the π-electrons of the ring;
· planarity of the molecule: to permit sufficient or total delocalisation of the π-electrons, the ring must be planar to allow cyclic overlap of the p-orbitals.
HÜCKEL RULE or (4n + 2) RULE
This rule states that for a compound to exhibit aromatic character, it should have a conjugated, planar, cyclic system containing (4n + 2) (where n = 0, 1, 2, 3, …) delocalised π-electrons forming a cyclic cloud above and below the plane of the molecule. This is known as the Hückel rule of (4n + 2) π-electrons. Benzene, naphthalene, anthracene and phenanthrene are aromatic, as they contain (4n + 2), i.e., 6, 10 and 14, π-electrons in a conjugated cyclic array.
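The electron-count part of the rule is easy to check mechanically. Here is a minimal Python sketch (the function name and example counts are our own; planarity and cyclic conjugation must still be checked separately):

```python
def satisfies_huckel_count(pi_electrons: int) -> bool:
    """Electron-count part of Huckel's rule: aromatic systems hold
    4n + 2 pi electrons for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# benzene (6), naphthalene (10), anthracene/phenanthrene (14) -> True;
# 4n counts such as 4 and 8 (e.g. planar cyclooctatetraene) -> False.
for count in (4, 6, 8, 10, 14):
    print(count, satisfies_huckel_count(count))
```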
Cyclopentadiene and cyclooctatetraene are non-aromatic: instead of (4n + 2) π-electrons they have 4n π-electrons, and moreover they are non-planar.
28. Predict which of the following systems would be aromatic and which would not.
PREPARATION OF BENZENE AND ITS HOMOLOGUES
1. From alkynes : Alkynes polymerise at high temperatures to yield arenes; e.g., benzene is obtained from ethyne.
2. Decarboxylation of aromatic acids : In the laboratory, benzene is prepared by heating sodium benzoate with soda lime.
3. Reduction of benzenediazonium salts : In the presence of hypophosphorous acid, benzenediazonium chloride is converted to benzene (the diazo group is replaced by H).
4. Friedel–Crafts reaction : Benzene yields an alkylbenzene on treatment with an alkyl halide in the presence of anhydrous aluminium chloride.
5. Wurtz–Fittig reaction : Arenes can be obtained by the action of sodium metal on a mixture of an aryl halide and an alkyl halide in ether.
6. From Grignard reagents : Arenes can be prepared by reacting a Grignard reagent with an alkyl halide.
Physical properties of arenes
Aromatic hydrocarbons are usually colourless, insoluble in water but soluble in organic solvents. They are inflammable, burn with a sooty flame and have a characteristic odour. They are toxic and carcinogenic in nature. Their boiling points increase with increasing molecular mass. Aromatic hydrocarbons are unsaturated cyclic compounds, also known as arenes. Benzene and its homologues are good solvents, dissolving a large number of compounds: the π-electron cloud makes them polarisable to some extent, so that even polar molecules are attracted towards them.
Chemical properties of arenes
The relative inertness of benzene and its homologues is due to the presence of the π-electron clouds above and below the plane of the ring of carbon atoms. Nucleophilic species (electron-rich species such as Cl⁻, OH⁻, CN⁻, etc.) cannot attack the benzene ring, owing to repulsion between the negative charge on the nucleophile and the delocalised π-electron clouds. However, electrophiles (such as H⁺, Cl⁺, NO2⁺, etc.) can attack the benzene ring, and for this reason hydrogen atoms of benzene and its homologues can be replaced by nitro (–NO2), halogen (–X), sulphonic acid (–SO3H) and other groups. Arenes also undergo a few addition reactions under more drastic conditions, such as increased concentration of the reagent, high pressure, high temperature, or the presence of a catalyst.
ELECTROPHILIC SUBSTITUTION REACTIONS
A chemical reaction which involves the replacement of an atom or group of atoms in an organic molecule by some other atom or group, without changing the structure of the remaining part of the molecule, is called a substitution reaction. The new group which finds a place in the molecule is called the substituent, and the product formed is referred to as the substitution product.
MECHANISM OF ELECTROPHILIC AROMATIC SUBSTITUTION
According to experimental evidence, SE (S = substitution; E = electrophilic) reactions proceed via the following three steps:
· generation of the electrophile;
· formation of a carbocation intermediate;
· removal of a proton from the carbocation intermediate.
(a) Generation of the electrophile E⁺ : During chlorination, alkylation and acylation of benzene, anhydrous AlCl3, being a Lewis acid, helps in the generation of the electrophile Cl⁺, R⁺ or RCO⁺ (acylium ion) respectively, by combining with the attacking reagent.
In the case of nitration, the electrophile, the nitronium ion NO2⁺, is produced by transfer of a proton from sulphuric acid to nitric acid: HNO3 + 2 H2SO4 → NO2⁺ + H3O⁺ + 2 HSO4⁻. In the generation of the nitronium ion, sulphuric acid serves as an acid and nitric acid as a base; it is a simple acid–base equilibrium.
(b) Formation of the carbocation (arenium ion) : Attack of the electrophile results in the formation of the σ-complex or arenium ion, in which one of the carbons is sp³ hybridised. The arenium ion is stabilised by resonance. The σ-complex loses its aromatic character because the delocalisation of electrons stops at the sp³-hybridised carbon.
(c) Removal of the proton : To restore the aromatic character, the σ-complex releases a proton from the sp³-hybridised carbon on attack by [AlCl4]⁻ (in the case of halogenation, alkylation and acylation) or [HSO4]⁻ (in the case of nitration).
1. Halogenation : Chlorine or bromine reacts with benzene in the presence of Lewis acids such as ferric chloride or the aluminium salts of the corresponding halogens, which act as catalysts, to give chlorobenzene or bromobenzene. The function of Lewis acids such as AlCl3, FeCl3 and FeBr3 is to carry the halogen to the aromatic hydrocarbon; hence they are called halogen carriers. Fluorine is very reactive, so this method is not suited for the preparation of fluorobenzene. Iodobenzene is obtained by heating benzene with iodine in the presence of an oxidising agent such as nitric acid or mercuric oxide.
2. Sulphonation : The replacement of a hydrogen atom of benzene by a sulphonic acid group (–SO3H) is known as sulphonation. The reaction is carried out by treating the arene with concentrated sulphuric acid containing dissolved sulphur trioxide, or with chlorosulphonic acid.
3. Nitration : A nitro group (–NO2) can be introduced into the benzene ring using nitrating mixture (a mixture of concentrated sulphuric acid and concentrated nitric acid).
4. Friedel–Crafts reaction : On treatment with an alkyl halide or an acid halide (acyl halide) in the presence of anhydrous aluminium chloride as catalyst, benzene forms an alkyl- or acylbenzene. The reaction is known as Friedel–Crafts alkylation or acylation respectively.
Oxidation of arenes
Arenes can be oxidised to different products depending upon their structure and the conditions of the reaction.
(i) Combustion : Aromatic hydrocarbons burn with a luminous, sooty flame.
2 C6H6 + 15 O2 → 12 CO2 + 6 H2O
C6H5CH3 + 9 O2 → 7 CO2 + 4 H2O
(ii) Side-chain oxidation : Arenes with a side chain, on oxidation with strong oxidising agents such as alkaline potassium permanganate, give carboxylic acids; a hydrocarbon without a side chain remains unaffected. If there are several carbon atoms in the side chain, all of them are oxidised to carbon dioxide except the one directly attached to the aromatic ring.
(iii) Catalytic oxidation : Benzene can be catalytically oxidised to an aliphatic compound, maleic anhydride. When benzene is heated with excess of air at 800 K in the presence of vanadium pentoxide (V2O5), maleic anhydride is obtained.
Addition reactions of arenes
1. Addition of chlorine : Benzene and its homologues undergo some addition reactions, as alkenes and alkynes do; however, extremely drastic conditions are required. For example, benzene can be chlorinated in the presence of sunlight to form benzene hexachloride (BHC).
2. Addition of hydrogen : Hydrogen adds to benzene when heated to 475 K under pressure in the presence of a nickel catalyst, giving cyclohexane (hexahydrobenzene). Similarly, toluene gives methylcyclohexane (hexahydrotoluene).
3. Addition of ozone : Benzene reacts slowly with ozone to form a triozonide; the triozonide on hydrolysis with water gives glyoxal.
ORIENTATION IN THE BENZENE RING
The arrangement of substituents on the benzene ring is termed orientation. The term is often used for the process of determining the positions of substituents on the ring. When a first substituent enters the benzene ring, it can occupy any position, all positions being equivalent. The position taken by a second substituent, however, depends upon the nature of the substituent already present in the ring; in other words, the substituent already present directs the incoming substituent. This is called the directive influence of substituents. Depending upon their directive influence, substituents are classified into the following groups.
Ortho- and para-directing groups
Substituents that direct an incoming substituent to the positions ortho (o-) and para (p-) relative to their own are called ortho- and para-directing groups: they permit electrophilic substitution at the ortho and para positions. If the substituent S is ortho- and para-directing, the overall reaction gives a mixture of the o- and p-disubstituted products, their proportion depending upon the reaction conditions. For example, nitration of toluene gives a mixture of ortho- and para-nitrotoluene. The ortho- and para-directing substituents, arranged in decreasing order of directing influence, are shown in the accompanying series. Ortho- and para-directing substituents have an electron-donating influence on the aromatic ring: they increase the electron density on the ring and are therefore called activating groups (activators). Activating groups enhance the rate of reaction.
Ortho- and para-directing but deactivating groups
The only exception to the above rule is the halogen substituents. The halogens are ortho- and para-directing but deactivate the benzene ring relative to benzene itself.
Meta-directing groups
Substituents that direct an incoming substituent to the position meta (m-) relative to their own are called meta-directing groups: they permit electrophilic substitution at the meta position. If S is a meta-directing group, the overall reaction gives mainly the m-disubstituted product. The meta-directing substituents, arranged in decreasing order of directing influence, are shown in the accompanying series. Meta-directing substituents have an electron-withdrawing influence on the benzene ring: they decrease the overall electron density on the ring and are therefore called deactivating groups (deactivators). Deactivating groups lower the rate of reaction. For example, nitration of benzoic acid produces m-nitrobenzoic acid.
Examples of the directive influence of substituents
It is seen that:
· the –OH (phenolic) group is o- and p-directing;
· the –NO2 group is meta-directing.
The o- and p-directing influence of the –OH group in phenol
The phenolic (–OH) group has lone pairs of electrons on its oxygen atom. These lone pairs interact with the π-electrons of the benzene ring and give rise to various resonance forms, as shown. These show that the electron density is concentrated at the two ortho positions and the para position, so electrophilic substitution takes place at these positions.
The m-directing influence of the –NO2 group
The –NO2 group is a meta-directing and deactivating group. The more electronegative O atoms in the –NO2 group withdraw electron density from the N atom, placing a slight positive charge on it. This atom in turn pulls electrons from the benzene ring, giving rise to the resonance forms of nitrobenzene shown in the figure. These resonance structures show that the electron density at the o- and p-positions is reduced; as a result, electrophilic attack occurs at the meta position.
The o- and p-directing but deactivating effect of the halogen atoms
Halogen atoms are o- and p-directing but deactivating substituents. This anomalous behaviour of a ring halogen is due to two opposing effects operating at the same time. On the one hand, a halogen substituent, owing to its strong electronegativity, pulls electrons from the ring by the inductive effect, which decreases the electron density on the ring. On the other hand, owing to the availability of its lone pairs of electrons, the halogen atom releases electrons to the ring, giving rise to the resonance structures of chlorobenzene shown in the figure. Because of poor 2p(C)–3p(Cl) overlap, however, the halogen is not a good electron-releasing substituent. As a result, the inductive effect predominates and the benzene ring is deactivated, despite the o- and p-directing effect of the halogen.
POLYNUCLEAR AROMATIC HYDROCARBONS
Polynuclear aromatic hydrocarbons contain more than one benzene ring and have two carbon atoms shared between two or three aromatic rings. Anthracene and phenanthrene have two pairs of shared carbon atoms, each pair being shared by a different pair of rings. Coal tar is the main source of naphthalene (6–10%), anthracene (1%) and phenanthrene. Naphthalene is obtained from the middle-oil fraction (b.p. 443–503 K) of coal tar by cooling. Crude naphthalene is washed successively with dilute H2SO4, sodium hydroxide and water; the dried sample is then purified by sublimation. Naphthalene is used in moth-balls to protect woollens, and also for the manufacture of phthalic anhydride, 2-naphthol, dyes, etc. Anthracene is obtained by cooling the green-oil fraction (b.p. 543–633 K) of coal-tar distillation. Crude anthracene is purified by washing successively with solvent naphtha (which removes phenanthrene) and pyridine. Phenanthrene is obtained by evaporation from its solution in solvent naphtha (obtained during the purification of crude anthracene crystals).
CARCINOGENICITY AND TOXICITY
All chemical substances are believed to be harmful in one way or another. After it was found that prolonged exposure to coal tar could cause skin cancer, the high-boiling fluorescent fraction of coal tar was shown to be responsible; this fraction was found to contain 1,2-benzanthracene. Later, more compounds were found to be carcinogenic. The names and formulae of some carcinogenic compounds are given below. There is as yet no rule for predicting the carcinogenic activity of a hydrocarbon or its derivatives; however, the presence of groups like –CH3, –OH, –CN and –OCH3 has been found to influence the carcinogenic activity of compounds.
29. Bring out the following conversions : (i) Methane to ethane (ii) Ethyne to methane (iii) Ethane to ethene (iv) Ethane to butane
30. Suggest a method to separate a mixture of ethane, ethene and ethyne.
31. Describe a method to distinguish between ethene and ethyne.
32.
What is a Grignard reagent ? How is it prepared ?
33. How is propane prepared from a Grignard reagent ?
34. Arrange the following in increasing order of boiling points : hexane, heptane, 2-methylpentane, 2,2-dimethylpentane.
35. How would you obtain : (i) ethene from ethanol (ii) ethyne from ethylene dibromide
36. Write equations for the preparation of propyne from ethyne.
37. How are the following conversions carried out ? (i) propene to propane (ii) ethyne to ethane (iii) ethanol to ethene (iv) sodium acetate to methane (v) benzene to nitrobenzene
38. How will you convert benzene to : (v) benzoic acid
39. What happens when : (i) 1-bromopropane is heated with alcoholic KOH (ii) 2-propanol is heated with alumina at 630 K (iii) benzene is treated with a mixture of conc. sulphuric acid and nitric acid (iv) ethene is treated with a cold alkaline solution of KMnO4
40. Write chemical equations for the combustion reactions of the following hydrocarbons : (i) butane (ii) pentene (iii) hexyne (iv) toluene
41. Draw the cis and trans structures of hex-2-ene. Which isomer will have the higher b.p., and why ?
42. Why is benzene extraordinarily stable though it contains three double bonds ?
43. Explain why the following systems are not aromatic.
44. How will you convert benzene into : (i) p-nitrobromobenzene (ii) m-nitrochlorobenzene (iii) p-nitrotoluene (iv) acetophenone
45. In the alkane H3CCH2C(CH3)2CH2CH(CH3)2, identify the 1°, 2° and 3° carbon atoms and give the number of H atoms bonded to each of them.
46. Addition of HBr to propene yields 2-bromopropane, while in the presence of benzoyl peroxide the same reaction yields 1-bromopropane. Explain, and give the mechanism.
47. Write the ozonolysis products of 1,2-dimethylbenzene (o-xylene). How does the result support the Kekulé structure of benzene ?
48. Arrange benzene, n-hexane and ethyne in decreasing order of acidic behaviour, and give the reason for this behaviour.
49. Why does benzene undergo electrophilic substitution reactions easily and nucleophilic substitutions with difficulty ?
50. How would you convert the following compounds into benzene ? (i) ethyne (ii) ethane (iii) hexane
51. Write the structures of all the alkenes which on hydrogenation give 2-methylbutane.
52. Arrange the following sets of compounds in order of decreasing reactivity towards an electrophile E⁺ : (a) chlorobenzene, 2,4-dinitrochlorobenzene (b) toluene, p-CH3C6H4–NO2, p-O2N–C6H4–NO2
53. Out of benzene, m-dinitrobenzene and toluene, which will undergo nitration most easily, and why ?
54. Suggest the name of a Lewis acid other than anhydrous aluminium chloride which can be used during the ethylation of benzene.
55. Why is the Wurtz reaction not preferred for the preparation of alkanes containing an odd number of carbon atoms ? Illustrate your answer with one example.
REVIEW QUESTIONS
1. Why does carbon form a large number of compounds ? Write a note on the optical isomers of tartaric acid.
2. Bring out the following conversions : (i) Methane to ethane (ii) Ethane to ethene (iii) Ethyne to methane (iv) Ethane to butane
3. Suggest a method to separate a mixture of ethane, ethene and ethyne.
4. Describe a method to distinguish between ethene and ethyne.
5. What is a Grignard reagent ? How is it prepared ?
6. How is propane prepared from a Grignard reagent ?
7. Arrange the following in increasing order of boiling points : hexane, heptane, 2-methylpentane, 2,2-dimethylpentane.
8. How would you obtain : (i) ethene from ethanol (ii) ethyne from ethylene dibromide
9. Write equations for the preparation of propyne from ethyne.
10.
How are the following conversions carried out ? (i) propene to propane (ii) ethyne to ethane (iii) ethanol to ethene (iv) sodium acetate to methane (v) benzene to nitrobenzene
11. How will you convert benzene to : (iii) benzoic acid
12. What happens when : (i) 1-bromopropane is heated with alcoholic KOH (ii) 2-propanol is heated with alumina at 630 K (iii) benzene is treated with a mixture of conc. sulphuric acid and nitric acid (iv) ethene is treated with a cold alkaline solution of KMnO4
13. What is coal ?
14. What are the natural sources of hydrocarbons ?
15. What is petroleum ?
16. Compare the composition of coal and petroleum.
17. Give the origin of coal.
18. Give the origin of petroleum.
19. How are aromatic hydrocarbons obtained from coal ?
20. What do you understand by pyrolysis ? Discuss it with reference to coal.
21. Give a brief account of petroleum refining. Name the various useful products obtained from it.
22. What is straight-run gasoline ? Describe the principle of obtaining straight-run gasoline from petroleum.
23. Explain the following processes :
24. Explain the term 'knocking'. What is the relationship between the structure of a hydrocarbon and knocking ?
25. Explain the term 'knocking'. A sample of petrol produces the same knocking as a mixture containing 30% n-heptane and 70% iso-octane. What is the octane number of the sample ?
26. Describe different methods to improve the quality of a fuel used in a gasoline engine.
27. How can you obtain aliphatic hydrocarbons from coal ?
28. Describe two methods by which petroleum can be obtained artificially from coal.
29. Write a note on synthetic petrol.
30. Discuss various methods for the laboratory preparation of alkanes.
31. How can you obtain alkanes from (i) unsaturated hydrocarbons (ii) alkyl halides (iii) carboxylic acids ?
32. Describe the laboratory preparation of methane.
33. Describe various methods for the laboratory preparation of alkenes.
34. How can alkenes be prepared from : (i) alcohols (ii) alkyl halides ?
35. Describe the laboratory preparation of ethene.
36. Give the methods of preparation of ethyne.
37. Describe the laboratory preparation of acetylene.
38. Give the general methods of preparation of higher alkynes.
39. Give the important chemical reactions of alkanes.
40. Give the important chemical properties of alkenes.
41. Give the important chemical properties of alkynes.
42. Give important uses of (i) ethene (ii) ethyne
43. Write a note on the halogenation of alkanes.
44. Give the addition reactions of benzene.
45. Discuss the halogenation of benzene.
46. What is sulphonation ? Discuss the sulphonation of benzene.
47. Describe a method to distinguish between ethene and ethyne.
48. Why does ethene decolourise bromine water, while ethane does not ?
49. Give one example each of (i) an addition reaction of chlorine (ii) a substitution reaction of chlorine.
50. Explain the term 'polymerisation' with two examples.
51. Give one chemical equation in which chlorine reacts with a hydrocarbon by substitution.
52. What is a Grignard reagent ? How is propane prepared from a Grignard reagent ?
53. Give the reason for the following : (i) The boiling points of hydrocarbons decrease with increase in branching. (ii) Unsaturated compounds undergo addition reactions.
54. Account for the following : (i) Boiling points of alkenes and alkynes are higher than those of the corresponding alkanes. (ii) Hydrocarbons with an odd number of carbon atoms have lower melting points than those with an even number of carbon atoms.
(iii) The melting point of the cis-isomer of an alkene is lower than that of the trans-isomer.
55. Why does acetylene behave like a very weak acid ?
56. Acetylene is acidic in character. Give the reason.
57. Write a reaction of acetylene which shows its acidic character.
58. What is Baeyer's test ? Give its utility.
59. What happens when bromine water is treated with : (i) ethylene (ii) acetylene ? What is the utility of this reaction ?
60. Explain the terms (i) acetylation (ii) alkylation.
61. Write notes on (i) the Friedel–Crafts reaction (ii) addition reactions (iii) Markovnikov's rule.
62. Explain the following with examples : (i) Wurtz reaction (ii) Kolbe's electrolytic method.
63. Write the equations for the preparation of propyne from acetylene.
64. How are the following conversions carried out ? (i) propene to propane (ii) acetylene to ethane (iii) ethanol to ethene (iv) methane to ethane (v) propene to 2-bromopropane (vi) methane to tetrachloromethane
65. How will you convert : (i) acetylene into acetaldehyde (ii) methane to ethane (iii) acetic acid to methane (iv) ethene to ethanol (v) acetylene to acetic acid (vi) 2-chlorobutane to 1-butene (vii) ethylene into glyoxal (viii) phenol to benzene
66. How will you convert : (i) ethane to ethene (ii) ethyl iodide to ethane (iii) ethyl iodide into butane (iv) ethyl alcohol into ethyne (v) propyl chloride to propene
67. What happens when : (i) 1-bromopropane is heated with alcoholic KOH (ii) 2-propanol is heated with alumina at 630 K (iii) benzene is treated with a mixture of conc. sulphuric acid and nitric acid (iv) ethene is treated with cold alkaline potassium permanganate solution (v) benzene is treated with bromine in the presence of aluminium bromide as catalyst
68. What happens when : (i) ethyl bromide is treated with alcoholic KOH (ii) ethylene dibromide is treated with zinc dust (iii) propene reacts with water in the presence of a mineral acid (iv) ethylene is passed through alkaline KMnO4, and acetylene is hydrated in the presence of mercuric sulphate and dilute sulphuric acid (v) ethanol is heated with conc. sulphuric acid (vi) ethene is passed through Baeyer's reagent (vii) ethyne is passed through ammoniacal silver nitrate (viii) methyl bromide is treated with sodium (ix) ethanol is treated with HI (x) ozone is passed into ethylene in an organic solvent
69. Use Markovnikov's rule to predict the product of the reaction of (i) HCl with CH3C(Cl)=CH2 (ii) HCl with CH3CH=C(CH3)2
70. Write chemical equations describing the general mechanism of electrophilic substitution in the benzene ring.
71. Why are o- and p-directing groups called activating groups, whereas m-directing groups are deactivating groups ?
72. Name the polynuclear hydrocarbons obtained from coal tar.
73. Name some carcinogenic compounds.
74. What effect does branching of an alkane chain have on its boiling point ?
75. What happens when : (i) 1-bromopropane is heated with alcoholic KOH (ii) 2-propanol is heated with alumina at 630 K (iii) benzene is treated with a mixture of conc. sulphuric acid and nitric acid (iv) ethene is treated with a cold alkaline solution of KMnO4
As we have seen, vector addition and scalar multiplication can produce new vectors out of old ones. For instance, we produce the vector A + B by adding the two vectors A and B. Of course, there is nothing that makes A + B at all distinct as a vector from A or B: all three have magnitudes and directions. And just as A + B can be construed as the sum of two other vectors, so can A and B. In problems involving vector addition, it's often convenient to break a vector down into two components, that is, two vectors whose sum is the vector in question.
We often graph vectors in an xy-coordinate system, where we can talk about vectors in purely numerical terms. For instance, the vector (3,4) is the vector whose tail is at the origin and whose tip is at the point (3,4) on the coordinate plane. From these coordinates, you can use the Pythagorean theorem to calculate that the vector's magnitude is 5, and trigonometry to calculate that its direction is about 53.1° above the x-axis.
Two vectors of particular note are (1,0), the vector of magnitude 1 that points along the x-axis, and (0,1), the vector of magnitude 1 that points along the y-axis. These are called the basis vectors and are written with the special hat notation: î and ĵ. The basis vectors are important because you can express any vector as a sum of multiples of the two basis vectors. For instance, the vector (3,4) that we discussed above—call it A—can be expressed as the vector sum 3î + 4ĵ. The number 3 is called the "x-component" of A and the number 4 is called the "y-component" of A. In this book, we will use subscripts to denote vector components. For example, the x-component of A is written Ax and the y-component of vector A is written Ay. The direction of a vector can be expressed in terms of the angle by which it is rotated counterclockwise from the x-axis.
The process of finding a vector's components is known as "resolving," "decomposing," or "breaking down" a vector. Let's take the example, illustrated above, of a vector A with a magnitude of A and a direction θ above the x-axis. Because the vector and its components form a right triangle, we can use trigonometry to solve this problem. Applying the trigonometric definitions of cosine and sine, the components are Ax = A cos θ and Ay = A sin θ.
Vector Addition Using Components
Vector decomposition is particularly useful when you're called upon to add two vectors that are neither parallel nor perpendicular. In such a case, you will want to resolve one vector into components that run parallel and perpendicular to the other vector.
Two ropes are tied to a box on a frictionless surface. One rope pulls due east with a force of 2.0 N. The second rope pulls with a force of 4.0 N at an angle 30° west of north, as shown in the diagram. What is the total force acting on the box?
To solve this problem, we need to resolve the force on the second rope into its northward and westward components. Because the force is directed 30° west of north, its northward component is 4.0 cos 30° ≈ 3.5 N and its westward component is 4.0 sin 30° = 2.0 N. Since the eastward component of the first force is also 2.0 N, the eastward and westward components cancel one another out. The resultant force is directed due north, with a magnitude of approximately 3.5 N. You can justify this answer by using the parallelogram method. If you fill out the half-completed parallelogram formed by the two vectors in the diagram above, you will find that the opposite corner of the parallelogram is directly above the corner made by the tails of those two vectors.
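A quick numerical check of this example in Python (a sketch; the axis convention and function name are our own — east is +x, north is +y, and angles are measured counterclockwise from east):

```python
import math

def components(magnitude: float, angle_deg: float) -> tuple[float, float]:
    """Return the (x, y) components of a vector given its magnitude and
    its direction measured counterclockwise from the +x (east) axis."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

f1 = components(2.0, 0.0)    # 2.0 N due east
f2 = components(4.0, 120.0)  # 4.0 N at 30 deg west of north = 120 deg from east
fx, fy = f1[0] + f2[0], f1[1] + f2[1]
print(f"resultant components: ({fx:.2f}, {fy:.2f}) N")   # (0.00, 3.46)
print(f"magnitude: {math.hypot(fx, fy):.2f} N due north")  # 3.46
```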
Specifically, if c times b equals a, written c × b = a, then a divided by b equals c. In this expression, a is called the dividend, b the divisor and c the quotient.
Conceptually, division describes two distinct but related settings. Partitioning involves taking a set of size a and forming b groups that are equal in size; the size of each group formed, c, is the quotient of a and b. Quotative division involves taking a set of size a and forming groups of size b; the number of groups of this size that can be formed, c, is the quotient of a and b.
Division is taught to students in elementary school after multiplication, and it usually occasions the introduction of non-integers. Unlike addition, subtraction, and multiplication, which always produce integers when applied to integers, division of two integers does not always yield an integer.
Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a vinculum or fraction bar, between them. On a single line, a divided by b is written a/b, using a slash. A typographical variation halfway between these two forms uses a solidus (fraction slash) but elevates the dividend and lowers the divisor. Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (then typically called the numerator and denominator), and there is no implication that the division needs to be evaluated further. A second way to show division is to use the obelus (or division sign), common in arithmetic, in this manner: a ÷ b = c.
Modern computers compute division by methods that are faster than long division: see Division (digital). A person can calculate division with an abacus by repeatedly placing the dividend on the abacus and subtracting the divisor at the offset of each digit in the result, counting the number of subtractions possible at each offset.
In modular arithmetic, some numbers have a multiplicative inverse with respect to the modulus; in such a case, division can be calculated by multiplication. This approach is useful in computers that do not have a fast division instruction.
Division of integers is not closed. Apart from division by zero being undefined, the quotient is not an integer unless the dividend is an integer multiple of the divisor; for example, 26 cannot be divided by 10 to give an integer. In such a case there are four possible approaches: (1) say that 26 cannot be divided by 10, so that division becomes a partial function; (2) give an approximate answer as a real number, 2.6; (3) give the answer as an integer quotient and a remainder, 2 remainder 6; (4) give the integer quotient alone as the answer, 2.
One has to be careful when performing division of integers in a computer program. Some programming languages, such as C, treat division of integers as in case 4 above, so the answer is an integer. Other languages, such as MATLAB, first convert the integers to real numbers and then give a real number as the answer, as in case 2 above. Names and symbols used for integer division include div, /, \, and %. Definitions vary regarding integer division when the quotient is negative: rounding may be toward zero or toward minus infinity. Divisibility rules can sometimes be used to quickly determine whether one integer divides exactly into another.
The result of dividing two rational numbers is another rational number when the divisor is not 0. Division of two rational numbers p/q and r/s may be defined by (p/q) ÷ (r/s) = (p × s)/(q × r), in which all four quantities are integers and only p may be 0. This definition ensures that division is the inverse operation of multiplication.
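A short Python sketch of the two claims above — language-dependent integer division and exact rational division. Python's // floors toward minus infinity, whereas C's integer / truncates toward zero (noted in a comment, since only Python runs here):

```python
from fractions import Fraction

# Integer division: Python's // rounds toward minus infinity;
# in C, -26 / 10 would instead truncate toward zero and give -2.
print(26 // 10, -26 // 10)   # 2 -3
print(26 % 10, -26 % 10)     # 6  4

# Rational division as multiplication by the reciprocal:
# (p/q) / (r/s) = (p*s)/(q*r); multiplying back recovers the dividend.
p_q, r_s = Fraction(3, 4), Fraction(5, 6)
quotient = p_q / r_s
print(quotient, quotient * r_s == p_q)   # 9/10 True
```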
Division of two real numbers results in another real number when the divisor is not 0. It is defined such that a/b = c if and only if a = cb and b ≠ 0.
Division of any number by zero (where the divisor is zero) is not defined: zero added to itself, no matter how many times, always results in a sum of zero, so no quotient can multiply back to a nonzero dividend. Entry of such an expression into most calculators results in an error message.
Dividing two complex numbers results in another complex number when the divisor is not 0. It may be defined as (p + iq) ÷ (r + is) = (pr + qs)/(r² + s²) + i (qr − ps)/(r² + s²), in which all four quantities p, q, r, s are real numbers, and r and s may not both be 0. Division for complex numbers expressed in polar form is simpler than the definition above: (p e^(iq)) ÷ (r e^(is)) = (p/r) e^(i(q − s)), where again all four quantities are real numbers and r may not be 0.
For non-commuting quantities one can define a left division A\B = A⁻¹B and a right division A/B = AB⁻¹. Note that with left and right division defined this way, A/(BC) is in general not the same as (A/B)/C, nor is (AB)\C the same as A\(B\C); but A/(BC) = (A/C)/B and (AB)\C = B\(A\C).
In abstract algebras such as matrix algebras and quaternion algebras, fractions such as a/b are typically defined as a · b⁻¹ or b⁻¹ · a, where b is presumed to be an invertible element (i.e., there exists a multiplicative inverse b⁻¹ such that b b⁻¹ = b⁻¹ b = 1, where 1 is the multiplicative identity). In an integral domain, where such inverses may not exist, division can still be performed on equations of the form ab = ac or ba = ca by left or right cancellation, respectively. More generally, "division" in the sense of cancellation can be done in any ring with the aforementioned cancellation properties. If such a ring is finite, then by an application of the pigeonhole principle every nonzero element of the ring is invertible, so division by any nonzero element is possible in such a ring. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular, Bott periodicity can be used to show that any real normed division algebra must be isomorphic to the real numbers R, the complex numbers C, the quaternions H, or the octonions O.
In calculus, there is no general method to integrate the quotient of two functions.
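The rectangular and polar division formulas above can be checked against Python's built-in complex arithmetic; the variable names and example values below are our own:

```python
import cmath

a, b = 3 + 4j, 1 - 2j

# Rectangular form: (pr + qs)/(r^2 + s^2) + i(qr - ps)/(r^2 + s^2)
denom = b.real**2 + b.imag**2
manual = complex((a.real * b.real + a.imag * b.imag) / denom,
                 (a.imag * b.real - a.real * b.imag) / denom)

# Polar form: divide the moduli, subtract the arguments
ra, pa = cmath.polar(a)
rb, pb = cmath.polar(b)
polar = cmath.rect(ra / rb, pa - pb)

print(a / b, manual, polar)   # all ~ (-1+2j)
```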
Pioneering the Space Frontier
The Report of the National Commission on Space
The Universe is the true home of humankind. Our Sun is only one star among the billions that comprise the Milky Way Galaxy, which in turn is only one of the billions of galaxies astronomers have already identified. Our beautiful planet is one of nine in our Solar System. Understanding our Universe is not just an intellectual and philosophical quest, but a requirement for continuing to live in, and care for, our tiny part of it, as our species expands outward into the Solar System.
Beginnings: The Big Bang, the Universe, and Galaxies
In the 1920s scientists concluded that the Universe is expanding from its origin in an enormous explosion—the "Big Bang"—10 to 20 billion years ago. In the future, it could either expand forever or slow down and then collapse under its own weight. Recent studies in particle physics suggest that the Universe will expand forever, but at an ever-decreasing rate. For this to be true, there must be about ten times more matter in the Universe than has ever been observed; this "hidden matter" may be in the form of invisible particles that are predicted to exist by modern theory. Thus, in addition to normal galaxies there may be invisible "shadow galaxies" scattered throughout space.
The Universe contains 100 billion or more galaxies, each containing billions of stars. Our Galaxy, the Milky Way, is the home of a trillion suns, many of which resemble our own. Each of these stars, when it is formed from an interstellar cloud, is endowed with hydrogen and helium—simple chemical elements that formed in the Big Bang—as well as with heavier elements that formed in previous stellar furnaces. Hydrogen is consumed by a thermonuclear fire in the star's core, producing heavier chemical elements that accumulate there before becoming fuel for new, higher-temperature burning. In massive stars, the process continues until the element iron dominates the core. No further energy-producing nuclear reactions are then possible, so the core collapses suddenly under its own weight, producing vast amounts of energy in a stellar explosion known as a supernova. The temperature in a supernova is so high that virtually all of the chemical elements produced are flung into space, where they are available to become incorporated in later generations of stars. About once per century a supernova explosion occurs in each galaxy, leaving behind a compact object that may be a neutron star—as dense as an atomic nucleus and only a few miles in diameter—or a stellar black hole, in which space-time is so curved by gravity that no light can escape.
The Solar System
Our Solar System consists of the Sun, nine planets, their moons and rings, the asteroids, and comets. Comets spend most of their time in the "Oort cloud," located 20,000–100,000 astronomical units from the Sun (an astronomical unit is the distance from Earth to the Sun, 93 million miles). The Solar System formed 4.5 billion years ago near the edge of our Galaxy. Heir to millions of supernova explosions, it contains a full complement of heavy elements. Some of these, like silicon, iron, magnesium, and oxygen, form the bulk of the composition of Earth; the elements hydrogen, carbon, and nitrogen are also present, providing molecules essential for life.
A Grand Synthesis
The Universe has evolved from the Big Bang to the point we see it today, with hundreds of billions of galaxies and perhaps countless planets.
There is no evidence that the processes which govern the evolution from elementary particles to galaxies to stars to heavy elements to planets to life to intelligence differ significantly elsewhere in the Universe. By integrating the insights obtained from virtually every branch of science, from particle physics to anthropology, humanity may hope one day to approach a comprehensive understanding of our position in the cosmos.
A Nobel Prize-winning theory predicts the change in heat capacity of liquid helium at uniform pressure as it makes a transition to the superfluid state. Although equipment has been developed to hold samples at a steady temperature within one part in 10 billion, the variation of pressure through the sample due to gravity is so large that experiments have yielded far less accurate results than desired. Reducing gravity by a factor of 100,000, as is possible in the Space Station, can provide a high-quality test of the theory.
Research is also proceeding on "fractal aggregates," structures that have the remarkable property that their mean density literally approaches zero the larger they become. Such structures are neither solid nor liquid, but represent an entirely new state of matter. So far, experiments on such structures are limited by the fact that the aggregates tend to collapse under their own weight as soon as they reach .0004 inches in size. In a microgravity environment, it should be possible to develop structures 100,000 times larger, or three feet across. Such sizes are essential if measurements of the physical properties of fractal structures are to be made. Research on many other processes, including fractal gels, dendritic crystallization (the process that produces snowflakes), and combustion of clouds of particles, will profit substantially from the microgravity environment of the Space Station. It is not unlikely that novel applications will develop from basic research in these areas, just as the transistor grew out of basic research on the behavior of electrons in solids.
An especially promising avenue of research in space is the pursuit of new tests of Einstein's theory of general relativity. It has long been recognized that because deviations from the Newtonian theory of gravitation within the Solar System are minute, extremely sensitive equipment is required to detect them. Many experiments require the ultraquiet conditions of space. Because Einstein's theory is fundamental to our understanding of the cosmos—in particular, to the physics of black holes and the expanding Universe—it is important that it be experimentally verified with the highest possible accuracy. Relativity predicts a small time delay of radio signals as they propagate in the Solar System; the accuracy in measuring this effect can be continuously improved by tracking future planetary probes. A Mercury orbiter would further improve the accuracy of measurement of changes in Newton's gravitational constant, already shown to be less than one part in 100 billion per year. An experiment in Earth orbit called Gravity Probe B will measure the precession of a gyroscope in Earth orbit with extreme precision, permitting verification of another relativistic effect. Einstein's theory also predicts that a new type of radiation should be produced by masses in motion.
There is great interest in detecting this so-called gravitational radiation, not only because it would test Einstein's theory in a fundamental way, but because it could open a new window through which astronomers could study phenomena in the Universe, particularly black holes. Gravitational radiation detectors are in operation, or are being built, on the ground, but they are sensitive only to wave periods of less than 0.1 second because of Earth's seismic noise. The radiation predicted from astronomical objects would have much longer periods if it is due to orbiting double stars and black holes with masses greater than 10,000 Suns, such as are believed to exist in the nuclei of galaxies. An attempt will be made to detect such radiation by ranging to the Galileo spacecraft en route to Jupiter. A more powerful approach for the future is to use a large-baseline detector based upon optical laser ranging between three spacecraft in orbit about the Sun; detecting minute changes in their separations would indicate the passage of a gravitational wave. Finally, instruments deployed for more general purposes can make measurements to test general relativity. For example, a 100-foot optical interferometer in Earth orbit designed for extremely accurate determination of stellar positions could measure the relativistic bending of light by the Sun with unprecedented precision. A spacecraft that plunges close to the Sun to study plasma in its vicinity could measure the gravitational red-shift of the Sun to high precision. In summary, a variety of space-based experiments on the shuttle and Space Station, in free flyers, and in orbit around the Sun and other planets have the capacity to test general relativity with a high degree of accuracy. Gravitational radiation from certain astronomical sources can be detected only in space. When that happens, astronomers will have an exciting new tool with which to study the Universe.
Solar and Space Physics
The objective of this field of study is to understand the physics of the Sun and the heliosphere, the vast region of space influenced by the Sun. Other regions of interest include the magnetospheres, ionospheres, and upper atmospheres of Earth, the planets, and other bodies of the Solar System. With this in mind, studies of the basic processes which generate solar energy of all kinds and transmit it to Earth should be emphasized, both because the physical mechanisms involved are of interest, and because there are potential benefits to life on Earth. There are a number of sub-goals within this discipline: to understand the processes that link the interior of the Sun to its corona; the transport of energy, momentum, plasma, and magnetic fields through interplanetary space by means of the solar wind; the acceleration of energetic particles on the Sun and in the heliosphere; Earth's upper atmosphere as a single, dynamic, radiating, and chemically active fluid; the effects of the solar cycle, solar activity, and solar-wind disturbances upon Earth; the interactions of the solar wind with Solar System bodies other than Earth; and magnetospheres in general. Without assuming a specific direct connection, the possible influence of solar-terrestrial interactions upon the weather and climate of Earth should be clarified. A number of near-term activities are essential to the advancement of solar and space physics.
Advanced solar observatories will study detailed energy production mechanisms in the solar atmosphere, while the European Space Agency's Ulysses spacecraft will make measurements of activity at the poles of the Sun. Spacecraft with sufficient velocity to leave the inner Solar System will make possible measurements in the outer heliosphere, including its transition to the interstellar medium of the Galaxy. The International Solar-Terrestrial Physics program, which will be carried out jointly by the United States, Japan, and Europe, will trace the flow of matter and energy from the solar wind through Earth's magnetosphere and into the upper atmosphere; investigate the entry, storage, and energization of plasma in Earth's neighborhood; and assess how time variations in the deposition of energy in the upper atmosphere affect the terrestrial environment. Interactions of solar plasma with other planets and with satellites and comets will be investigated by a number of planetary probes already in space or on the drawing boards.

Up to now, information about Earth's magnetosphere has been based upon measurements made continuously as various spacecraft move through the plasma and magnetic field in that region. An instantaneous global image of the entire magnetosphere can be made using ultraviolet emissions from ionized helium in the magnetosphere. It may also be possible to form an image of energetic particles by observing energetic neutral atoms as they propagate from various regions, having exchanged charge with other atoms there. Innovative experiments will be conducted from the shuttle to investigate the effects of waves, plasma beams, and neutral gases injected into Earth's magnetosphere.

To date, our knowledge of the outer atmosphere of the Sun has been based upon remote sensing from the distance of Earth. In a new concept, a spacecraft would be sent on a trajectory coming to within 4 solar radii of the surface of the Sun, only 1/50th of Earth's distance. The spacecraft would carry instruments to measure the density, velocity, and composition of the solar-wind plasma, together with its embedded magnetic field, in an attempt to discover where the solar wind is accelerated to the high velocities observed near Earth. Possible trajectories include a Jupiter swingby or a hypersonic flyby in the upper atmosphere of Venus. Such a mission would yield precise data on the gravitational field of the Sun with which to study its interior, and would test general relativity with higher precision. If a thruster were fired at the closest approach to the Sun, the energy change would be so great that the spacecraft would leave the Solar System with high velocity, reaching 100 times the distance of Earth in only nine years. This would provide measurements where the solar wind makes a transition to the local interstellar medium.

To acquire high-resolution information about the poles of the Sun over a long period, a solar polar orbiter should be flown. A network of four spacecraft at the distance of Earth, but positioned every 90 degrees around the Sun, would provide stereoscopic views of solar features which are otherwise difficult to locate in space, and would also monitor solar flare events over the whole Sun. Such a network would also give early warning to astronauts outside the protective shield of Earth's magnetic field. Finally, plasmas in space should be studied for their own sake.
Plasma is an inherently complex state of matter, involving many different modes of interaction among charged particles and their embedded magnetic fields. Our understanding of the plasma state is based upon theoretical research, numerical simulations, laboratory experiments, and observations of space plasmas. The synergy among these approaches should be developed and exploited. If neutral atoms and dust particles are present, as in planetary ring systems and in comets, novel interactions occur; they can be studied by injecting neutral gases and dust particles into space plasmas.

With the exception of the samples returned from the Moon by the Apollo astronauts and the Soviet robotic Luna spacecraft, and meteoritic materials that are believed to have fallen naturally on Earth from the asteroids, the Moon, and Mars, we have no samples of materials from bodies elsewhere in the Solar System for analysis in Earth-based laboratories. Decades of study of meteoritic materials and lunar samples have demonstrated that vast amounts of information can be learned about the origin, evolution, and nature of the bodies from which samples are derived, using laboratory techniques which have progressed to the point where precise conclusions can be drawn from an analysis of even a microscopic sample. The laboratory apparatus involved is heavy, complex, and requires the close involvement of people. Thus, given the substantial round-trip travel time for radio signals between Earth and these objects, it appears impractical to operate this equipment effectively under radio control on the bodies of greatest interest. The best method is to acquire and return samples, as was done by Apollo and Luna. Robot vehicles will be the most cost-effective approach to sample acquisition and return in the foreseeable future. Unlike meteoritic materials, the samples will be obtained from known sites, whose location in an area which has been studied by remote sensing makes it possible to generalize the results to the body as a whole. Because of the variations among different provinces, samples are required from several sites in order to develop an adequate understanding of a specific object.

Considerable thought has been given to which targets are the most promising. They must be reachable, and samples must be able to be returned with technology that can be developed in the near future. Their surfaces must be hospitable enough so that collecting devices can survive on them, and they must be well enough understood that a complex sample-return mission can be planned and successfully executed. For these reasons, as well as others noted in the text, we recommend that a sample return from Mars be accomplished as soon as possible.

Though at present no individual comet meets the criteria discussed above, comets in general are promising targets for sample return. A start on the study of comets has been made by the 1985 encounter with Comet Giacobini-Zinner, and the 1986 encounters with Comet Halley. The proposed mission to a comet and an asteroid in the Solar System Exploration Committee's core program will yield much more information. Comets are probably composed of ices of methane, ammonia, and water, together with silicate dust and organic residues. The evidence suggests that these materials accumulated very early in the history of the Solar System. Because comets are very small (a few miles in diameter), any heat generated by radioactivity readily escaped, so they never melted, unlike the larger bodies in the Solar System.
It is quite possible, therefore, that the primitive materials which accumulated to form the Sun, planets, moons, and asteroids are preserved in essentially their original form within comets. It is even possible that comets contain some dust particles identical to those astronomers have inferred to be present in interstellar clouds. If so, a comet could provide a sample of the interstellar matter that pervades our Galaxy. The Space Science Board has given high priority to determining the composition and physical state of a cometary nucleus. No mission short of a sample return will provide the range and detail of analyses needed to definitively characterize the composition and structure of a comet nucleus.

Beyond the asteroid belt lie four giant ringed planets (Jupiter, Saturn, Uranus, and Neptune), the curiously small world Pluto, more than 40 moons (two of which—Titan and Ganymede—are larger than the planet Mercury), and two planetary magnetospheres larger than the Sun itself. The center of gravity of our planetary system is here, since these worlds (chiefly Jupiter and Saturn) account for more than 99 percent of the mass in the Solar System outside of the Sun itself. The outer planets, especially Jupiter, can provide unique insights into the formation of the Solar System and the Universe. Because of their large masses, powerful gravitational fields, and low temperatures, these giant planets have retained the hydrogen and helium they collected from the primordial solar nebula.

The giant worlds of the outer Solar System differ greatly from the smaller terrestrial planets, so it is not surprising that different strategies have been developed to study them. The long-term exploration goal for terrestrial planets and small bodies is the return of samples to laboratories on Earth, but the basic technique for studying the giant planets is the direct analysis of their atmospheres and oceans by means of probes. Atmospheric measurements, which will be undertaken for the first time by Galileo at Jupiter, provide the only compositional information that can be obtained from a body whose solid surface (if any) lies inaccessible under tens of thousands of miles of dense atmosphere. Atmospheric probe measurements, like measurements on returned samples, will provide critical information about cosmology and planetary evolution, and will permit fundamental distinctions to be made among the outer planets themselves.

Exciting possible missions include: (1) deep atmospheric probes (to 500 bars) to reach the lower levels of the atmospheres of Jupiter and Saturn and measure the composition of these planets; (2) hard and soft landers for various moons, which could emplace a variety of seismic, heat-flow, and other instruments; (3) close-up equipment in low orbits; (4) detailed studies of Titan, carried out by balloons or surface landers; (5) on-site, long-term observations of Saturn's rings by a so-called "ring rover" spacecraft able to move within the ring system; and (6) a high-pressure oceanographic probe to image and study the newly-discovered Uranian Ocean.

The size of the current generation of "great observatories" reflects the limitations on weight, size, and power of facilities that can be launched into low Earth orbit by the space shuttle. In the future, the permanently occupied Space Station will furnish a vitally important new capability for astronomical research—that of assembling and supporting facilities in space that are too large to be accommodated in a single shuttle launch.
Such large facilities will increase sensitivity by increasing the area over which radiation is collected, and will increase angular resolution using the principle of interferometry, in which the sharpness of the image is proportional to the largest physical dimension of the observing system. Though one or the other goal will usually drive the design of any particular instrument, it is possible to make improvements in both areas simultaneously. When we can construct very large observatories in space, these improvements will be achieved over the whole electromagnetic spectrum. Although the Moon will also offer advantages for astronomical facilities once a lunar base becomes available, we focus our remaining discussion upon facilities in low Earth orbit.

A large deployable reflector of 65 to 100 feet aperture for observations in the far infrared spectrum, that is, diffraction limited down to 30 microns wavelength (where it would produce images a fraction of an arc second across), will permit angular resolutions approaching or exceeding those of the largest ground-based optical telescopes. This project would yield high-resolution infrared images of planets, stars, and galaxies rivaling those routinely available in other wavelength ranges. Assembly in Earth orbit is the key to this observatory.

A large space telescope array composed of several 25-foot-diameter telescopes would operate in the ultraviolet, visible, and infrared. The combination of larger telescope diameters with a large number of telescopes would make this instrument 100 times more sensitive than the Hubble Space Telescope. Because the image would be three times sharper, the limiting faintness for long exposures would increase more than 100 times. Such an instrument would enable detailed studies of the most distant galaxies, as well as studies of planets, with exquisite angular and spectral resolution.

A set of radio telescopes 100 feet or more in diameter could be constructed in Earth orbit by astronauts to provide a very long baseline array for observing radio sources, with the radio signals transmitted to a ground station. Such radio telescopes in space could greatly extend the power of the ground-based Very Long Baseline Array now under construction. The angular resolution of the latter, 0.3 milliarcseconds (the size of a person on the Moon as seen from Earth), could be improved 300-fold by putting telescopes in orbits ranging out as far as 600,000 miles. The resulting resolution of 1/1000th of a milliarcsecond—or one microarcsecond—would enable us to image activity in the center of our Galaxy—believed to be due to a black hole—very nearly down to the black hole itself. It would also provide images of larger, more massive black holes suspected to be at the centers of several nearby galaxies.

A long-baseline optical space interferometer composed of two or more large telescopes separated by 300 miles would also provide resolution of 1 microarcsecond, although not complete information about the image. This resolution would permit us to detect a planet no larger than Earth in orbit around a nearby star (by means of its gravitational pull on the star) and to measure the gravitational deflection of light by the Sun as a high-precision test of general relativity.

A high-sensitivity x-ray facility, having about 100 times the collecting area of the planned Advanced X-ray Astrophysics Facility, could be assembled in orbit.
A space station-serviced x-ray observatory would make possible the detection of very faint objects, such as stellar explosions in distant galaxies, as well as high spectral resolution of brighter objects. This would make possible a study of x-ray signatures of the composition, temperature, and motion of emitted gases. For example, the theory that most heavy elements are produced in supernovae can be tested by studying the gaseous ejecta in supernova remnants.

A hard x-ray imaging facility with a large (1,000 square feet) aperture is needed to study x-rays with energies in the range from 10 keV to 2 MeV. Sources of such radiation are known, but are too faint for smaller-aperture instruments to analyze in detail. It is important to find out whether the known faint background radiation at these energies is coming from very distant objects, such as exploding galaxies, or from gas clouds heated by the stellar explosions that accompany galaxy formation.

The future development of gamma-ray astronomy will depend upon the results of the planned gamma-ray observatory, but it is anticipated that larger collecting areas and higher spectral and angular resolution will be needed to sort out the sources and carry out detailed spectroscopy. Cosmic-ray studies will require a superconducting magnet in space with 1,000 square feet of detectors to determine the trajectories of individual particles and hence their energy and charge.

The great observatories of the next century will push technology to its limits, including the capability to assemble large structures in orbit and on the Moon, the design of extremely rigid structures that can be tracked and moved with great precision, and the development of facilities on the Space Station for repairing and maintaining astronomical facilities in orbit. Because of the huge information rates anticipated from such observatories, great advances in computing will be required, especially massive data storage (up to 100 billion bits) accessible at high rates. The preliminary analysis of the data will be performed by supercomputers in orbit, transmitting only the results to the ground. The program will require a long-term commitment to education and support of young scientists, who will be the lifeblood of the program, as well as the implementation of high-priority precursor missions, including first-generation great observatories and moderate-scale projects.

The Evolution of Earth and Its Life Forms

Earth is the only one of our Solar System's nine planets that we know harbors life. Why is Earth different from the other planets? Life as we know it requires tepid liquid water, and Earth alone among the bodies of the Solar System has had that throughout most of its history. Biologists have long pursued the hypothesis that living species emerge very gradually, as subtle changes in the environment give decisive advantages to organisms undergoing genetic mutations. The recent discovery that the extinction of the dinosaurs (and many other species as well) some 65 million years ago appears to have coincided with the collision of Earth with a large object from outer space—such as a comet or asteroid—has led to new interest in "punctuated equilibrium." According to this concept, a drastic change in environment, in this case the pall cast upon Earth by the giant cloud of dust that resulted from the collision, can destroy some branches of the tree of life in a short span of time, and thereby open up new opportunities for organisms that were only marginally competitive before.
The story of the evolution of life on Earth—once the sole province of biology—thus depends in part upon astronomical studies of comets and asteroids which may collide with our planet, the physics of high-velocity impact, and the complex processes that govern the movement of dust in Earth's atmosphere. Atmospheric scientists are finding that within such short times as decades or centuries the character of life on Earth may depend upon materials originating in the interior of the planet (including dust and gases from volcanoes), chemical changes in the oceans and the atmosphere (including the increase in carbon dioxide due to agricultural and industrial activity), and specific radiations reaching us from the Sun (such as the ultraviolet rays which affect the chemical composition of Earth's atmosphere). Through mechanisms still not understood, changes in Earth's climate may in turn depend upon the evolution of life. It has become apparent that life on Earth exists in a complex and delicate balance not only with its own diverse elements, but with Earth itself, the Sun, and probably even comets and asteroids. Interactions among climatology, geophysics, geochemistry, ecology, astronomy, and solar physics are all important as we contemplate the future of our species; space techniques are playing an increasing role in these sciences.

Space techniques are also valuable for studying Earth's geology. The concept of continental drift, according to which the continents change their relative positions as the dense rocks on which they rest slowly creep, is proving to be a key theory in unraveling the history of Earth as recorded in the layers of sediments laid down over millions of years.

The Possibility of Other Life in the Universe

Are we alone in the Universe? Virtually all stars are composed of the same chemical elements, and our current understanding of the process by which the Solar System formed suggests that all Sun-like stars are likely locales for planets. The search for life begins in our own Solar System, but based on the information we have gleaned from robotic excursions to Mercury, Venus, the Moon, Mars, Jupiter, Saturn, and Uranus, it now appears that Mars, and perhaps Titan, a moon of Saturn, are the most likely candidates for the existence of rudimentary life forms now or in the past. The existence of water on Mars in small quantities of surface ice and in atmospheric water vapor, and perhaps in larger quantities frozen beneath the surface, leaves open the possibility that conditions on Mars may once have been favorable enough to support life in some areas. Samples returned from regions where floods have occurred may provide new clues to the question of life on Mars. Titan has a thick atmosphere of nitrogen, along with methane and traces of hydrogen cyanide—one of the building blocks of biological molecules. Unfortunately, the oxygen atoms needed for other biological molecules are missing, apparently locked forever in the ice on Titan's surface.

How do we search for planets beyond our Solar System? The 1983 Infrared Astronomy Satellite discovered that dozens of stars have clouds of particles surrounding them emitting infrared radiation; astrophysicists believe that such clouds represent an early stage in the formation of planets. Another technique is to track the position of a star over a number of years. Although planets are much less massive than stars, they nevertheless exert a significant gravitational force upon them, causing them to wobble slightly.
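The size of that wobble is easy to estimate. The short Python sketch below uses illustrative values that are assumptions, not figures from this report: a Jupiter-mass planet at Jupiter's distance from a Sun-like star, viewed from 10 parsecs.

# Rough scale of the astrometric wobble described above; all input
# numbers are assumed illustration values, not from this report.
m_ratio = 1.0 / 1047          # planet-to-star mass ratio (Jupiter/Sun)
a_planet_au = 5.2             # planet's orbital radius, AU
distance_pc = 10.0            # distance to the star, parsecs

# The star circles the common center of mass at a_star = a_planet * m_ratio.
a_star_au = a_planet_au * m_ratio

# An orbit of a AU viewed from d parsecs subtends about a/d arcseconds.
wobble_arcsec = a_star_au / distance_pc
print(f"wobble ~ {wobble_arcsec * 1e6:.0f} microarcseconds")   # ~500

A wobble of several hundred microarcseconds is comfortably above the microarcsecond-level resolutions discussed for space interferometers, which is why the astrometric technique is promising.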
Through a principle called interferometry, which combines the outputs of two telescopes at some distance apart to yield very sharp images, it should be possible to detect planets—if they exist—by the perturbations they cause as they orbit nearby stars similar to our Sun. With sufficiently large arrays of telescopes in space we might obtain images of planets beyond the Solar System. By searching for evidence of water and atmospheric gases we might even detect the existence of life on those planets.

If life originated by the evolution of large molecules in the oceans of newly-formed planets, then other planets scattered throughout our Galaxy could be inhabited by living species, some of which may possess intelligence. If intelligent life does exist beyond our Solar System, we might detect its messages. The Search for Extraterrestrial Intelligence, or SETI, is a rapidly advancing field. For several decades it has been technically possible to detect radio signals (if any) directed at Earth by alien civilizations on planets orbiting nearby stars. It is now possible to detect such signals from anywhere in our Galaxy, opening up the study of over 100 billion candidate stars. Such a detection, if it ever occurs, would have profound implications not only for the physical and biological sciences, but also for anthropology, political science, philosophy, and religion. Are we alone? We still do not know.

To lift payloads in Earth's gravitational field and place them in orbit, we must expend energy. We generate it first as the energy of motion—hence the great speeds our rockets must attain. As rockets coast upward after firing, their energy of motion converts, according to Newton's laws, to the energy of height. In graphic terms, to lift a payload entirely free of Earth's gravitational clutch, we must spend as much energy as if we were to haul that payload against the full force of gravity that we feel on Earth to a height of 4,000 miles. To reach the nearer goal of low Earth orbit, where rockets and their payloads achieve a balancing act, skimming above Earth's atmosphere, we must spend about half as much energy—still equivalent to climbing a mountain 2,000 miles high. Once in "free space," the region far from planets and moons, we can travel many thousands of miles at small expenditure of energy.

A biosphere is an enclosed ecological system. It is a complex, evolving system within which flora and fauna support and maintain themselves and renew their species, consuming energy in the process. A biosphere is not necessarily stable; it may require intelligent tending to maintain species at the desired levels. Earth supports a biosphere; up to now we know of no other examples. To explore and settle the inner Solar System, we must develop biospheres of smaller size, and learn how to build and maintain them. In order to grow food crops and the entire range of plants that enrich and beautify our lives, we need certain chemical elements, the energy of sunlight, gravity, and protection from radiation. All can be provided in biospheres built on planetary surfaces, although normal gravity is available only on Earth. These essentials are also available in biospheres to be built in the space environment, where Earth-normal gravity can be provided by rotation. Both in space and on planetary surfaces, certain imported chemicals will be required as initial stocks for biospheres. Within the past two decades biospheres analogous to habitats in space or on planetary surfaces have been built both in the U.S.S.R.
and in the United States. An example of biosphere technologies can be seen at the "Land" pavilion at the EPCOT Center (Experimental Prototype Community of Tomorrow) near Orlando, Florida. Specialists in biospheres are now building, near Tucson, Arizona, a fully closed ecological system which will be a simulation of a living community in space. It is called Biosphere II. Its volume, three million cubic feet, is about the volume over a land area of four acres with a roof 20 feet above it. Biosphere II is much more than greenhouse agriculture. Within it will be small versions of a farm, an ocean, a savannah, a tropical jungle, and other examples of Earth's biosystems. Eight people will attempt to live in Biosphere II for two years. The builders of Biosphere II have several goals: to enhance greatly our understanding of Earth's biosphere; to develop pilot versions of biospheres, which could serve as refuges for endangered species; and to prepare for the building of biospheres, in space and on planetary surfaces, which would become the settlements of the space frontier.

In the early 1960s, the Government, through NASA, developed and launched the first weather satellites. When the operation of weather satellites matured, they were turned over to the Department of Commerce's Environmental Science Services Administration, which became part of the newly-established National Oceanic and Atmospheric Administration (NOAA) in 1970. Today, NOAA continues to operate and manage the U.S. civilian weather satellite system, comprised of two polar-orbiting and two geostationary satellites.

The Landsat remote sensing system had similar origins. Developed initially by NASA, the first Landsat satellite was launched in 1972; the most recent spacecraft in the series, Landsat 5, was orbited in 1984. Although remote sensing data are provided by the United States at relatively low cost, many user nations have installed expensive equipment to directly receive Landsat data. Their investments in Landsat provide a strong indication of the data's value. Successive administrations and Congresses wrestled with the question of how best to deal with a successful experimental system that had, in fact, become operational. Following exhaustive governmental review, President Carter decided in 1979 that Landsat would be transferred to NOAA with the eventual goal of private sector operation after 7 to 10 years. Following several years of transition between NASA and NOAA, the latter formally assumed responsibility for Landsat 4 in 1983. By that time, the Reagan Administration had decided to accelerate the privatization of Landsat, but despite the rapid growth in the demand for these services, no viable commercial entity appeared ready to take it over without some sort of Government subsidy. In 1984, Congress passed the Land Remote Sensing Commercialization Act to facilitate the process. Seven qualified bidders responded to the Government's proposal to establish a commercial land remote sensing satellite system, and two were chosen by the Department of Commerce for final competition. One later withdrew after the Reagan Administration indicated that it would provide a considerably lower subsidy than anticipated. The remaining entrant, EOSAT, negotiated a contract that included a Government subsidy and required it to build at least two more satellites in the series. In the fall of 1985, EOSAT, a joint venture between RCA Astro-Electronics and Hughes Santa Barbara Aerospace, assumed responsibility for Landsat.
The Government's capital assistance to EOSAT is in limbo at this time because of the current budget situation, even though EOSAT was contractually targeted for such financial support. It is, therefore, too soon to say whether the Landsat privatization process will provide a successful model for the transfer of a Government-developed space enterprise to the private sector.

Factories that could replicate themselves would be attractive for application in space because the limited carrying capacity of our rocket vehicles and the high costs of space transport make it difficult otherwise to establish factories with large capacities. The concept of self-replicating factories was developed by the mathematician John von Neumann. Three components are needed for industrial establishment in space: a transporting machine, a plant to process raw material, and a "job shop" capable of making the heavy, simple parts of more transporting machines, process plants, and job shops. These three components would all be tele-operated from Earth, and would normally be maintained by robots. Intricate parts would be supplied from Earth, but would be only a small percentage of the total.

Here is an example of how such components, once established, could grow from an initial "seed" exponentially, the same way that savings grow at compound interest, to become a large industrial establishment: Suppose each of the three seed components had a mass of 10 tons, so that it could be transported to the Moon in one piece. The initial seed on the Moon would then be 30 tons. A processing plant and job shop would also be located in space—20 tons more. After the first replication, the total industrial capacity in space and on the Moon would be doubled, and after six more doublings it would be 128 times the capacity of the initial seed. Those seven doublings would give us the industrial capacity to transport, process, and fabricate finished products from over 100,000 tons of lunar material each year from then onward. That would be more than 2,000 times the weight of the initial seed—a high payback from our initial investment.

In an electromagnetic accelerator, electric or magnetic fields are used to accelerate material to high speeds. The power source can be solar or nuclear. There are two types of accelerators for use in space: the "ion engine" and the "mass-driver." The ion engine uses electric fields to accelerate ions (charged atoms). Ion engines are compact, relatively light in weight, and well-suited to missions requiring low thrust sustained for a very long time. Mass-drivers are complementary to ion engines, developing much higher thrusts but not suited to extreme velocities. A mass-driver accelerates by magnetic rather than electric fields. It is a magnetic linear accelerator, designed for long service life, and able to launch payloads of any material at high efficiency. Mass-drivers should not be confused with "railguns," which are electromagnetic catapults now being designed for military applications. A mass-driver consists of three parts: the accelerator, the payload carrier, and the payload. For long lifetime, the system is designed to operate without physical contact between the payload carrier and the accelerator. The final portion of the machine operates as a decelerator, to slow down each payload carrier for its return and reuse. A key difference between the mass-driver and the ion engine is that the mass-driver can accelerate any solid or liquid material without regard to its atomic properties.
Used as a propulsion system, the mass-driver could use as propellant raw lunar soil, powdered material from surplus shuttle tankage in orbit, or any material found on asteroids. Its characteristics make it suitable for load-carrying missions within the inner Solar System. Another potential application for a mass-driver is to launch payloads from a fixed site. The application studied in the most depth at this time is the launch of raw material from the Moon to a collection point in space, for example, one of the lunar Lagrange points. A mass-driver with the acceleration of present laboratory models, but mounted on the lunar surface, would be able to accelerate payloads to lunar escape speed in a distance of only 170 yards. Its efficiency would be about 70 percent, about the same as that of a medium-size electric motor. Loads accelerated by a mass-driver could range from a pound to several tons, depending on the application and available power supply.

Technological advance across a broad spectrum is the key to fielding an aerospace plane. A highly innovative propulsion design can make possible horizontal takeoff and single-stage-to-orbit flight with high specific impulse (Isp). The aerospace plane would use a unique supersonic combustion ramjet (SCRAMJET) engine which would breathe air up to the outer reaches of the atmosphere. This approach virtually eliminates the need to carry liquid oxygen, thus reducing propellant and vehicle weight. A small amount of liquid oxygen would be carried to provide rocket thrust for orbital maneuvering and for cabin atmosphere.

A ramjet, as its name implies, uses the ram air pressure resulting from the forward motion of the vehicle to provide compression. The normal ramjet inlet slows down incoming air while compressing it, then burns the fuel and air subsonically and exhausts the combustion products through a nozzle to produce thrust. To fly faster than Mach 6, the internal geometry of the engine must be varied in order to allow the air to remain at supersonic speeds through the combustor. This supersonic combustion ramjet could potentially attain speed capability of Mach 12 or higher.

Such a propulsion system must cover three different flight regimes: takeoff, hypersonic, and rocket. For takeoff and acceleration to Mach 4, it would utilize air-turbo-ramjets or cryojets. From Mach 4 to Mach 6, the engine would operate as a conventional subsonic combustion ramjet. From Mach 6 to maximum airbreathing speeds, the engine would employ supersonic combustion. At speeds of about Mach 12 and above, the SCRAMJET engine might have additional propellant added above the hydrogen flow rates needed for utilization of all air captured by the inlet. This additional flow would help cool the engine and provide additional thrust. Final orbital insertion could be achieved with an auxiliary rocket engine.

Such a system of propulsion engines must be carefully integrated with the airframe. Proper integration of the airbreathing inlets into the airframe is a critical design problem, since the shape of the aircraft itself determines in large part the performance of the engine. During SCRAMJET operation, the wing and forward underbody of the vehicle would generate oblique shock waves which produce inlet air flow compression. The vehicle afterbody shape behind the engine would form a nozzle producing half the thrust near orbital speeds.
Second-generation supercomputers can now provide the computational capability needed to efficiently calculate the flow fields at these extremely high Mach numbers. These advanced design tools provide the critical bridge between wind tunnels and piloted flight in regimes of speed and altitude that are unattainable in ground-based facilities. In addition, supercomputers permit the usual aircraft design and development time to be significantly shortened, thus permitting earlier introduction of the aerospace plane into service.

The potential performance of such an airframe-inlet-engine-nozzle combination is best described by a parameter known as the net "Isp," which is the measure of the pounds of thrust, minus the drag from the engine, per pound of fuel flowing through each second. The unit of measure is seconds; the larger the value, the more efficient the propulsion. For the aerospace plane over the speed range of Mach 0 to Mach 25, the engines should achieve an average Isp in excess of 1,200 seconds burning liquid hydrogen. This compares with an Isp of about 470 seconds for the best current hydrogen-oxygen rocket engines, such as the space shuttle main engine. It is the high Isp of an air-breathing engine capable of operating over the range from takeoff to orbit that could make possible a single-stage, horizontal takeoff and landing aerospace plane. For "airliner" or "Orient Express" cruise at Mach 4 to Mach 12, the average Isp is even larger, making the SCRAMJET attractive for future city-to-city transportation.

Another key technology is high strength-to-weight ratio materials capable of operating at very high temperatures while retaining the properties of reusability and long life. These can make possible low maintenance, rapid turnaround, reduced logistics, and low operational costs. Promising approaches to high-temperature materials include rapid-solidification-rate metals, carbon-carbon composites, and advanced metal matrix composites. In extremely hot areas, such as the nose, active cooling with liquid hydrogen or the use of liquid metals to rapidly remove heat will also be employed. The use of these materials and cooling technologies with innovative structural concepts results in important vehicle weight reductions, a key to single-stage-to-orbit flight.

The performance of rocket vehicles is primarily determined by the effective specific impulse of the propulsion system and the dry weight of the entire vehicle. Best performance can be attained by burning a hydrocarbon fuel at low altitudes, then switching to hydrogen for the rest of the flight to orbit. This assures high effective specific impulse, thus minimizing the volume and weight of the tankage required for propellants. Rocket engines have been studied which combine into one efficient design the ability to operate in a dual-fuel, combined-cycle mode. Lightweight versions of such engines are clearly possible, but will require technology demonstration and development. The greatest leverage for high performance can be obtained by reducing the inert weight of the tanks, airframe, and other components, since they are lifted all the way into orbit and thus displace payload on a pound-for-pound basis. This holds for the entire vehicle in a single-stage design, and for the final stage (and to a lesser extent for the initial stage) in a two-stage vehicle.
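The leverage that specific impulse exerts on vehicle design can be illustrated with the standard rocket equation. The Python sketch below is illustrative only: it treats the airbreather's average Isp as if it were a rocket's, and it assumes a round 9 km/s effective velocity requirement to orbit (including drag and gravity losses), a figure that is an assumption rather than one taken from this report.

import math

G0 = 9.80665        # standard gravity, m/s^2
DELTA_V = 9000.0    # assumed effective requirement to reach orbit, m/s

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(liftoff mass / final mass)
for isp in (470.0, 1200.0):   # rocket engine vs. airbreathing average, seconds
    mass_ratio = math.exp(DELTA_V / (isp * G0))
    propellant = 1.0 - 1.0 / mass_ratio
    print(f"Isp {isp:6.0f} s: liftoff/final mass = {mass_ratio:4.2f}, "
          f"propellant = {propellant:.0%} of liftoff weight")

At 470 seconds, roughly 86 percent of the liftoff weight must be propellant; at an average of 1,200 seconds, only about half, which is what makes a single-stage vehicle with useful payload conceivable.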
The use of new materials with very high strength-to-weight ratios at elevated temperatures could greatly reduce the weight of the tankage, primary structure, and thermal protection system. Thus, aluminum tankage and structure could be replaced with composite and metal matrix materials. Separate heat-insulating thermal protection layers could be replaced with heat rejection via radiation by allowing the skin to get very hot, and perhaps by providing active cooling of some substructure. Wing and control surface weight can be minimized by using a control-configured design and small control surfaces. Advances in these technologies, which should be feasible by the early 1990s, have the potential of reducing the vehicle dry weight dramatically, compared to designs for the same payload weight using shuttle technology. The performance of rocket vehicles using such technology would far exceed today's values. Depending on the dry weight reductions actually achieved, the best vehicle could have either a single-stage fully-reusable design, or a fully-reusable two-stage design.

Attainment of low operating costs will depend most heavily on technology for handling and processing the launch vehicle and cargo in an automated, simple, and rapid manner. This includes self-checkout and launch from the vehicle's cockpit, high reliability and fault-tolerance in the avionics, adaptive controls, lightweight all-electric actuators and mechanisms, standardized mechanisms for modularized servicing of the vehicle, and automated flight planning.

What goes up must come down—even in Earth orbit! The difference in space is that it can take millions of years for objects to be pulled back to Earth by friction with Earth's atmosphere, depending on how close they are to Earth. An object 100 miles above Earth will return in a matter of days, while objects in geostationary orbit will take millions of years to reenter. Since the dawn of the Space Age, thousands of objects with a collective mass of millions of pounds have been deposited in space. While some satellites and pieces of debris are reentering, others are being launched, so the space debris population remains constant at approximately 5,000 pieces large enough to be tracked from Earth (thousands more are too small to be detected). This uncontrolled space population presents a growing hazard of reentering objects and in-space collisions.

As objects reenter, they usually burn up through the heat of friction with Earth's atmosphere, but large pieces may reach the ground. This can constitute a danger to people and property, although there is no proof that anyone has ever been struck by a piece of space debris. There are numerous cases of such debris reaching the ground, however, including the reentry of the U.S. Skylab over Australia in 1979, and the unexpected reentries of two Soviet nuclear reactor powered satellites in 1978 and 1983.

The hazard of in-space collisions is created both by multiple collisions between pieces of debris and by intentional or unintentional explosions or fragmentations of satellites. When space objects collide with each other or explode, thousands of smaller particles are created, increasing the probability of further collisions among themselves and with spacecraft. A spacecraft is now more likely to be damaged by space debris than by small micrometeorites.
For large, long-life orbital facilities, such as space stations and spaceports, the collision probabilities will become serious by the year 2000, requiring bumper shields or other countermeasures, and more frequent maintenance. All spacefaring nations should adopt preventive measures to minimize the introduction of new uncontrolled and long-lived debris into orbit. Such countermeasures include making all pieces discarded from spacecraft captive, deorbiting spent spacecraft or stages, adjusting the orbits of transfer stages so that rapid reentry is assured due to natural disturbances, and designating long-life disposal orbits for high altitude spacecraft. The increasing hazard of space debris must be halted and reversed.

In a purely physical sense, the Space Station will overshadow all preceding space facilities. Although often referred to as the "NASA" Space Station, it will actually be international in character; Europe, Canada, and Japan, in particular, plan to develop their own hardware components for the Station. As currently visualized, the initial Station will be a 350-foot by 300-foot structure containing four pressurized modules (two for living and two for working), assorted attached pallets for experiments and manufacturing, eight large solar panels for power, communications and propulsion systems, and a robotic manipulator system similar to the shuttle arm. When fully assembled, the initial Station will weigh about 300,000 pounds and carry a crew of six, with a replacement crew brought on board every 90 days. To deliver and assemble the Station's components, 12 shuttle flights will be required over an 18-month period. The pressurized modules used by the Station will be about 14 feet in diameter and 40 feet long to fit in the shuttle's cargo bay.

The Station will circle Earth every 90 minutes at 250-mile altitude and 28.5-degree orbital inclination. Thus the Station will travel only between 28.5 degrees north and south latitude. Unoccupied associate platforms that can be serviced by crews will be in orbits similar to this, as well as in polar orbits circling Earth over the North and South Poles. Polar-orbiting platforms will carry instruments for systems that require a view of the entire globe.

The Station will provide a versatile, multifunctional facility. In addition to providing housing, food, air, and water for its inhabitants, it will be a science laboratory performing scientific studies in astronomy, space plasma physics, Earth sciences (including the ocean and atmosphere), materials research and development, and life sciences. The Station will also be used to improve our space technology capability, including electrical power generation, robotics and automation, life support systems, Earth observation sensors, and communications. The Station will provide a transportation hub for shuttle missions to and from Earth. When the crew is rotated every 90 days, the shuttle will deliver food and water from Earth, as well as materials and equipment for the science laboratories and manufacturing facilities. Finished products and experiment results will be returned to Earth. The Station will be the originating point and destination for flights to nearby platforms and other Earth orbits. The orbital maneuvering vehicle used for these trips will be docked at the Station.

Space tethers have been known in principle for almost 100 years and were crudely tested in two Gemini flights in the 1960s.
They were first seriously proposed for high atmosphere sampling from the shuttle by Italy's Giuseppe Colombo in 1976, which led to a cooperative program between NASA and Italy scheduled to fly in 1988. In the past few years NASA has systematically explored tethering principles in innovative ways for many applications, so the value of space tethers is now becoming clear, and they will be incorporated in several space facilities. Tethers in space capitalize on the fundamental dynamics of bodies moving through central gravity and magnetic fields. They can even provide a pseudo-force field in deep space where none exists.

Energy and momentum can be transferred from a spacecraft being lowered on a tether below the orbital center of mass to another spacecraft being raised on a tether above it, by the principle of conservation of angular momentum of mass in orbit. Upon release, the lower spacecraft will fly to a lower perigee, since it is in a lower energy orbit, while the upper will fly to a higher apogee. Thus, for example, a shuttle departing from a space station can tether downward and then release, reentering the atmosphere without firing its engines, while transferring some energy and momentum to a transfer vehicle leaving the station upward bound for geostationary orbit or the Moon. The result is significant propellant savings for both. Since the transfer and storage of energy and momentum are reversible, outgoing vehicles can be boosted by the slowing of incoming vehicles. This can be applied in Earth orbit, in a lunar transfer station, or even in a two-piece elevator system of tethers on Phobos and Deimos that could greatly reduce propellant requirements for Mars transportation.

The generation of artificial gravity via tethers offers another class of opportunities. Spacecraft in orbit tethered together will experience an artificial gravity proportional to their distance from their common center of mass. Current materials such as Kevlar can support tether lengths of hundreds of miles, allowing controlled gravity fields up to about 0.2g to be generated. By varying tether length, the forces can be set to any level between 0 and 0.2g. This can be used for settling and storing propellants at a space station, for life science research, and for simplifying living and working in space. By deliberately spinning a habitat on a tether only 1,000 feet long, levels of 1g or more can be generated at low revolutions per minute, with low Coriolis forces, to prevent nausea. Long tethers minimize the required mass of the structure, and the synthetic gravity can be altered by varying the spin rate or by reeling tethered counterbalancing masses in or out.

If a tether is made of conducting material in orbit about a planet with a magnetic field (like Earth), it will act as a new type of space electric power generator, obtaining energy directly from the orbital energy of the spacecraft, or from a chemical or ion propellant used to keep the orbit from decaying. If power is driven into the tether instead (from a solar array or other source), it will act as an electric motor, and the spacecraft will change altitude, the tether acting as propellantless propulsion with a specific impulse of above 300,000 seconds. This feature can also be exploited by a tethered spacecraft in Jupiter's strong magnetic field. Propulsion can be provided for maneuvers to visit the Jovian satellites, and very high power can be simultaneously generated for the spacecraft and its transmitters.
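A quick check of the spin-gravity figures quoted above: for a body moving in a circle, the centripetal acceleration is the square of the spin rate times the radius. The short Python sketch below assumes, for illustration, that the habitat rides at one end of the 1,000-foot tether about 500 feet from the center of mass; that geometry is an assumption, not a detail given in the text.

import math

G0 = 9.80665                 # one Earth gravity, m/s^2
radius_m = 500 * 0.3048      # assumed spin radius: ~500 ft, in meters

# a = omega^2 * r, so the spin rate for 1 g is omega = sqrt(g / r).
omega = math.sqrt(G0 / radius_m)            # radians per second
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"1 g at {radius_m:.0f} m requires about {rpm:.1f} rpm")   # ~2.4 rpm

Two to three revolutions per minute is indeed the "low revolutions per minute" regime generally considered tolerable with respect to Coriolis effects.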
As humans move out to settle space, the consequences of long-term exposure to less than Earth's gravity must be fully understood. In our deliberations, the Commission has found a serious lack of data regarding the effects on the health of humans living for long periods of time in low-gravity environments. NASA's experience suggests that the "space sickness" syndrome that afflicts as many as half the astronauts and cosmonauts is fortunately self-limiting. Of continuing concern to medical specialists, however, are the problems of cardiovascular deconditioning after months of exposure to microgravity, the demineralization of the skeleton, the loss of muscle mass and red blood cells, and impairment of the immune response. Space shuttle crews now routinely enter space for periods of seven to nine days and return with no recognized long-term health problems, but these short-term flights do not permit sufficiently detailed investigations of the potentially serious problems. For example, U.S. medical authorities report that Soviet cosmonauts who returned to Earth in 1984 after 237 days in space emerged from the flight with symptoms that mimicked severe cerebellar disease, or cerebellar atrophy. The cerebellum is the part of the brain that coordinates and smooths out muscle movement, and helps create the proper muscle force for the movement intended. These pioneering cosmonauts apparently required 45 days of Earth gravity before muscle coordination allowed them to remaster simple children's games, such as playing catch, or tossing a ring at a vertical peg.

As little as we know about human adaptation to microgravity, we have even less empirical knowledge of the long-term effects of the one-sixth gravity of the Moon, or the one-third gravity of Mars. We need a vigorous biomedical research program, geared to understanding the problems associated with long-term human spaceflight. Our recommended Variable-g Research Facility in Earth orbit will help the Nation accumulate the data needed to support protracted space voyages by humankind and life on worlds with different gravitational forces. We can also expect valuable new medical information useful for Earth-bound patients from this research.

Five U.N. treaties are currently in force regarding activities in space: the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (1967); the Agreement on the Rescue of Astronauts, the Return of Astronauts, and the Return of Objects Launched into Outer Space (1968); the Convention on International Liability for Damage Caused by Space Objects (1972); the Convention on Registration of Objects Launched into Outer Space (1976); and the Treaty on Principles Governing Activities on the Moon and Other Celestial Bodies (1979). The major space nations, including the United States and Soviet Union, have ratified all but the last, which is more commonly referred to as the "Moon Treaty." Only five countries have signed and ratified that agreement. In addition to deliberations at the United Nations, there is an organization called the International Institute of Space Law, which is part of the International Astronautical Federation and provides a forum for discussing space law at its annual meetings. A specific opportunity for global space cooperation will occur in 1992.
Called the International Space Year (ISY), it will take advantage of a confluence of anniversaries in 1992: the 500th anniversary of the discovery of America, the 75th anniversary of the Russian Revolution, and the 35th anniversaries of the International Geophysical Year and the launch of the first artificial satellite, Sputnik 1. During this period, it is also expected that the International Geosphere-Biosphere Program will be in progress, setting the stage for other related space activities. In 1985, Congress approved the ISY concept in a bill that authorizes funding for NASA. The legislation calls on the President to endorse the ISY and consider the possibility of discussing it with other foreign leaders, including the Soviet Union. It directs NASA to work with the State Department and other Government agencies to initiate interagency and international discussions exploring opportunities for international missions and related research and educational activities. As stated by Senator Spark Matsunaga on the tenth anniversary of the historic Apollo-Soyuz Test Project, July 17, 1985: "An International Space Year won't change the world. But at the minimum, these activities help remind all peoples of their common humanity and their shared destiny aboard this beautiful spaceship we call Earth."
Version 1.89.J01 - 3 April 2012

Units is a program for computations on values expressed in terms of different measurement units. It is an advanced calculator that takes care of the units.

Suppose you want to compute the mass, in pounds, of water that fills to the depth of 7 inches a rectangular area 5 yards by 4 feet 3 inches. You recall from somewhere that 1 liter of water has the mass of 1 kilogram. To obtain the answer, you multiply the water's volume by its specific mass. Enter this after You have:

5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter

then enter pounds after You want and hit the Enter key or press the Compute button. The following will appear in the result area:

5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds

You did not have to bother about conversions between yards, feet, inches, liters, kilograms, and pounds. The program did it all for you behind the scenes.

Units supports complicated expressions and a number of mathematical functions, as well as units defined by linear, nonlinear, and piecewise-linear functions. See under Expressions for detailed specifications.

Units has an extensive data base that, besides units from different domains, cultures, and periods, contains many constants of nature, such as:

pi      ratio of circumference to diameter
c       speed of light
e       charge on an electron
h       Planck's constant
force   acceleration of gravity

As an example of using these constants, suppose you want to find the wavelength, in meters, of a 144 MHz radio wave. It is obtained by dividing the speed of light by the frequency. The speed of light is 186282.39 miles/sec, but you do not need to know this exact number. Just press Clear and enter this after You have:

c / 144 MHz

Enter m after You want and hit the Enter key. You will get this result:

c / 144 MHz = 2.0818921 m

Sometimes you may want to express the result as a sum of different units, for example, to find what is 2 m in feet and inches. To try this, press Clear and enter 2 m after You have. Then enter ft;in after You want and hit Enter. You will get this result:

2 m = 6 ft + 6.7401575 in

Other examples of computations:

Feet and inches to metric: 6 ft + 7 in = 200.66 cm
Time in mixed units: 2 years = 17531 hours + 37 min + 31.949357 s
Angle in mixed units: radian = 57 deg + 17 ' + 44.806247 "
Fahrenheit to Celsius: tempF(97) = tempC(36.111111)
Electron flow: 5 mA = 3.1207548e16 e/sec
Energy of a photon: h * c / 5896 angstrom = 2.1028526 eV
Mass to energy: 1 g * c^2 = 21.480764 kilotons tnt
Baking: 2 cups flour_sifted = 226.79619 g
Weight as force: 5 force pounds = 22.241108 newton

You can explore the units data base with the help of the four buttons under the You have field. By entering any string in the You have field and pressing the Search button, you obtain a list of all unit names that contain that string as a substring. For example, if you enter year at You have and press Search, you get a list of about 25 different kinds of year, including marsyear and julianyear. Pressing Definition displays this in the result area:

year = tropicalyear = 365.242198781 day = 31556926 s

which tells you that year is defined as equal to tropicalyear, which is equal to 365.242198781 days or 31556926 seconds. If you now enter tropicalyear at You have and press the Source button, you open a browser on the unit data base at the place containing the definition of tropicalyear. You find there a long comment explaining that unit. You may then freely browse the data base to find other units and facts about them.
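As an aside before continuing with the data base buttons: the conversions Units performs behind the scenes are easy to check by hand. This short Python sketch (a hand check, not part of the program) redoes the water-mass example from the top of this page, using only the standard factors 1 in = 0.0254 m and 1 lb = 0.45359237 kg:

IN_TO_M = 0.0254            # meters per inch
KG_PER_LB = 0.45359237      # kilograms per pound

length_m = 5 * 36 * IN_TO_M         # 5 yards (36 inches per yard)
width_m = (4 * 12 + 3) * IN_TO_M    # 4 feet 3 inches
depth_m = 7 * IN_TO_M               # 7 inches

volume_liters = length_m * width_m * depth_m * 1000.0   # 1000 liters per m^3
mass_kg = volume_liters * 1.0                           # 1 kg per liter of water
print(mass_kg / KG_PER_LB)          # ~2321.54, matching the result above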
Pressing Conformable units will give you a list of all units for measuring the same property as tropicalyear, namely the length of a time interval. The list contains over 80 units.

Instead of the applet shown above, you can use Units as a stand-alone application. As it is written in Java, you can use it under any operating system that supports Java Runtime Environment (JRE) release 1.5.0 or later. To install Units on your computer, download the Java archive (JAR) file that contains the executable Java classes. Save the JAR file in any directory, under any name of your choice, with extension .jar. If your system has an association of .jar files with the javaw command (which is usually set up when you install JRE), just double-click on the JAR file icon. If this does not work, you can type java -jar jarfile at the command prompt, where jarfile is the name you gave to the JAR file. Either way should open the graphic interface of Units, similar to the one at the beginning of this page.

With Units installed on your computer, you can use it interactively from the command line, or invoke it from scripts. It then imitates almost exactly the behavior of GNU Units, from which it has evolved. See under Command interface for details. You can also modify the file that contains unit definitions, or add your own definitions in separate file(s). (The applet can only use its own built-in file.) See under Adding own units for an explanation of how to do it. The complete package containing the JAR and the Java source can be downloaded as a gzipped tar file from the SourceForge project page.

You use expressions to specify computations on physical and other quantities. A quantity is expressed as the product of a numerical value and a unit of measurement. Each quantity has a dimension that is either one of the basic dimensions such as length or mass, or a combination of those. For example, 7 mph is the product of the number 7 and the unit mile/hour; it has the dimension of length divided by time. For a deeper discussion, see the articles on physical quantity and dimensional analysis.

For each basic dimension, Units has one primitive unit: meter for length, gram for mass, second for time, etc. The data base defines each non-primitive unit in such a way that it can be converted to a combination of primitive units. For example, mile is defined as equal to 1609.344 m and hour to 3600 s. Behind the scenes, Units replaces the units you specify by these values, so 7 mph becomes:

7 mph = 7 * mile/hour = 7 * (1609.344*m)/(3600*s) = 3.12928 m/s

This is the quantity 7 mph reduced to primitive units. The result of a computation can, in particular, be reduced to a number, which can be regarded as a dimensionless quantity:

17 m / 120 in = 5.5774278

In your expressions, you can use any units named in the units data base. You find there all standard abbreviations, such as ft for foot, m for meter, or A for ampere. For readability, you may use the plural form of unit names, thus writing, for example, seconds instead of second. If the string you specified does not appear in the data base, Units will try to ignore the suffix s or es. It will also try to remove the suffix ies and replace it by y. The data base also contains some irregular plurals such as feet.

The data base defines all standard metric prefixes as numbers. Concatenating a prefix in front of a unit name means multiplication by that number. Thus, the data base does not contain definitions of units such as milligram or millimeter.
Instead, it defines milli- and m- as prefixes that you can apply to gram, g, meter, or m, obtaining milligram, mm, etc. Only one prefix is permitted per unit, so micromicrofarad will fail. However, micro is a number, so micro microfarad will work and mean .000001 microfarad.

Numbers are written using standard notation, with or without a decimal point. They may be written with an exponent, for example 3.43e-8 to mean 3.43 times 10 to the power of -8.

By writing a quantity as 1.2 meter or 1.2m, you really mean 1.2 multiplied by meter. This is multiplication denoted by juxtaposition. You can use juxtaposition, with or without space, to denote multiplication also in other contexts, whenever you find it convenient. In addition to that, you can indicate multiplication in the usual way by an asterisk (*). Division is indicated by a slash (/) or per. Division of numbers can also be indicated by the vertical dash (|). Examples:

10cm 15cm 1m = 15 liters
7 * furlongs per fortnight = 0.0011641667 m/s
1|2 meter = 0.5 m

The multiplication operator * has the same precedence as / and per; these operators are evaluated from left to right. Multiplication using juxtaposition has higher precedence than * and division. Thus, m/s s/day does not mean (m/s)*(s/day) but m/(s*s)/day = m/(s*s*day), which has the dimension of length per time cubed. Similarly, 1/2 meter means 1/(2 meter) = .5/meter, which is probably not what you would intend. The division operator | has precedence over both kinds of multiplication, so you can write 'half a meter' as 1|2 meter. This operator can only be applied to numbers.

Sums are written with the plus (+) and minus (-). Examples:

2 hours + 23 minutes - 32 seconds = 8548 seconds
12 ft + 3 in = 373.38 cm
2 btu + 450 ft lbf = 2720.2298 J

The quantities which are added together must have identical dimensions. For example, 12 printerspoint + 4 heredium results in this message: Sum of non-conformable values: 0.0042175176 m 20186.726 m^2. Plus and minus can be used as unary operators. Minus as a unary operator negates the numerical value of its operand.

Exponents are specified using the operator ^ or **. The exponent must be a number. As usual, x^(1/n) means the n-th root of x, and x^(-n) means 1/(x^n):

cm^3 = 0.00026417205 gallon
100 ft**3 = 2831.6847 liters
acre^(1/2) = 208.71074 feet
(400 W/m^2 / stefanboltzmann)^0.25 = 289.80881 K
2^-0.5 = 0.70710678

An exponent n or 1/n where n is not an integer can only be applied to a number. You can take the n-th root of a non-numeric quantity only if that quantity is an n-th power:

foot^pi = Non-numeric base, 0.3048 m, for exponent 3.1415927.
hectare**(1/3) = 10000 m^2 is not a cube.

An exponent like 2^3^2 is evaluated right to left. The operators ^ and ** have precedence over multiplication and division, so 100 ft**3 is 100 cubic feet, not (100 ft)**3. On the other hand, they have a lower priority than prefixing and |, so centimeter^3 means cubic centimeter, but centi meter^3 is 1/100 of a cubic meter. The square root of two thirds can be written as 2|3^1|2.

Abbreviation. You may concatenate a one-digit exponent, 2 through 9, directly after a unit name. In this way you abbreviate foot^3 to foot3 and sec^2 to sec2. But beware: $ 2 means two dollars, but $2 means one dollar squared.

Units provides a number of functions that you can use in your computation. You invoke a function in the usual way, by writing its name followed by the argument in parentheses. Some of them are built into the program, and some are defined in the units data base.
The built-in functions include sin, cos, tan, their inverses asin, acos, atan, and:

ln        natural logarithm
log       base-10 logarithm
log2      base-2 logarithm
exp       exponential
sqrt      square root, sqrt(x) = x^(1/2)
cuberoot  cube root, cuberoot(x) = x^(1/3)

The argument of sin, cos, and tan must be a number or an angle. They return a number. The argument of asin, acos, atan, ln, log, log2, and exp must be a number. The first three return an angle and the rest return a number. The argument of sqrt and cuberoot must be a number, or a quantity that is a square or a cube. Among the functions defined in the units data base are:

circlearea   area of circle with given radius
pH           converts pH value to moles per liter
tempF        converts temperature Fahrenheit to temperature Kelvin
wiregauge    converts wire gauge to wire thickness

Most of them are used to handle nonlinear scales, as explained under Nonlinear measures. By preceding a function's name with a tilde (~) you obtain an inverse of that function:

circlearea(5cm) = 78.539816 cm^2
~circlearea(78.539816 cm^2) = 5 cm
pH(8) = 1.0E-8 mol/liter
~pH(1.0E-8 mol/liter) = 8
tempF(97) = 309.26111 K
~tempF(309.26111 K) = 96.999998
wiregauge(11) = 2.3048468 mm
~wiregauge(2.3048468 mm) = 11

The following table summarizes all operators in order of precedence, from highest to lowest:

prefix
concatenated exponent
number division | (left to right)
unary + -
exponent ^ ** (right to left)
multiplication by juxtaposition (left to right)
multiplication and division * / per (left to right)
sum + - (left to right)

A plus or minus is treated as unary only if it comes first in the expression or follows any of the operators ^, **, *, /, per, +, or -. Thus, 5 -2 is interpreted as '5 minus 2', and not as '5 times -2'. Parentheses can be applied in the usual way to indicate the order of evaluation.

The syntax of expressions is defined as follows. Phrases and symbols in quotes represent themselves, | means 'or', ? means optional occurrence, and * zero or more occurrences.

expr     = term (('+' | '-') term)* | ('/' | 'per') product
term     = product (('*' | '/' | 'per') product)*
product  = factor factor*
factor   = unary (('^' | '**') unary)*
unary    = ('+' | '-')? primary
primary  = unitname | numexpr | bfunc '(' expr ')' | '~'? dfunc '(' expr ')' | '(' expr ')'
numexpr  = number ('|' number)*
number   = mantissa exponent?
mantissa = '.' digits | digits ('.' digits?)?
exponent = ('e' | 'E') sign? digits
unitname = unit name with optional prefix, suffix, and/or one-digit exponent
bfunc    = built-in function name: sqrt, cuberoot, sin, cos, etc.
dfunc    = defined function name

Names of syntactic elements shown above may appear in error messages that you receive if you happen to enter an incorrect expression. For example:

You have: 1|m
After '1|': expected number.

You have: cm^per $
After 'cm^': expected unary.

You have: 3 m+*lbf
After '3 m+': expected term.

Spaces are in principle ignored, but they are often required in multiplication by juxtaposition. For example, writing newtonmeter will result in the message Unit 'newtonmeter' is unknown; you need a space in the product newton meter. To avoid ambiguity, a space is also required before a number that follows another number. Thus, an error will be indicated after 1.2 in 1.2.3. Multiplication by juxtaposition may also result in another ambiguity. As e is a small unit of charge, an expression like 3e+2C can be regarded as meaning (3e+2)*C or (3*e)+(2*C). This ambiguity is resolved by always including as much as possible in a number. In the Overview, it was shown how you specify the result by entering a unit name at You want.
In fact, you can enter there any expression specifying a quantity with the same dimension as the expression at You have:

You have: 10 gallons
You want: 20 cm * circlearea(5cm)
10 gallons = 24.09868 * 20 cm * circlearea(5cm)

This tells you that you can almost fit 10 gallons of liquid into 24 cans of diameter 10 cm and 20 cm tall. However:

You have: 10 gallons
You want: circlearea(5cm)
Conformability error
10 gallons = 0.037854118 m^3
circlearea(5cm) = 0.0078539816 m^2

Some units, like radian and steradian, are treated as dimensionless and equal to 1 if it is necessary for conversion. For example, power is equal to torque times angular velocity. The dimension of the expression at You have below is kg m^2 radian/s^3, and the dimension of watt is kg m^2/s^3. The computation is made possible by treating radian as dimensionless:

You have: (14 ft lbf) (12 radians/sec)
You want: watts
(14 ft lbf) (12 radians/sec) = 227.77742 watts

Note that dimensionless units are not treated as dimensionless in other contexts. They cannot be used as exponents, so, for example, meter^radian is not allowed.

You can also enter at You want an expression with a dimension that is the inverse of that at You have:

You have: 8 liters per 100 km
You want: miles per gallon
reciprocal conversion
1 / 8 liters per 100 km = 29.401823 miles per gallon

Here, You have has the dimension of volume divided by length, while the dimension of You want is length divided by volume. This is indicated by the message reciprocal conversion, and by showing the result as equal to the inverse of You have.

You may enter at You want the name of a function, without argument. This will apply the function's inverse to the quantity from You have:

You have: 30 cm^2
You want: circlearea
30 cm^2 = circlearea(0.030901936 m)

You have: 300 K
You want: tempF
300 K = tempF(80.33)

Of course, You have must specify the correct dimension:

You have: 30 cm
You want: circlearea
Argument 0.3 m of function ~circlearea is not conformable to 1 m^2.

If you leave the You want field empty, you obtain the quantity from You have reduced to primitive units:

You have: 7 mph
You want:
3.12928 m / s

To express the result as a sum of several units, enter at You want a list of expressions separated by semicolons:

You have: 2 m
You want: ft;in;1|8 in
2 m = 6 ft + 6 in + 5.9212598 * 1|8 in

Note that you are not limited to unit names, but can use expressions like 1|8 in above. The first unit is subtracted from the given value as many times as possible, then the second from the rest, and so on; finally, the rest is converted exactly to the last unit in the list.

Ending the unit list with ';' separates the integer and fractional parts of the last coefficient:

You have: 2 m
You want: ft;in;1|8 in;
2 m = 6 ft + 6 in + 5|8 in + 0.9212598 * 1|8 in

Ending the unit list with ';;' results in rounding the last coefficient to an integer:

You have: 2 m
You want: ft;in;1|8 in;;
2 m = 6 ft + 6 in + 6|8 in (rounded up to nearest 1|8 in)

Each unit on the list must be conformable with the first one on the list, and with the one you entered at You have:

You have: meter
You want: ft;kg
Invalid unit list. Conformability error:
ft = 0.3048 m
kg = 1 kg

You have: meter
You want: lb;oz
Conformability error
meter = m
lb = 0.45359237 kg

Of course, you should list the units in decreasing order; otherwise, the result may not be very useful:

You have: 3 kg
You want: oz;lb
3 kg = 105 oz + 0.051367866 lb

A unit list such as

cup;1|2 cup;1|3 cup;1|4 cup;tbsp;tsp;1|2 tsp;1|4 tsp

can be tedious to enter.
Units provides shorthand names for some common combinations:

hms    hours, minutes, seconds
dms    angle: degrees, minutes, seconds
time   years, days, hours, minutes and seconds
usvol  US cooking volume: cups and smaller

Using these shorthands, or unit list aliases, you can do the following conversions:

You have: anomalisticyear
You want: time
1 year + 25 min + 3.4653216 sec

You have: 1|6 cup
You want: usvol
2 tbsp + 2 tsp

You cannot combine a unit list alias with other units: it must appear alone at You want.

Some measures cannot be expressed as the product of a number and a measurement unit. Such measures are called nonlinear. An example of a nonlinear measure is the pH value used to express the concentration of a certain substance in a solution. It is a negative logarithmic measure: a tenfold increase of concentration decreases the pH value by one. You convert between pH values and concentration using the function pH mentioned under Functions:

You have: pH(6)
You want: micromol/gallon
pH(6) = 3.7854118 micromol/gallon

For conversion in the opposite direction, you use the inverse of pH, as described under Specifying result:

You have: 0.17 micromol/cm^3
You want: pH
0.17 micromol/cm^3 = pH(3.7695511)

Other examples of nonlinear measures are the various "gauges". They express the thickness of a wire, plate, or screw by a number that is not obviously related to the familiar units of length. (Use the Search button on gauge to find them all.) Again, they are handled by functions that convert the gauge to units of length:

You have: wiregauge(11)
You want: inches
wiregauge(11) = 0.090742002 inches

You have: 1mm
You want: wiregauge
1mm = wiregauge(18.201919)

The most common example of a nonlinear measure is the temperature indicated by a thermometer, or absolute temperature: you cannot really say that it becomes two times warmer when the thermometer goes from 20°F to 40°F. Absolute temperature is expressed relative to an origin; such a measure is called affine. To handle absolute temperatures, Units provides functions such as tempC and tempF that convert them to degrees Kelvin. (Other temp functions can be found using the Search button.) The following shows how you use these functions to convert absolute temperatures:

You have: tempC(36)
You want: tempF
tempC(36) = tempF(96.8)

meaning that 36°C on a thermometer is the same as 96.8°F. You can think of pH(6), wiregauge(11), tempC(36), or tempF(96.8) not as functions but as readings on the scale pH, wiregauge, tempC, or tempF, used to measure some physical quantity. You can read the examples above as: 'what is 0.17 micromol/cm^3 on the pH scale?', or 'what is 1 mm on the wiregauge scale?', or 'what is the tempF reading corresponding to 36 on the tempC scale?'

Note that absolute temperature is not the same as temperature difference, in spite of their units having the same names. The latter is a linear quantity. Degrees Celsius and degrees Fahrenheit for measuring temperature difference are defined as the linear units degC and degF. They are converted to each other in the usual way:

You have: 36 degC
You want: degF
36 degC = 64.8 degF

Some units have different values in different locations. The localization feature accommodates this by allowing the units database to specify region-dependent definitions. In the database, the US units that differ from their British counterparts have names starting with us: uston, usgallon, etc. The corresponding British units are brton, brgallon, etc. When using Units, you can specify en_US or en_GB as the 'locale'.
Each of them activates a portion of the database that defines short aliases for these names. Thus, specifying en_US as the locale activates these aliases:

ton = uston
gallon = usgallon
etc.

while en_GB activates these:

ton = brton
gallon = brgallon
etc.

The US Survey foot, yard, and mile can be obtained by using the US prefix. These units differ slightly from the international length units. They were in general use until 1959, and are still used for geographic surveys. The acre is officially defined in terms of the US Survey foot. If you want an acre defined according to the international foot, use intacre. The difference between these units is about 4 parts per million. The British also used a slightly different length measure before 1959. These can be obtained with the prefix UK.

The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. (You can extract it from there using the jar tool of Java.) If you want to add your own units, you can write your own units file. See how to do it under Writing units file. If you place that file in your home directory under the name units.dat, it will be read after the default units file. You may also supply one or more unit files of your own and access them using the property list or a command option. In each case, you specify the order in which Units will read them. If a unit with the same name is defined more than once, Units will use the last definition that it encounters. Note that adding your own unit files is possible only if you run Units from a downloaded JAR file. An applet can only use the default units.dat file.

If you want, you may run Units from the command line. It then imitates almost exactly the behavior of GNU Units. (The differences are listed under What is different.) To use the command line interface, you need to download the Java archive (JAR) file that contains the executable classes and the data file. You can save the JAR in any directory of your choice, and give it any name compatible with your file system. The following assumes that you saved the JAR file under the name jarfile. It also assumes that you have a Java Runtime Environment (JRE) version 1.5.0 or later that is invoked by typing java at your shell prompt. To start the program, type

java -jar jarfile -i

or

java -jar jarfile options

at your shell prompt. The program will print something like this:

2192 units, 71 prefixes, 32 nonlinear units
You have:

At the You have prompt, type the expression you want to evaluate. Next, Units will print You want. There you tell how you want your result, in the same way as in the graphical interface. See under Expressions and Specifying result. As an example, suppose you just want to convert ten meters to feet. Your dialog will look like this:

You have: 10 meters
You want: feet
* 32.8084
/ 0.03048

The answer is displayed in two ways. The first line, which is marked with a * to indicate multiplication, says that the quantity at You have is 32.8084 times the quantity at You want. The second line, marked with a / to indicate division, gives the inverse of that number. In this case, it tells you that 1 foot is equal to about 0.03 dekameters (dekameter = 10 meters). It also tells you that 1/32.8 is about .03. Units prints the inverse because sometimes it is a more convenient number.
For example, if you try to convert grains to pounds, you will see the following:

You have: grains
You want: pounds
* 0.00014285714
/ 7000

From the second line of the output you can immediately see that a grain is equal to a seven-thousandth of a pound. This is not so obvious from the first line of the output. If you find the output format confusing, try using the -v ('verbose') option, which gives:

You have: 10 meters
You want: feet
10 meters = 32.8084 feet
10 meters = (1 / 0.03048) feet

You can suppress printing of the inverse using the -1 ('one line') option. Using both -v and -1 produces the same output as the graphical interface:

You have: 5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter
You want: pounds
5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds

If you request a conversion between units which measure reciprocal dimensions, Units will display the conversion results with an extra note indicating that reciprocal conversion has been done:

You have: 6 ohms
You want: siemens
reciprocal conversion
* 0.16666667
/ 6

Again, you may use the -v option to get more comprehensible output:

You have: 6 ohms
You want: siemens
reciprocal conversion
1 / 6 ohms = 0.16666667 siemens
1 / 6 ohms = (1 / 6) siemens

When you specify compact output with -c, you obtain only the conversion factors, without indentation:

You have: meter
You want: yard
1.0936133
0.9144

When you specify compact output and perform conversion to mixed units, you obtain only the conversion factors separated by semicolons. Note that unlike the case of regular output, zeros are included in this output list:

You have: meter
You want: yard;ft;in
1;0;3.3700787

If you only want to find the reduced form or definition of a unit, simply press return at the You want prompt. For example:

You have: 7 mph
You want:
3.12928 m/s

You have: jansky
You want:
Definition: jansky = fluxunit = 1e-26 W/m^2 Hz = 1e-26 kg / s^2

The definition is shown if you entered a unit name at the You have prompt. The example indicates that jansky is defined as equal to fluxunit, which in turn is defined to be a certain combination of watts, meters, and hertz. The fully reduced form appears on the far right.

If you type ? at the You want prompt, the program will display a list of named units which are conformable with the unit that you entered at the You have prompt. Note that conformable unit combinations will not appear on this list.

Typing help at either prompt displays a short help message. You can also type help followed by a unit name. This opens a window on the units file at the point where that unit is defined. You can read the definition and comments that may give more details or historical information about the unit.

Typing search followed by some text at either prompt displays a list of all units whose names contain that text as a substring, along with their definitions. This may help in the case where you aren't sure of the right unit name.

To end the session, type quit at either prompt, or press the Enter (Return) key at the You have prompt.

You can use Units to perform computations non-interactively from the command line. To do this, type

java -jar jarfile [options] you-have [you-want]

at your shell prompt. (You will usually need quotes to protect the expressions from interpretation by the shell.) For example, if you type

java -jar jarfile "2 liters" "quarts"

the program will print

* 2.1133764
/ 0.47317647

and then exit. If you omit you-want, Units will print out the definition of the specified unit.
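For instance, entering only a unit name as you-have should print its definition, in the same format as in the interactive session above (a hypothetical invocation; jarfile again stands for the name you gave the JAR file):

java -jar jarfile "jansky"
Definition: jansky = fluxunit = 1e-26 W/m^2 Hz = 1e-26 kg / s^2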
The following options allow you to use alternative units file(s), check your units file, or change the output format:

The Java imitation is not an exact port of the original GNU units. The following is a (most likely incomplete) list of differences.

You can supply some parameters to Units by setting up a Property list. It is a file named units.opt, placed in the same directory as the JAR file. It may look like this:

GUIFONT = Lucida
ENCODING = Cp850
LOCALE = en_GB
UNITSFILE = ; c:\\Java\\gnu\\units\\my.dat

The options -e, -f, -g, and -l specified on the command line override settings from the Property list.

You embed a Units applet in a Web page by means of this tag:

<APPLET CODE="units.applet.class" ARCHIVE="http://units-in-java.sourceforge.net/Java-units.1.89.J01.jar" WIDTH=500 HEIGHT=400>
<PARAM NAME="LOCALE" VALUE="locale">
<PARAM NAME="GUIFONT" VALUE="fontname">
</APPLET>

Notice that because an applet cannot access any files on your system, you can use only the default units file packaged in the JAR file. You may view the source of this page for an example of a Web page with an embedded Units applet.

The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. This section tells you how to write your own units file that you can use together with, or instead of, the default file, as described under Adding own units.

The file has to use the UTF-8 character encoding. Since the ASCII characters appear the same in all encodings, you do not need to worry about UTF-8 as long as your definitions use only these characters. Each definition occupies one line, possibly continued by the backslash character (\) that appears as the last character. Comments start with a # character, which can appear anywhere in a line. Following #, the comment extends to the end of the line. Empty lines are ignored.

A unit is specified on a single line by giving its name followed by at least one blank, followed by the definition. A unit name must not contain any of the characters + - * / | ^ ( ) ; #. It cannot begin with a digit, underscore, tilde, decimal point, or comma. It cannot end with an underscore, decimal point, or comma. If a name ends in a digit other than zero or one, the digit must be preceded by a string beginning with an underscore, and afterwards consisting only of digits, decimal points, or commas. For example, NO_2, foo_2,1 or foo_3.14 would be valid names, but foo2 or foo_a2 would be invalid.

The definition is either an expression, defining the unit in terms of other units, or ! indicating a primitive unit, or !dimensionless indicating a dimensionless primitive unit. Be careful to define new units in terms of old ones so that a reduction leads to the primitive units. You can check this using the -C option. See under Checking your definitions. Here is an example of a short units file that defines some basic units:

m       !               # The meter is a primitive unit
sec     !               # The second is a primitive unit
rad     !dimensionless  # A dimensionless primitive unit
micro-  1e-6            # Define a prefix
minute  60 sec          # A minute is 60 seconds
hour    60 min          # An hour is 60 minutes
inch    0.0254 m        # Inch defined in terms of meters
ft      12 inches       # The foot defined in terms of inches
mile    5280 ft         # And the mile

A unit which ends with a - character is a prefix. If a prefix definition contains any / characters, be sure they are protected by parentheses. If you define half- 1/2 then halfmeter would be equivalent to 1 / 2 meter.
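Written with parentheses, such a prefix behaves as intended (an illustrative definition, not one from the standard file):

half-   (1/2)           # parentheses keep the division together, so halfmeter = 0.5 meter

Because of the parentheses, the prefix is the single number 0.5 rather than a pending division.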
Here is an example of a function definition:

tempF(x) [1;K] (x+(-32)) degF + stdtemp ; (tempF+(-stdtemp))/degF + 32

The definition begins with the function name followed immediately (with no spaces) by the name of the parameter in parentheses. Both names must follow the same rules as unit names. Next, in brackets, is a specification of the units required as arguments by the function and its inverse. In the example above, the tempF function requires an input argument conformable with 1. The inverse function requires an input argument conformable with K. Note that this is also the dimension of the function's result. Next come the expressions to compute the function and its inverse, separated by a semicolon. In the example above, the tempF function is computed as

tempF(x) = (x+(-32)) degF + stdtemp

The inverse has the name of the function as its parameter. In our example, the inverse is

~tempF(tempF) = (tempF+(-stdtemp))/degF + 32

This inverse definition takes an absolute temperature as its argument and converts it to the Fahrenheit temperature. The inverse can be omitted by leaving out the ; character, but then conversions to the unit will be impossible.

If you wish to make synonyms for nonlinear units, you still need to define both the forward and inverse functions. So to create a synonym for tempF you could write

fahrenheit(x) [1;K] tempF(x) ; ~tempF(fahrenheit)

The example below is a function to compute the area of a circle. Note that this definition requires a length as input and produces an area as output, as indicated by the specification in brackets.

circlearea(r) [m;m^2] pi r^2 ; sqrt(circlearea/pi)

An empty or omitted argument specification means that Units will not check the dimension of the argument you supply. Anything compatible with the specified computation will work. For example:

square(x) x^2 ; sqrt(square)

square(5) = 25
square(2m) = 4 m^2

Some functions cannot be computed using an expression. You then have the possibility of defining such a function by a piecewise linear approximation. You provide a table that lists values of the function for selected values of the argument. The values for other arguments are computed by linear interpolation. An example of a piecewise linear function is:

zincgauge[in] 1 0.002, 10 0.02, 15 0.04, 19 0.06, 23 0.1

In this example, zincgauge is the name of the function. The unit in square brackets applies to the result. The argument is always a number. No spaces can appear before the ] character, so a definition like foo[kg meters] is illegal; instead write foo[kg*meters]. The definition is a list of pairs optionally separated by commas. Each pair defines the value of the function at one point. The first item in each pair is the function argument; the second item is the value of the function at that argument (in the units specified in brackets). In this example, you define zincgauge at five points. We thus have zincgauge(1) = 0.002 in. Definitions like this may be more readable if written using continuation characters as

zincgauge[in] \
     1 0.002  \
    10 0.02   \
    15 0.04   \
    19 0.06   \
    23 0.1

If you define a piecewise linear function that is not strictly monotone, the inverse will not be well defined. In such a case, Units will return the smallest inverse. Unit list aliases are treated differently from unit definitions, because they are a data entry shorthand rather than a true definition for a new unit.
A unit list alias definition begins with !unitlist and includes the alias and the definition; for example, the aliases included in the standard units data file are:

!unitlist hms hr;min;sec
!unitlist time year;day;hr;min;sec
!unitlist dms deg;arcmin;arcsec
!unitlist ftin ft;in;1|8 in
!unitlist usvol cup;3|4 cup;2|3 cup;1|2 cup;1|3 cup;1|4 cup;\
          tbsp;tsp;1|2 tsp;1|4 tsp;1|8 tsp

Unit list aliases are only for unit lists, so the definition must include a ';'. Unit list aliases can never be combined with units or other unit list aliases, so the definition of time shown above could not have been shortened to year;day;hms. As usual, be sure to run Units with the option -C to ensure that the units listed in unit list aliases are conformable.

A locale region in the units file begins with !locale followed by the name of the locale. The locale region is terminated by !endlocale. The following example shows how to define a couple of units in a locale.

!locale en_GB
ton brton
gallon brgallon
!endlocale

A file can be included by giving the command !include followed by the full path to the file.

You are advised to check a new or modified units file by invoking Units from the command line with the option -C. Of course, the file must be made available to Units as described under Adding own units. The option will check that the definitions are correct, and that all units reduce to primitive ones. If you have created a loop in the unit definitions, Units will hang when invoked with the -C option. You will then need to use the combined option -Cv, which prints out each unit as it checks it. The program will still hang, but the last unit printed will be the unit which caused the infinite loop. If the inverse of a function is omitted, the -C option will display a warning. It is up to you to calculate and enter the correct inverse function to obtain proper conversions. The -C option tests the inverse at one point and prints an error if it is not valid there, but this is not a guarantee that your inverse is correct. The -C option will also print a warning if a non-monotone piecewise linear function is encountered.

Units works internally with double-byte Unicode characters. The unit data files use the UTF-8 encoding. This enables you to use Unicode characters in unit names. However, you cannot always access them. The graphical interface of Units can display all characters available in its font. Those not available are shown as empty rectangles. The default font is Monospaced. It is a so-called logical font, or a font family, with different versions depending on the locale. It usually contains all the national characters and much more, but far from all of Unicode. You may specify another font by using the property GUIFONT, an applet parameter, or the command option -g. You can enter into the Units window all characters available at your keyboard, but there is no facility to enter any other Unicode characters.

The treatment of Unicode characters at the command interface depends on the operating system and the Java installation. The operating system may use a character encoding different from the default set up for the Java Virtual Machine (JVM). As a result, names such as ångström typed in the command window are not recognized as unit names. If you encounter this problem, and know the encoding used by the system, you can identify the encoding to Units with the help of the property ENCODING or the command option -e. (In Windows XP, you can find the encoding using the command chcp.
In one case investigated by the author, the encoding was Cp437, while the JVM default was Cp1252.)

The units.dat file supplied with Units contains the commands !utf8 and !endutf8. This is so because it is taken unchanged from GNU units. The commands enclose the portions of the file that use non-ASCII characters so they can be skipped in environments that do not support UTF-8. Because Java always supports the UTF-8 encoding for input files, the commands are ignored in Units.

The program documented here is a Java development of GNU Units 1.89e, a program written in C by Adrian Mariano ([email protected]). The file units.dat containing the units data base was created by Adrian Mariano, and is maintained by him. The package contains the latest version obtained from the GNU Units repository.

GNU Units copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2010, 2011 by Free Software Foundation, Inc. Java version copyright © 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 by Roman Redziejowski.

The program is free software: you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. The program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

This Web page copyright © 2012 by Roman Redziejowski. The author gives unlimited permission to copy, translate and/or distribute this document, with or without modifications, as long as this notice is preserved, and information is provided about any changes. Substantial parts of this text have been taken, directly or modified, from the manual Unit Conversion, edition 1.89g, written by Adrian Mariano, copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2011 by Free Software Foundation, Inc., under a written permission contained in that document.
Arithmetic, Deferred Execution, and Input

You are now on speaking terms with your computer. The next task is to learn a few simple expressions, the computer equivalent of "My name is Fred", "Does this bus go to Notre Dame?", or "Where is the bathroom?". This chapter will introduce you to three absolutely fundamental facets of computing: arithmetic, deferred execution, and input. We begin with arithmetic.

Many people mistakenly think that performing arithmetic computations is the prime function of a computer. In truth, computers spend most of their time doing far less exalted work: moving bits of information around from one place to another, painstakingly examining huge piles of data for those few scraps of data that are just what the user ordered, or rewriting the data in a form that is easier for the user to appreciate. Nevertheless, arithmetic is an excellent topic to begin studying because it is familiar to people. If you can do arithmetic on a calculator, you can do arithmetic on a computer. In fact, it’s even easier on the computer. Try this with your computer:

PRINT 3*4

The computer will type under your command the answer, 12, so quickly that you might suspect that it’s up to some trickery. OK, type in some different numbers. Use some big, messy numbers like 3254 or 17819. The general rule is: first, type the word PRINT in capital letters. Then put a space. Then the first number, an asterisk to mean "multiply", and then the second number. If you make a mistake, use the BackSpace key to go back over the mistake, then type it over. When you have it right, press the RETURN key.

You may get a few minor items wrong. For example, when you used a number like 3254, did you type it as 3254 or as 3,254? That comma in between the 3 and the 2 will generate a syntax error. It may seem picayune, but I warned you that computers have no sense of context. Because commas are so small and hard to notice, they cause more syntax errors than any other character. So watch your commas!

The spaces are also important. Some versions of BASIC use a space as a "delimiter". A delimiter is a marker that tells you where the end of one word is and where the beginning of the next word is. It may seem silly untilyoutrytoreadabunchofwordswithoutanydelimitersatall. So give the computer a break and give it spaces where it needs them. B u t d o n ’ t p u t i n e x t r a s p a c e s o r t h e c o m p u t e r w i l l g e t v e r y c o n f u s e d , O K ?

There is no reason why you have to restrict yourself to multiplication. If you wish, you can do addition, subtraction, or division just as easily. The symbol for multiplication is an asterisk: *. The symbol for addition is a plus sign: +. The symbol for subtraction is a minus sign: -. And the symbol for division is a slash: /. With division, the computer will divide the first number by the second number. With subtraction, the computer will subtract the second number from the first number. So to subtract 551 from 1879 you type:

PRINT 1879-551

To divide 18 by 3 type:

PRINT 18/3

But what if you want to do more complex calculations? Suppose, for example, that you want to add 8 to 12 and divide the sum by 4. The first idea that comes to most people’s minds is to type:

PRINT 8+12/4

which will yield a result of 11. Why? Because this command is ambiguous. I told the computer to do two operations -- an addition and a division. Which one did I want done first? It makes a difference! The way I described the problem, I wanted the addition done first, then the division. Instead, the computer did the division first, dividing 12 by 4 to get 3.
Then it added 3 to 8 to get 11. If it had done what I wanted it to do, it would have added 8 to 12 to get 20, then divided the 20 by 4 to get 5. Quite a mixup, yes?

How does one avoid mixups like this? The primary means is through an idea called "operator precedence". This is a big phrase that means very little. Whenever we have a situation in which two operators (an operator is one of the four arithmetic operation symbols: +, -, *, or /) vie for precedence, we automatically yield to the * or the /. It’s one of those arbitrary rules of the road like "Y’all drive on the right side of the road, y’hear?" Thus, in our example above, the computer gave precedence to the division operation over the addition operation, and performed the division first.

If you are a reasonable and thoughtful person, you probably have two quick objections to this system of operator precedence. First, you might wonder what happens when two operators with equal precedence contest each other. Who wins? Well, it turns out that it doesn’t really matter. For example, if I type:

PRINT 3+4-2

it doesn’t matter one bit whether the addition or the subtraction is done first. Try it. 3+4 is 7; subtract 2 gives 5. If you do it backwards, 4-2 is 2; add 3 gives 5. See? It doesn’t matter what order you do them in. The same thing applies to multiplication and division:

PRINT 3*4/2

If we do the multiplication first, we get 3*4 is 12; divide by 2 gives 6. If we do the division first, then we get 4/2 is 2; multiply by 3 gives 6. So operator precedence doesn’t matter with operators of equal precedence.

Your second objection might be, "OK, how do we get the computer to do the calculation that we really wanted?" In other words, how do we get the computer to add 8 to 12 before it divides by 4? The answer is to bring in a new concept, the parenthesis pair. If you want a particular operation done first, bundle it up with a pair of parentheses, like so:

PRINT (8+12)/4

I always imagine parentheses as a pair of protective arms huddling two numbers together, protecting them from the cold winds of operator precedence. In our example, that 12 belongs with the 8, not the 4, but the cruel computer would tear our hapless 12 away from the 8 and mate it in unholy union with the 4. The parentheses become like the bonds of true love, protecting and preserving relationships that a cold set of rules would violate. To adapt a phrase, "Parenthesis conquers all." So much for ridiculous metaphors.

You can use parentheses to build all sorts of intricate arithmetic expressions. You can pile parentheses on top of parentheses to get ever more complex expressions. Here is an example:

PRINT (((3+4)/7)+((6-2)/2))

What does this mess mean? The way to decode a monstrosity like this is to start with the innermost operation(s) and work outward. In this example, the 3+4 is an innermost operation, and so is the 6-2. They are innermost because no parentheses serve to break up the computation. If you were to mentally perform these operations, you would see that the big long command is equivalent to:

PRINT (((7)/7)+((4)/2))

All I did to get this was to replace the "3+4" with a "7", and replace the "6-2" with a "4". Now notice that both the 7 and the 4 are surrounded by a complete pair of parentheses. Now, a pair of parentheses around one single number is a waste of time, because you don’t need to protect a solitary number from anything. Remember, parentheses protect relationships, not numbers. Having a pair of parentheses around a number is like putting a paperclip on a single piece of paper.
So let’s get rid of those excess parentheses:

PRINT ((7/7)+(4/2))

Now we have another pair of uncluttered operations: 7/7 and 4/2. Let’s make them come true:

PRINT ((1)+(2))

Well, gee, now we have more numbers floating inside extraneous parentheses. Out go the extra parentheses:

PRINT (1+2)

Now we’re getting so close we can smell it. Finish up the operation:

PRINT (3)

Clear out the parentheses:

PRINT 3

And there is the answer: 3.

This long exercise shows how the computer figures out a long and messy pile of parentheses. How do you create such a pile? There is no specific answer to this question, no cookbook for building expressions. I can give you a few guidelines that will make the effort easier. First, when in doubt, use parentheses. Whenever you want to make sure that a pair of numbers are calculated first, group them together with a pair of parentheses. Using extra parentheses is like using extra paper clips: it is a little wasteful but it doesn’t hurt, and if it gives you some insurance, do it. Second, always count your parentheses to make sure they balance. If you have five right parentheses, then you must have five left parentheses -- no more, no less. If your parentheses don’t balance, you will generate a syntax error.

Congratulations! All of this learning has catapulted you to the level at which you can use your expensive computer as a $10 calculator. If you are willing to continue, I can now show you an idea that will take you a little further than you could go with a calculator. It is the concept of indirection as expressed in the idea of a variable.

Indirection is one of the most important concepts associated with computers. It is absolutely essential that you understand indirection if you are to write any useful programs. More important, indirection is a concept that can be applied to many real-world considerations. In the simplest case of indirection, we learn to talk not of a number itself, but of a box that holds the number, whatever it might be. The box is given a name so that we can talk about it. For example, try this command on your computer:

FROGGY=12

This command does two actions: first, it creates a box -- a variable -- that we will call "FROGGY"; second, it puts the number 12 into this box. From here on, we can talk about FROGGY instead of talking about 12. You might wonder, why do we need code words for simple numbers? If I want to mess around with the number 12, why don’t I just say 12, instead of going through all this mumbo-jumbo about FROGGY? The trick lies in the realization that the actual value at any given instant is not the essence of the thing. For example, suppose we talked about a different number: the time. Let’s say that you and I are having a conversation about time. You say, "What time is it?" I say, "The time is 1:22:30." That number, 1:22:30, is formatted in a strange way, but you have to admit that it is a bona fide number. Thereafter, whenever you think of time, do you think of 1:22:30? Of course not. Time is a variable whose value was 1:22:30 for one second. When we think about time, we don’t fixate on the number 1:22:30; we instead think of time as a variable that can take many different values. This is the essence of a variable: something that could be any of many different numbers, but at any given time has exactly one number. For example, your speed is a variable: sometimes you are going 55 mph and sometimes you are going 0 mph. Your bank balance is a variable: one day it might be $27.34 and another day it might be $5327.34.
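If you would like a taste of why this matters, try giving FROGGY a new value and repeating the very same command (the numbers here are arbitrary):

FROGGY=12
PRINT FROGGY*3
FROGGY=55
PRINT FROGGY*3

The two PRINT commands are identical, yet the first prints 36 and the second prints 165, because the value sitting inside the box changed in between.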
The importance of indirection is that it allows us to focus our attention on grander relationships. Do you remember your grammar school arithmetic exercises: "You are traveling at 40 mph. How far can you travel in two hours?" This is a simple arithmetic problem, but behind it lies a much more interesting and powerful concept. It lies in the equation

distance = speed * time

The big idea here is that this equation only makes sense if you forget the petty details of exactly what the speed is, and what the time is. It is true whatever the speed is, and whatever the time is. When we use an equation like this, we transcend the petty world of numbers and focus our attention on grander relationships between real-world concepts. If, to understand this equation, you must use examples ("Well, 40 mph for 2 hours gives 80 miles"), then you have not fully grasped the concept of indirection. Examples are a useful means of introducing you to the concept, of supporting your weight as you learn to walk, but the time must come when you unbolt the trainer wheels and think in terms of the relationship itself, not merely its application in a few examples. Variables are the means for doing this.

There is an experimental effort underway at some computer science laboratories to develop a computer language in which the user is not required to think in terms of indirection. It is called "programming by example", and is a total perversion of the philosophy of computing. The user of such languages does not describe concepts and relationships in their true form; instead, he provides many examples of their effects. The computer then draws inferences for the user and engages in the indirection itself. In an extreme application of this philosophy, the user would not tell the computer that "distance = speed * time". Instead, the user would tell the computer that "When the speed was 40 mph and the time was 2 hours, the distance was 80 miles; when the speed was 20 mph and the time was 1 hour, the distance was 20 miles." After the user succeeds in listing enough examples, the computer is able to infer the correct relationship.

Programming by example appears to be a new application of artificial intelligence that will make computers more accessible to users by allowing them to program the computers in simple terms, without being forced to think in terms of grand generalities. In truth, it is a step backwards, for it reverses the relationship between human and computer. It forces the human to do the drudge work, listing lots of petty examples, while the computer engages in the exalted thinking. The proper relationship between human and computer makes the human the thinker and the computer the drudge. To realize this relationship, you must have the courage to use your mind, to think in larger terms of relationships between variables, not merely individual numbers. Unless, of course, you enjoy being a drudge.

The concept of indirection is not confined to mathematical contexts. We use indirection in our language all the time. When we say, "Children look like their parents", we are making a general statement about the nature of human beings. Only the most literal of nincompoops is troubled by this statement, asking "Which children look like which parents?" We all know that the noun "children" applies to any children. It is a variable; if you want a specific case, then grab a specific child off the street and plug him into this verbal equation. Take little Johnny Smith; his parents are Fred and Wilma Smith.
Then the statement becomes "Johnny Smith looks like Fred and Wilma Smith." Again, the important concept is not about Johnny and Fred and Wilma, but about children and parents in general.

Time to get back to variables themselves. A variable is a container for a number. We can save a number into a variable, and thenceforth perform any operations on the variable, changing its value, multiplying or dividing other numbers by the variable, using it just as if it were a number itself. And it is a number, only we don’t care when we write the program whether that number is a 12 or a 513; our program is meant to work with the variable whatever its value might be. Some exercises are in order. Try this:

FROGGY=3*5
PRINT FROGGY

Now make some changes:

BIRDIE=FROGGY+5
PRINT BIRDIE

Not only can you put a number into a variable, but you can also take a number out, as demonstrated by this example. The computer remembers that FROGGY has a value of 15, and retrieves that value to calculate the value of BIRDIE. Now for something that might really throw you:

FROGGY=FROGGY+1

If you think in terms of algebra, this equation must look like nonsense. After all, how can a number equal itself plus 1? The answer is that the line presented above is not an equation but a command. It is called an assignment statement, for its true function is not to declare an equality to the world but to put a number into a variable. An assignment statement tells the computer to take whatever is on the right side of the equals sign, calculate it to get a number, and put that number into the variable on the left side of the equals sign. Thus, the above assignment statement will take the value of FROGGY, which happens to be 15 just now, and add 1 to it, getting a result of 16. It will then put that 16 into FROGGY.

It is time to summarize what we have learned before we move on to deferred execution:

1) You can form an expression out of numbers, operators, and variables.
2) Multiplication and division have precedence over addition and subtraction.
3) Parentheses defeat the normal rules of precedence.
4) Variables are "indirect numbers" and can be treated like numbers.
5) You set a variable’s value with an assignment statement.
6) Anything you can calculate, you can PRINT.

With these items under our belts, let’s move on to the next topic: deferred execution. This rather imposing term, sounding like a temporary reprieve from a death sentence, in truth means something far less dramatic. In the context of computers, execution means nothing more than the carrying out of commands. One does not idly converse with a computer; one instead issues commands. All of the things you have learned so far, and all of the things that you will learn, are commands that tell the computer to do something. You issue the command, and the computer executes the command. The question I take up in this section is, When does the computer execute your commands?

You might think the question silly. After all, you didn’t buy the computer to sit around and wait for it to execute your commands at its leisure. I can imagine you barking in true military style, "Computer, when I issue a command, I want it executed NOW, not later!" But there are indeed times when it is desirable for the computer to be able to execute your commands later. A command that is executed now happens once and is gone forever, but a command that can be executed later can be executed later tomorrow, and later the next day, and the next, and the next, as many times as you want.
We can give a command right now and expect it to be executed right now; but it would be even more useful to be able to record a command right now and execute it at any later date. This still may seem a bit silly. Why should anyone bother recording a command for later reference? If I want to PRINT 3+4 sometime next week, why don’t I just type "PRINT 3+4" next week when I need it? Why go to the bother of some scheme for storing that command for later use? The answer is, it all depends on how big a command you consider storing. There isn’t much point in storing a simple command such as "PRINT 3+4". But what if you have a big calculation that has many steps? Typing in all those steps every single time you wanted to do the calculation would be a big job. If you could store all those steps the first time, and then call them automatically every time you needed to do the calculation, then you would have saved a great deal of time. What a wonderful idea! There is a term we use for this wonderful idea: we call it a computer program. A computer program is nothing more than a collection of commands for the computer, saved for future reference. When you tell the computer to run a particular program, you are instructing it to execute all those commands that were stored by the programmer.

There is an interesting analogy here. Suppose that you were the boss at a factory. It would be wasteful to stand over each worker, telling him or her what to do at each step of the manufacturing process. ("OK, now put that short screw into the hole at the top. Good. Now put the nut onto the bolt. Now...") A much more efficient way is to explain the entire process to the worker before he or she starts work. Once the worker has memorized the process, you don’t have to worry about him or her any more. This is analogous to the storing of commands for a computer. What is particularly curious is the concept of a program that you, the user, did not write. When you buy a computer program and put it into your computer, it is rather like the boss at the factory saying to the workers, "Here is a book of instructions for how to build a new machine. I don’t even know what the instructions are, but I like the machine. Follow these instructions."

The concept of deferred execution is not unique to the computer. We see it in a variety of places in our regular lives. A cookbook is a set of commands that tell you how to make food. In the corporate world we have the venerable "Policies and Procedures Manual" that tells us how to get along in the corporate environment. But my favorite example is the Constitution of the United States of America. This document is composed of a set of commands that prescribe how the government of the USA will operate. It specifies who will do what, and when, and how. Like a computer program, it has variables: the President, the Congress, the Supreme Court. Each variable can take different "values" -- the President can be Washington, Lincoln, Roosevelt, and so on, but the commands are the same regardless of the "value" of the President, the Congress, or the Supreme Court. As with any computer program, a great deal of effort was expended on getting each part of the Constitution just right, tightening up the sloppy wording, making sure that the commands would work in all conceivable situations. And as with any real computer program, the programmers have spent a long time getting all the bugs out. Despite this, it has worked very well for nearly two hundred years now.
Show me a computer program with that kind of performance record. So the concept of deferred execution is really not some weird new idea that only works in the silicon minds of computers. It's been around for a while. With computers, though, deferred execution is used in a very pure, clean context, uncluttered by the complexities of the real world. If you really want to understand the idea of deferred execution, the computer is the place to see it clearly.

How do you get deferred execution on your computer? With BASIC, the technique is simple: give numbers to your commands. Where earlier you typed your commands one at a time, like this:

FROGGY=5
BIRDIE=3*FROGGY+5
PRINT BIRDIE

Now type this:

1 FROGGY=5
2 BIRDIE=3*FROGGY+5
3 PRINT BIRDIE

Those numbers in front of the commands tell the computer that these instructions are meant to be executed later. The computer will save them for later use. To prove it to yourself, type LIST. Sure enough, the computer will list the program that you typed in. It remembers! Even better, it can now execute all three commands for you. Type:

RUN

The computer will respond almost instantly by printing "20" immediately below the command RUN. It executed all three commands in your little program, and those three commands together caused it to print the "20". Congratulations. You have written and executed your first computer program. Break out the champagne.

Those numbers in front of the commands -- the 1, 2, and 3 that began the lines -- actually tell the computer more than the mere fact that you intend the commands to be executed later. They also specify the sequence in which the commands are to be executed. The computer automatically sorts them and executes command #1 first, command #2 second, and command #3 third.

The sequence with which commands are executed can be vitally important. Consider this sequence of commands:

1. Put the walnut on the table.
2. Move your hand away from the walnut.
3. Hit the walnut with the hammer.

Now, if you got the commands in the wrong sequence, and executed #1, then #3, then #2, you would truly appreciate the importance of executing commands in the proper order. That's why we give numbers to these commands: it makes it very easy for the stupid computer to get the right commands in the right order.

By the way, it isn't necessary to number the commands 1, 2, 3, . . . and so on. Most BASIC programmers number their commands 10, 20, 30, . . . and up. The computer is smart enough to be able to figure out that 10 is smaller than 20, and so it starts with command #10, then does command #20, then command #30, and so on. It always starts with the lowest-numbered command, whatever that is, and then goes to the next larger number, then the next, and so on.

You might wonder, why would anybody want to number their commands by 10's instead of just plain old 1, 2, 3, . . . ? Well, consider the wisdom of the Founding Fathers. They wrote the best Constitution they could, and then they made a provision for adding amendments to their masterpiece. They knew that, no matter how good their constitution was, someday there would be a need to change it. Now, if you number your commands 1, 2, 3, . . . and someday you need to change your program, what are you going to do if you need to add a new command between command #2 and command #3? Sorry, the computer won't allow you to add a command #2 1/2. But if you number your commands 10, 20, 30, . . ., then if you need to add a command between #20 and #30, you just call it command #25. Unless, of course, you are wiser than the Founding Fathers, and expect no need to change your program. . .
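To see why the 10-20-30 habit pays off, try a little experiment. What follows is just a sketch of the same three-line program, renumbered by tens:

10 FROGGY=5
20 BIRDIE=3*FROGGY+5
30 PRINT BIRDIE

RUN it and you get 20, just as before. Now suppose you decide that FROGGY should grow by 1 before BIRDIE is calculated. There is room between #10 and #20, so simply type:

15 FROGGY=FROGGY+1

Type LIST and you will see that the computer has slipped the new command in between #10 and #20, right where it belongs. RUN the program again and this time it prints 23, because BIRDIE is now calculated from a FROGGY of 6: 3*6+5.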
You can now write very large programs. Just keep adding commands, giving each a line number, making sure that they are in the order you want, and trying them out with the RUN command. When you get a program finished the way you want it, or if you want to save it before ending a session with the computer, you must tell the computer to save the program to your diskette. The command for doing this will probably look something like this:

SAVE "MYPROGRAM"

Unfortunately, since all computers are different, you will probably need to type something slightly different from this. Look up the exact wording in your BASIC manual under "Saving a Program".

The "MYPROGRAM" part is the name that you give your program. You can give your program almost any name you want. Call it "THADDEUS" or "AARDVARK" or "ROCK"; about all the computer will care is 1) that you don't give it a name that it's already using for something else, and 2) that the name isn't too long -- usually 8 characters is the limit. If you don't save the program, then it will be lost as soon as you turn off the computer or load another program.

When you want to get your program back, you will have to type something just like the SAVE command, only you type "LOAD" instead of "SAVE". You'll still have to tell it the name of the program that you want to load.

One last topic and we are done with this chapter. I want to introduce you to the INPUT command. This little command allows the computer to accept input from the keyboard while the program is running. An example shows how simple it is:

10 INPUT FROGGY
20 BIRDIE=FROGGY+5
30 PRINT BIRDIE

If you were to RUN this program, you would see a question mark appear on the screen. The question mark is a prompt, the computer's way of telling you that it is expecting you to do something. In this case, it is waiting for you to type a number and press RETURN. When you do this, it will take that number and put it into FROGGY. Then it will proceed with the rest of the program. That's all the INPUT statement does; it allows you to type in a number for the computer to use.

Despite its simplicity, the INPUT command has vast implications for programming. Up until this point, the programs you could write would always do the same thing. Your first program, for example, would always calculate 3*5+5 to be equal to 20. Now, this may be an exciting revelation the first ten or twenty times, but eventually it does get a little boring to be told for the umpteenth time that 3*5+5 is 20. With the INPUT statement, though, you can start to have some variety. Using the program listed above, you could type in a different number each time you ran it, and get a different answer. You could type in 8, and discover that 8+5 is 13; then you could type in 9, and learn that 9+5 is 14. For real thrills, you could type in a big, scary number like 279, and find out that 279+5 is 284. Wowie, zowie! Aren't computers impressive? Have patience, this is only chapter 3.
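If you want one more thing to chew on before you put the chapter down, here is a parting exercise -- only a sketch, using nothing beyond the commands you have already met. It asks you for two numbers, one question mark at a time, and prints a result:

10 INPUT FROGGY
20 INPUT BIRDIE
30 PRINT FROGGY*BIRDIE+FROGGY

RUN it, answer the two question marks with 3 and 5, and it will print 18, because the multiplication happens before the addition: 3*5+3. Now replace line 30 by typing a new version of it:

30 PRINT FROGGY*(BIRDIE+FROGGY)

RUN it again with the same two numbers and you get 24 instead, since the parentheses defeat the normal precedence. One little program, different answers every time -- that is INPUT and deferred execution working together.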