What You Should Know About Medicines/Epilepsy. Epilepsy is a disorder of the central nervous system. It is characterized by recurrent seizures, which may involve loss of consciousness and convulsions. Anti-epileptic drugs: Gabapentin and pregabalin are anti-epileptic drugs.
PHP Programming/PHP and MySQL. Introduction. Note: You should know SQL to use MySQL. You can learn that in the SQL book. PHP integrates well with MySQL, and contains a library full of useful functions to help you use MySQL. There are even many database managers written in PHP. MySQL is not a part of the server that runs PHP; it is a different server. MySQL is one of many database servers; it is open source and you can get it from the MySQL website. The mysqli extension was introduced in PHP 5 and, depending on how PHP was built, may need to be enabled manually (see the installation instructions); PHP 4 shipped only the older mysql extension. Note: the mysql_*() functions are deprecated as of PHP 5.5 and removed in PHP 7+. Use PDO or mysqli instead. Let's get started! Connecting to a MySQL server. To connect to a MySQL server, use the mysqli_connect() function or the mysqli class. It is used in the following manner: mysqli_connect(servername, username, password, database); "servername" - The name or address of the server. Usually 'localhost'. "username", "password" - The username and password used to log in to the server. "database" - The database name you want to select. It's optional. Multiple MySQL connections. Though not commonly needed, you can connect to more than one database server in one script. On a successful connection, "mysqli_connect()" returns a connection object, which you should capture in a variable: $con = mysqli_connect("localhost", "root", "123"); $con2 = mysqli_connect("example.com", "root", "123"); Selecting your database. In order to perform most actions (except for creating, dropping and listing databases, of course), you must select a database. If you did not pass the database name to "mysqli_connect()", use "mysqli_select_db()". In the procedural style its first parameter is the connection and its second is the database name: mysqli_select_db($con, db_name); Where db_name is the database name. Because the connection is passed explicitly, there is no ambiguity about which server the database is selected on. In the following code, "mysqli_select_db()" selects "database1" on the "localhost" connection, not on the "example.com" one: $con = mysqli_connect("localhost", "root", "123"); $con2 = mysqli_connect("example.com", "root", "123"); mysqli_select_db($con, "database1"); Executing a query. To execute a query, use "mysqli_query()". For example: mysqli_query($con, "UPDATE table1 SET column1='value1', column2='value2' WHERE column3='value3'"); Important: mysqli_query() returns a result object (or true/false for queries that produce no result set) which you "will" need for certain operations. So you should capture it by storing the return value in a variable: $query1 = mysqli_query($con, "UPDATE table1 SET column1='value1', column2='value2' WHERE column3='value3'"); Functions for SELECT queries. Executing a SELECT query is all good and well, but just sometimes, we may want the result (people are strange like that). The PHP developers are those strange people, and they added to PHP a few functions to help us with that: mysqli_fetch_row(). Returns the next row in the result. It is returned as an array, so you should capture it in a variable.
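To tie the connection and database-selection steps together, here is a minimal, hedged sketch of opening a connection with basic error checking in the procedural mysqli style. The host, credentials and database name ("localhost", "root", "123", "database1") are illustrative placeholders carried over from the examples above, not values to use in production.

<?php
// Minimal connection sketch (procedural mysqli style).
// Host, credentials and database name below are illustrative placeholders.
$con = mysqli_connect("localhost", "root", "123");
if (!$con) {
    // mysqli_connect_error() explains why the connection attempt failed.
    die("Connection failed: " . mysqli_connect_error());
}
// Select the database afterwards if it was not passed to mysqli_connect().
if (!mysqli_select_db($con, "database1")) {
    die("Could not select database: " . mysqli_error($con));
}
echo "Connected successfully.";
mysqli_close($con);
?>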
For example: $query1 = mysqli_query($con, "SELECT id, name, address FROM phone_book"); $person = mysqli_fetch_row($query1); print_r($person); This should output something like this: Array ( [0] => 1 [1] => Sharon [2] => Helm, 3 ) This function will always return the next row in the result, until eventually it runs out of rows and returns "null". A very common use of this function is with a "while" loop, for example: $query1 = mysqli_query($con, "SELECT id, name, address FROM phone_book"); while ($person = mysqli_fetch_row($query1)) { print_r($person); echo "\n"; } This should output something like this: Array ( [0] => 1 [1] => Sharon [2] => Helm, 3 ) Array ( [0] => 2 [1] => Adam [2] => 23rd street, 5 ) Array ( [0] => 3 [1] => Jane [2] => Unknown ) mysqli_fetch_array(). This one does what "mysqli_fetch_row()" does, except that it can also return an associative array; pass MYSQLI_ASSOC as the second argument to get the columns keyed by name: $query1 = mysqli_query($con, "SELECT id, name, address FROM phone_book"); $person = mysqli_fetch_array($query1, MYSQLI_ASSOC); print_r($person); Should output something like this: Array ( [id] => 1 [name] => Sharon [address] => Helm, 3 ) mysqli_num_rows(). Sometimes we want to know how many rows we get in the result of a query. This can be done by something like this: $counter = 0; $query1 = mysqli_query($con, "SELECT id, name, address FROM phone_book"); while (mysqli_fetch_row($query1)) $counter++; "$counter" now stores the number of rows we got from the query, but PHP has a special function to handle this: $query1 = mysqli_query($con, "SELECT id, name, address FROM phone_book"); $counter = mysqli_num_rows($query1); "$counter" stores the same value, but wasn't that easier? Functions for other queries. The following functions are not just for SELECT queries, but for many types of queries. They can be useful in many cases. mysqli_info(). Returns information about the most recent query executed on a connection. In the procedural style it takes the connection, not a result: mysqli_info($con); //For the last query executed on $con The information is returned as a string, and though it has a predictable format, it is normally used for output rather than parsed by the script. mysqli_affected_rows(). Returns the number of rows affected by the last query on a connection; it is meaningful for INSERT, UPDATE or DELETE queries. It also takes the connection: mysqli_affected_rows($con); //For the last query executed on $con mysqli_insert_id(). Returns the id MySQL assigned to the auto_increment column of the table after an INSERT query. It, too, takes the connection: $result = mysqli_query($con, "INSERT INTO names (firstname) VALUES ('Bob')"); $new_id = mysqli_insert_id($con); "Note: You should call mysqli_insert_id() straight after performing the query. If another statement is issued in between, mysqli_insert_id() will reflect that statement instead (and returns 0 if it generated no auto_increment value)!" Closing a connection. Use "mysqli_close()" to close a MySQL connection, passing it the connection you want to close: mysqli_close($con); //Close connection $con
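Putting the pieces above together, the following is a small illustrative sketch, assuming $con is an open mysqli connection and that the phone_book table from the examples exists; it runs a SELECT, reports the row count, and prints each row as an associative array.

<?php
// Assumes $con is a valid mysqli connection and a phone_book table exists.
$result = mysqli_query($con, "SELECT id, name, address FROM phone_book");
if (!$result) {
    die("Query failed: " . mysqli_error($con));
}
echo "Found " . mysqli_num_rows($result) . " entries.\n";
while ($person = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
    // Each $person is an associative array keyed by column name.
    echo $person['id'] . ": " . $person['name'] . " - " . $person['address'] . "\n";
}
mysqli_free_result($result); // Optional clean-up of the result set.
?>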
Marine Hydrodynamics. This Wikibook is an introductory course in marine hydrodynamics. The course concerns fluid flow in general but usually assumes that the fluid is sea water. We will cover the following in this course:
Biblical Studies/Christianity/Christian Theology. Christian Theology is a systematic view of the doctrines of the Bible. Theology Proper. "The doctrine of God" Angelology. "The doctrine of Angels and Demons" Anthropology. "The doctrine of man" Bibliology. "Origin of the Bible" "The doctrine of the Bible" "/Apocrypha/" "/Cult Usage/" Christology. "The doctrine of Jesus" Ecclesiology. "The doctrine of the Church" Eschatology. "The doctrine of End Times" Hamartiology. "The Doctrine of Sin" Pneumatology. "The doctrine of the Holy Spirit" Soteriology. "The doctrine of Salvation"
Baptist Theology/Existence of God. Does God Exist? That is the principal question of theology. There are several arguments that attempt to prove the existence of God. In Genesis 1:1, the existence of God is assumed, not argued.
Biblical Studies/Christianity/Christian Theology/Existence of God. Does God Exist? That is the principal question of theology. There are several arguments that attempt to prove the existence of God. "The existence of God, as a First Cause, from the effects seen in the world" (James Boyce). One may look around and observe reality, energy, matter, etc. and conclude that there must be a cause for that reality, energy, matter, etc.: a Supreme Being. The ontological argument was developed by Anselm: "Something than which nothing greater can be thought so truly exists that it is not possible to think of it as not existing." The teleological argument, also known as Intelligent Design, assumes that for there to be order, there must be a creator. The alternative is often referred to as the "Tornado in the Junkyard". From a literal viewpoint, in Genesis 1:1, the existence of God is assumed.
What You Should Know About Medicines/Anticoagulant. An anticoagulant is a substance used to prevent blood clots. It is useful for patients who (i) are at an increased risk of large abnormal blood clots (e.g. those with high cholesterol or high blood pressure), (ii) have had heart valve surgery, (iii) have a prosthetic valve made of synthetic material, or (iv) have atrial fibrillation. MORE INFO: Clotting is an important property of blood; for example, your blood clots when you cut yourself so that you do not bleed to death. It is sometimes desirable, however, that such clotting be prevented. Anticoagulants are used both inside and outside the human body to prevent the clotting of blood. USE OF ANTICOAGULANTS IN HUMANS: Example 1: Thrombosis: The formation of a blood clot (thrombus) inside a blood vessel is described as thrombosis. Such blood clots obstruct the flow of blood through the circulatory system. In cases where thrombosis is causing disease, anticoagulants can be used (at carefully controlled dosages) to reduce the tendency to clot. Common anticoagulants for this use are warfarin, heparin (various heparins exist), and nicoumalone. USE OF ANTICOAGULANTS OUTSIDE THE HUMAN BODY: Anticoagulants are also used to prevent the clotting of blood outside the body. Examples: 1) Blood transfusion bags: Blood is collected from donors and stored in sterile plastic bags. To maintain the fluid state of the blood, it is necessary to prevent it from clotting. Chemicals like Acid Citrate Dextrose (ACD) and Citrate Phosphate Dextrose (CPD) can be added to stop the blood clotting. 2) Medical and surgical equipment: If blood is allowed to clot, this equipment will get clogged up and will no longer work. To prevent this from happening, suitable anticoagulants are used.
What You Should Know About Medicines/Diuretics. Diuretics help to lower the salt and fluid levels in your body. They also reduce swelling and ease the workload on your heart.
Competitive Programming. What Is This Book About? This book is about programming competitions and what you need to know in order to be competitive... Why Should I Compete? The primary reason why people compete in programming contests is that they enjoy it. For many, programming is actually fun, at least until they get a job doing it and their desire is burnt out of them. It is also a good way to meet people with similar interests to your own. But for those of you who need additional incentive, it is also a good way to increase others awareness of you. Major programming competitions are always monitored by people looking for new talent for their organizations, sometimes these are the people who actually fund the contest. High school programming contests (such as the ones sponsored by the BPA) often are to help prepare students for college or careers in computer programming. These contests often attract scouts from colleges looking to award scholarships to these individuals. For example, IBM is currently funding the ICPC, a contest that costs them millions annually. Why would they pay so much for a programming contest? They view it as an investment. It lets them filter through the talent and get to those that could potentially make IBM much more money in the long run. Before IBM the contest was funded by Microsoft for the same reasons. Organizations that feel like they can't quite afford the huge price tag associated with ICPC have begun to fund cheaper contests, such as TopCoder, and in the case of Google, running their own contest through TopCoder's technology. How Do I Get Started? The first thing needed to get started is proficiency in a programming language and familiarity with a text editor/development environment. The two languages common to all of the above programming competitions are C++ and Java. These languages will be used throughout this document. There are many books and tutorials available to learn these languages, in addition to an unending amount of freely available code on the internet. Which Language Should I Use? In a competitive programming environment the value of a programming language differs from a software engineering environment. While good software engineering practices will not hurt you in completing a task, they can often consume valuable time with little benefit. Many software engineering techniques are designed for projects with many programmers and projects that take a large amount of time to complete. In a programming competition these conditions no longer hold. Most times it is a single programmer working on a task, with very little time in which to complete it. Also, some programming contests test a programmer's skills in a specific language. What Are The Contests Like? The TopCoder Open is available to both college students and professionals who are registered members of the TopCoder website and who are at least 18 years of age at the time of registration. The Tournament consists of three online rounds which lead up to the main onsite tournament and includes several competitions: Algorithm, Design, Development, Marathon, Mod Dash, and Studio Design. Competitions such as the Coding Phase in the Algorithm round allow contestants to choose between four programming languages, Java, C#, C++, and VB.NET, so you can use the one you are best at. There are also competitions where you get the chance to challenge the functionality of other competitors’ code.(“Welcome to the 2012 TopCoder Open” Copyright © 2001-2012, TopCoder, Inc. 
http://community.topcoder.com/tco12/)
PHP Programming/Mailing. The mail function is used to send e-mail messages through the SMTP server specified in the php.ini configuration file. bool mail ( string to, string subject, string message [, string additional_headers [, string additional_parameters]] ) The returned boolean shows whether the e-mail was accepted for delivery or not. This example will send the message "message" with the subject "subject" to the e-mail address "[email protected]". The receiver will see that the e-mail was sent from "Example2 <[email protected]>" and should reply to "Example3 <[email protected]>": mail( "[email protected]", // E-mail address "subject", // Subject "message", // Message "From: Example2 <[email protected]>\r\nReply-to: Example3 <[email protected]>" // Additional headers ); There is no requirement to write e-mail addresses in the format "Name <email>"; you can just write "email". This sends the same message as a plain call but includes From: and Reply-To: headers, which are required if you want the person you sent the e-mail to to be able to reply to you. Also, some e-mail providers will assume mail is spam if certain headers are missing, so unless you include the correct headers your mail may end up in the junk mail folder. Error Detection. Especially when sending multiple e-mails, such as from a newsletter script, error detection is important. Use this pattern to send mail while checking for errors: $result = @mail($to, $subject, $message, $headers); if ($result) { echo "Email sent successfully."; } else { echo "Email was not sent, as an error occurred."; } Sending To Multiple Addresses Using Arrays. In the case below the script already has a list of e-mail addresses in a database result set; we simply loop over the rows, as we did with mysqli results earlier. The script below will attempt to send an e-mail to every single address until it runs out: while ($row = mysqli_fetch_assoc($result)) { mail($row['email'], $subject, $message, $headers, "-f$fromaddr"); } If we integrate the error checking into the multiple e-mail script we get the following: $errors = 0; $sent = 0; while ($row = mysqli_fetch_assoc($result)) { $ok = @mail($row['email'], $subject, $message, $headers, "-f$fromaddr"); if (!$ok) { $errors = $errors + 1; } else { $sent = $sent + 1; } } echo "You have sent $sent messages"; echo "However there were $errors errors";
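As a worked illustration of the newsletter pattern above, here is a short, hedged sketch that pulls addresses from a database with mysqli and counts successes and failures. The table name "subscribers", its "email" column, the connection details, and the sender address are illustrative assumptions, not part of the original text.

<?php
// Illustrative newsletter sketch; connection details and table/column names are assumptions.
$con = mysqli_connect("localhost", "root", "123", "database1");
if (!$con) {
    die("Connection failed: " . mysqli_connect_error());
}
$subject = "Monthly newsletter";
$message = "Hello! This is our monthly update.";
$headers = "From: Newsletter <[email protected]>\r\nReply-To: [email protected]";

$result = mysqli_query($con, "SELECT email FROM subscribers");
$sent = 0;
$errors = 0;
while ($row = mysqli_fetch_assoc($result)) {
    // @ suppresses warnings so a single bad address does not abort the run.
    if (@mail($row['email'], $subject, $message, $headers)) {
        $sent++;
    } else {
        $errors++;
    }
}
echo "Sent $sent messages with $errors errors.";
mysqli_close($con);
?>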
Pascal Programming/Beginning. In this chapter you will learn: Programs. All your programming tasks require one source code file that is called in Pascal a program. A program source code file is translated by the compiler into an executable application which you can run. Let's look at a minimal program source code file: program nop; begin end. Compilation. In order to start your program you need to compile it. First, copy the program shown above. We advise you to actually type out the examples and not to copy and paste code. Name the file nop.pas. nop is the program's name, and the filename extension .pas helps you to identify the "source code" file. Once you are finished, tell the compiler you have chosen to compile the program: Finally, you can execute the program by one of the methods your operating system provides. For example on a console you simply type out the file name of the executable file: ./nop (where ./ refers to the current working directory in Unix-like environments) As this program does (intentionally) nothing, you will not notice any (notable) changes. After all, the program's name nop is short for "no operation". The computer speaks. Congratulations on your first Pascal program! To be fair, though, the program is not of much use, right? As a small step forward, let's make the computer speak (metaphorically) and introduce itself to the world: program helloWorld(output); begin writeLn('Hello world!'); end. Program header. The first difference you will notice is in the first line. Not only has the program name changed, but there is (output). This is a program "parameter". In fact, it is a list. Here, it only contains one item, but the general form is (a, b, c, d, e, …) and so on. A program parameter designates an external entity the environment needs to supply the program with, so it can run as expected. We will go into detail later on, but for now we need to know there are two "special" program parameters: input and output. These parameters symbolize the default means of interacting with the environment. Usually, if you run a program on a console, output is the console's display. Writing to the console. The next difference is writeLn('Hello world!'). This is a statement. The statement is a routine invocation. The routine is called writeLn. WriteLn has (optional) parameters. The parameters are, again, a comma-separated list surrounded by parentheses. Routines. "Routines" are reusable pieces of code that can be used over and over again. The routine writeLn, short for "write line", writes all supplied parameters to the destination followed by a "newline character" (some magic that will move the cursor to the next line). Here, however, the destination is invisible. That is because it is optional and can be left out. If it is left out, the destination becomes output, so our console output. If we "want" to "name" the destination explicitly, we have to write writeLn(output, 'Hello world!'). WriteLn(output, 'Hello world!') and writeLn('Hello world!') are "identical". The missing optional parameter will be inserted automatically, but leaving it out relieves the programmer from typing it out. In order to use a routine, we write its name, as a statement, followed by the list of parameters. We did that in line 2 above. String literals. The parameter 'Hello world!' is a so-called string literal. "Literal" means your program will take this sequence of characters "as it is", not interpret it in any way, and pass it to the routine. A string literal is delimited by typewriter (straight) apostrophes. Reserved words.
In contrast to that, the words program, begin and end (and many more you see in a bold face in the code examples) are so-called "reserved words". They convey special meaning as regards to how to interpret and construct the executable program. You are only allowed to write them at particular places. Behavior. This type of program, by the way, is an example of a class of “Hello world” programs. They serve the purpose for demonstrating minimal requirements a source code file in any programming language needs to fulfill. For more examples see Hello world in the WikiBook “Computer Programming” (and appreciate Pascal’s simplicity compared to other programming languages). Comments. We already saw the option to write "comments". The purpose of comments is to serve the programmer as a reminder. Comment syntax. Pascal defines curly braces as comment delimiting characters: (spaces are for visual guidance and have no significance). The left brace "opens" or "starts" a comment, and the right brace "closes" a comment. However, when Pascal was developed not all computer systems had curly braces on their keyboards. Therefore the bigramms (a pair of letters) using parentheses and asterisks was made legal, too: Such comments are called "block comments". They can span multiple lines. Delphi introduced yet another style of comment, line comments. They start with two slashes // and comprise everything until the end of the current line. Delphi, the as well as support all three styles of comments. Helpful comments. There is an “art” of writing "good" comments. When writing a comment, stick to "one" natural language. In the chapters to come you will read many “good” comments (unless they clearly demonstrate something like below). Terminology. Familiarize with the following terminology (that means the terms on the right printed as comments): program demo(input, output); // program header const // ────────────────────┐ │ answer = 42; // constant definition ┝ const-section│ type // ────────────────────┐ │ employee = record // ─┐ │ │ number: integer; // │ │ │ firstName: string; // ┝ type definition │ │ lastName: string; // │ ┝ type-section │ end; // ─┘ │ │ employeeReference = ^employee; // another type def. │ │ // ────────────────────┘ ┝ block var // ────────────────────┐ │ boss: employeeReference; // variable declaration┝ var-section │ // │ begin // ────────────────────┐ │ boss := nil; // statement │ │ writeLn('No boss yet.'); // another statement ┝ sequence │ readLn(); // another statement │ │ end. // ────────────────────┘ │ Note, how every constant and type definition, as well as every variable declaration all go into dedicated "sections". The reserved words const, type, and var serve as headings. A "sequence" is also called a "compound statement". The "combination" of definitions, declarations and a sequence is called a "block". Definitions and declarations are optional, but a sequence is required. The sequence may be empty, as we already demonstrated above, but this is usually not the case. Do not worry, the difference between "definition" and "declaration" will be explained later. For now you should know and recognize "sections" and "blocks".
Bloom's Taxonomy. Benjamin Bloom created this taxonomy for categorizing levels of abstraction, thus providing a useful structure in which to describe Lesson Plan Components: Interest Approach, Discussion, Presentation, Demonstration, and Test Items. Content Goals start with an active verb. Note the 'Cues' below, which suggest active verbs that may be used when creating Lesson Plan Components. See the Example Lesson Plan. Knowledge Comprehension Application Analysis Synthesis Evaluation Note: IPSI uses Bloom's work as modified by Simpson and Krathwohl to create three domains: cognitive, psychomotor, and affective. The first, second and fourth levels of Bloom form the cognitive domain. The third level of Bloom forms the psychomotor domain, and the fifth and sixth levels of Bloom form the affective domain. Accordingly, content will be parsed into one of nine categories: three levels of cognitive, three of psychomotor and three of affective. These nine categories are sufficiently precise that prescriptions regarding instruction and testing can be aligned with the intent expressed in content goals. See also. A discussion of Bloom's taxonomy as it relates to classroom teaching.
Linguistics. This online textbook serves to provide an introduction to the science of linguistics, its major subfields, and its theoretical consequences.
Linguistics/Introduction. Language is all around us. Language allows us to share complicated thoughts, negotiate agreements, and make communal plans. Our learning, our courting, our fighting — all are mediated by language. You can think of language as a "technology" — humans manipulate their bodies to produce sounds, gestures, and appearances that encode messages using a shared system. How then does the technology of language work? Answering this question is surprisingly hard; our language skills are automatic and therefore hard to reflect upon. Nevertheless, throughout the centuries, scholars have devised ways to study human language, although there is still much more research to be done and many mysteries to explore. The field of scholarship that tries to answer the question "How does language work?" is called "linguistics", and the scholars who study it are called "linguists". How do linguists learn about language? Linguistics is a "science". This means that linguists answer questions about language by observing the behavior of language users. Astronomy has its enormous telescopes, particle physics has its supercolliders, biology and chemistry have intricate and expensive apparatuses, all for learning about their particular facets of the world. Modern linguists go straight to the source by observing language users in action. One of the charms of linguistics is that the data is all around you; you need nothing more than a patient ear and an inquiring mind to do original linguistic research of your own. But you need not start from scratch — generations of linguists before you have laid a fairly stable groundwork for you to build on. Throughout the history of linguistics, the primary source of data for linguists have been the "speech, writing, and intuitions" of language users around them. This is not the only way one could imagine learning about language. For example, one could study respected authorities. But this approach raises an obvious question: how did the respected authorities learn what they knew? If each language were invented by an ancient sage, who determined once and for all how that language worked, the authoritative approach would have great appeal. We would go to the writings of the Founding Sage of Danish, for example, and to the writings of the sage's immediate disciples, to find out the Original Intent, much as American judges refer to the Constitution. But, as far as we can tell, this is "not" how most languages come to be. We have ancient authorities in plenty, but in most cases these authorities were merely trying to codify the practices of the people who seemed to them most skillful in the use of language. In other words, these authorities were themselves scientific linguists of a sort: they observed language users and tried to describe their behavior. Describing and prescribing. In literate cultures, it is common to have a tradition of language instruction. In formal classes, students are taught how to read and write. Furthermore, the teacher tells the students rules of proper usage. This is what is referred to as a prescriptive tradition, in which students are told what to do. It is similar to being taught the proper way to do arithmetic or knit a sweater. Formal language instruction is usually "normative", which means that it involves a sense of "should and shouldn't", a notion of right and wrong behavior. By contrast, linguists follow a descriptive tradition, in which the object is to observe what people really do, and form theories to explain observed behavior. 
Any specific use of language is only considered "right" or "wrong" on the basis of whether it appears in ordinary, natural speech. As a member of a literate culture, you have probably been exposed to a certain amount of your culture's traditional language instruction. When you first take up the study of linguistics, you will probably experience some discomfort as you observe language behaviors that you have been taught are wrong. It will be hard to suppress an almost instinctive reaction: "This behavior is "incorrect". My observation is no good; the person I am observing is an unreliable source of information." It is important to remember that traditional language instruction and scientific linguistics have completely different goals and methods. Traditional language instruction is intended to train students to use a standard language. Language standards exist largely to make sure formal communication is possible between distant regions, between generations, between centuries, between social classes. Modern civilization arguably depends on such formal communication. Its rules must be constant over wide areas, over long spans of time, across different social and economic groups. This leads to an interesting contradiction: The natural result is that students emerge from traditional language instruction with a strong sense that certain language behaviors are "simply wrong". Most members of a literate culture have this moral sense about language, and find it hard to suppress. Yet to do objective science, to find out how language really works, it is necessary to adopt a detached viewpoint and to treat all language users as valid. The first principle of linguistics is: "Respect people's language behavior, and describe it objectively." In this book we will adopt this objective stance: language behaviors are not intrinsically right or wrong, and we seek to "describe" what they are, not to "prescribe" what they should be. Hidden knowledge: how linguistic inquiry works. Linguists often say that they study the knowledge that a native speaker must have in order to use their language — not formal, school-learned knowledge, but a more subtle kind of knowledge, a knowledge so deeply-ingrained that language users often do not know they know it. We will illustrate the type of knowledge we mean with a "consciousness-raising" exercise. We will show you some things about English that you must already know, but almost certainly don't 'know you know'. Case study 1: English plurals. In order to speak English, you have to know how to make the "plural", or multiple form, of most nouns you hear. You probably do this effortlessly. If you ask most people how to do it, they will say "Oh, you just add "s"." But listen carefully. You use these three different plural endings every day, effortlessly, without conscious reflection. You always use the right one. It is even amusing to "try" to use the wrong plural ending. You "can" say "*forkiz", or "*spoonce", but you never do. (We use an asterisk to draw your attention to the fact that few people would ever say these things.) You must know the rules governing the use of these different plural endings somewhere inside you, but in all likelihood you never knew you knew until this moment. You must have some way to select the correct ending to use with each word, otherwise you would occasionally say things like "*rosebushss". 
But unless you have thought about this before, it is almost certain that even now that you have been exposed to the concept, you still have no idea how you manage to select the appropriate plural suffix every time. Here is something that you definitely know, but you cannot state it out loud. It is "unconscious knowledge". Can you analyze your own behavior and figure out how you decide whether to use "-s", "-z", or "-iz"? Take a few minutes and try. Write out a dozen or so common English nouns and classify them according to what plural ending you would use. Do you see any patterns? (Watch out for completely irregular nouns like "foot"/"feet"; for now we are only concerned with "S-plurals".) One theory you might come up with is that the correct plural suffix must simply be memorized for each noun. This is a perfectly reasonable theory. Perhaps "forks" sounds better to us than "*forkiz" simply because the former is the only plural we have ever heard. However, this is not the case, because for new words we have never encountered we can still pick out a plural ending that sounds right. For example: (Jean Berko-Gleason first studied examples similar to these as set in her wug test). Complete these sentences with the appropriate plurals. Then have five English-speaking friends do it, but don't let them collude: force them to form their own plurals. You and your five friends will all agree: the first example gets a buzzing plural "-z", the second gets a whole syllable "-iz", and the third gets a hiss "-s". And none of you have ever heard those words before, nor do you have any clue what they mean. If plural endings were simply memorized, you and your friends would have had to guess the endings, and you would likely have made different guesses. The second principle of linguistics is, "Language knowledge is often unconscious, but careful inquiry can reveal it." The idea of deep structure and the general outline of linguistic theory. What have linguists learned about how language works? What is the overall shape of modern linguistic theory? Linguists espouse a variety of theories about language; differences between these theories are sometimes quite striking even to laypeople and sometimes so subtle that only well-read linguists can understand the distinctions being made. Arguments between linguists who support different theories can be quite heated. But underneath the noisy debate about details there is widespread consensus, which has been coalescing since the 1950s. This consensus sees, in every corner of human linguistic ability, at least two layers: a "surface structure" consisting of the sounds we actually speak and hear, and the marks we write and read, and a "deep structure" which exists in the minds of speakers. The deep and surface structures are often strikingly different, and are connected by "rules" which tell how to move between the two kinds of structure during language use. These rules are part of every language-user's unconscious knowledge. The idea of deep structure is an unintuitive one. It is natural to be skeptical about it. Why do linguists believe that language structures inside the mind are so different from what we speak and hear? We will use another case study to give an example of the evidence. Case Study 2: The English auxiliary "wanna". (1a) "Rachel doesn't want to do her linguistics homework." (1b) "Rachel doesn't wanna do her linguistics homework." In many varieties of English, the two words "want to" can often be contracted into "wanna". 
English users are more likely to do this in speech than in writing, and are more likely to do it in relaxed, informal contexts. (Linguists use the word "registers" to describe the different behaviors language-users adopt depending on context.) The pronunciation of "wanna" lacks the clear "t" sound of "want to". English users evidently must know both variants. Can "want to" "always" be contracted? Consider the following examples. (2a) "Who do you want to look over the application?" (2b) "*Who do you wanna look over the application?" Again, we are using an asterisk to call your attention to the fact that the second example is not natural English to most native users. It is, in fact, traditional in linguistics to use an asterisk to mark an example that is somehow unacceptable or unnatural for native users. As in our first case study, we seem to have found a mysterious piece of unconscious knowledge that English users all share. We do not resist changing (1a) into (1b), but something makes the change from (2a) to (2b) much less comfortable. What could it be? How do English speakers decide when "want to" may be contracted? Perhaps the contraction is inhibited by the fact that (2a) is a question. We can test this theory with a similar example. (3a) "Who do you want to invite to the party?" (3b) "Who do you wanna invite to the party?" Here, the contraction works fine. So the question-theory cannot be correct. In fact, the similarity between (2a) and (3a) makes (2a)'s resistance to contraction quite puzzling. What follows is not "the answer" to the puzzle. Rather, it is a sketch of part of a theory that some linguists use to explain the observed behavior of "wanna". This theory was arrived at by considering many, many examples, and consulting many, many native English speakers. It is not in any way authoritative, but it illustrates the point we are trying to make. Consider some possible "answers" to questions (2a) and (3a), which we repeat for ease of reference: (2a) "Who do you want to look over the application?" (2c) "I want Yuri to look over the application." (3a) "Who do you want to invite to the party?" (3c) "I want to invite Yuri to the party." Notice that in sentence (2c), the name "Yuri" comes between "want" and "to", separating these two words, while in (3c), the words "want to" are still next to each other. Let us hypothesize that in our minds, questions like (2a) and (3a) have some kind of mark that shows where we expect the answer to be inserted. Linguists sometimes call such a mark a "trace", and symbolize it with "t". So we might show this "mental form" of our two questions as follows: (2d) "Who do you want "t" to look over the application?" (3d) "Who do you want to invite "t" to the party?" If such traces really exist in our minds, they would provide a very elegant explanation of when "want to" can be contracted. The proposed explanation is that we can contract "want to" only when there is nothing between the two words in the mental form of the sentence. We already knew this to be true when the interrupting material is audible. Of course "want to" cannot be contracted in (2c), because "Yuri" is in the way. Our proposal is to extend this explanation to inaudible material, and to say that "want to" cannot be contracted in (2a) because a trace is in the way. You might object that we have invented traces precisely to explain when "want to" cannot contract; that we will simply hypothesize that every uncontractable example has a trace in the middle. 
This is a fair objection, but remember that we are not putting traces wherever we want, but only where we expect the answer to the question to fit. You are encouraged to try more examples on yourself and your friends. This step of introducing traces to explain when "want to" may be contracted is a serious and profound piece of theory-building. We are saying that sentences in the mind are not exactly like their counterparts spoken aloud. They are not mere mental tape-recordings — they can possess aspects, like traces, that cannot be heard. As soon as we take this theoretical step, we open up a new question: 'How is language represented in the mind?'. Linguists use the term deep structure to discuss the way sentences are represented in the mind. In contrast, surface structure means sentences as we hear or read them. This leads to the third principle of linguistics: "Sentences have a deep structure in the mind, that is not directly observable, but may be inferred indirectly from patterns of language behavior". It is this third principle that separates recent linguistic scholarship (since about 1950) from the centuries of work that went before. Using linguistic evidence cautiously. Before we proceed, we must say a few words about the mode of inquiry we are using. As we throw various examples at you, we are either marking them with asterisks — "starring" them, as linguists say — or we are not. In essence, we are asking you to go along with "our" judgment about whether or not the examples are natural, native English. We would prefer to be scientific about it; one way of doing this would be to perform a study in which we present our examples to a few hundred native English users, and have our subjects tell us whether they thought the examples were good English or not. But such studies take a lot of time and effort, and it's easy to make mistakes in experimental technique that would weaken our confidence in the results. It is extremely tempting to use oneself as an experimental subject, and use one's own judgment about whether an example is natural English or not. There are obvious pitfalls to this approach. One may not be as typical a speaker as one believes. Or one's judgment may be unreliable in the highly artificial situation of asking oneself questions about one's own language. But nevertheless, in some cases, the situation seems clear-cut enough that we can give examples, as we have been, in the reasonable confidence that the reader will agree with our judgments. This is a shortcut, and is not a good substitute for real data. But in reality, a lot of linguistics gets done this way, with scholars using themselves as informal experimental subjects. Doing research in this way incurs a debt, the debt of eventually backing up our claims with real experimental studies. It's fine to get preliminary insights by probing our own intuitions about language. Eventually, though, we must do real science, and we must remember that in any conflict between experimental data and our own intuitions, real data always wins. Structure of this Book — Layers of Linguistics. As you may already have noticed, language is a hugely multifaceted entity. When we learn how to write in school, we are taught that individual letters combine to create words which are ordered into sentences that make up a composition. Spoken language is similar, but the reality of language is much more subtle. The structure of language may be separated into many different layers. 
On the surface, utterances are constructed out of sequences of sounds. The study of the production and perception of these sounds is known as phonetics. These sounds pattern differently in different languages. The study of how they group and pattern is known as phonology. These groups of sounds then combine to create words; the study of word formation is morphology. Words must be ordered in specific ways depending on a language's syntax. The literal meaning assigned to words and sentences is the semantic layer, and the meaning of sentences in context is known as pragmatics. Each of these may be considered a branch of theoretical linguistics, which studies the structure of models of language. Don't worry if it's not yet clear to you what each of these subfields of linguistics deals with. The first chapters of this book go through these fields layer by layer, building up a clear picture of what linguistics is. We will then explore various topics of inquiry which apply our linguistic knowledge of these layers to solve real-life problems, a pursuit known as applied linguistics. Branches of applied linguistics include: computational linguistics, anthropological linguistics, neurolinguistics, sociolinguistics, psycholinguistics, discourse analysis and language acquisition. Workbook section. Exercise 1: Me and John. If you took English classes at school, you may have been warned against using the following sentence: (4a) Me and John are friends. You probably were instructed to replace it with the following: (4b) John and I are friends. Pronouns such as "me, him, her, ..." are termed 'objective pronouns' because traditionally they are considered never to appear as the subject of a verb, and prescriptivists therefore rule that such usage of them in this position is "incorrect". However, (4a) is not marked with an asterisk because it is still largely acceptable to native English speakers, and as descriptive linguists we are interested in both forms. Now note that certain arrangements of pronouns ("I", "me", "John", etc.) in the sentence make it ungrammatical to all English speakers: (4c) *I and John are friends. (4d) *Her and John are friends. (4e) *I and him are friends. List all possible combinations of two pronouns in the sentence "___ and ___ are friends." that you can think of, and label each sentence which would not be said by native speakers with an asterisk. Then create a theory as to what makes any sentence of this form unacceptable.
Biblical Studies/Christianity/Christian Theology/Character of God. The Bible claims that God made mankind in His own image in Genesis 1:26. This does not mean that men are in the image of God while women are not, as can be seen in Genesis 1:27, but rather that all of humanity possesses the Imago Dei. Christian theology consistently anthropomorphizes God as having a personality. Sometimes God is extremely angry and must kill whole towns; at other times, God appears infinitely compassionate. His human son, Jesus, is more renowned for compassion. The Bible does not record Jesus as having issued capital punishment while on Earth.
Høgnorsk. Høgnorsk ~ English Learning the High Norwegian Language Høgnorsk (en: High Norwegian) is the usual term for an unofficial, and today little used, form of one of the two written languages in Norway, Nynorsk (en: New Norwegian), the other being Bokmål. The basis for the High Norwegian language movement is a wish to preserve the New Norwegian written language as an independent language, free of the strong influence from Bokmål that today's New Norwegian shows. Written High Norwegian is a tradition originating from the first version of the New Norwegian written language (then called Landsmål), as it was built by Ivar Aasen and later used by classical New Norwegian authors such as Aasmund Olavsson Vinje, Arne Garborg, Olav Nygard and Olav H. Hauge.
Høgnorsk/Bokstavrekkja. The Consonants - Medljodi. b, d, f, g, h, j, k, l, m, n, p, r, s, t, v The Vowels - Sjølvljodi. a, e, i, o, u, y, æ, ø, å (á, aa) Diphthongs - Tviljod. au, ei, øy
Algebra/Systems of Equations. Systems of Simultaneous Equations. In a previous chapter, solving for a single unknown in one equation was already covered. However, there are situations when more than one unknown variable is present in more than one equation. When in a given problem, more than one algebraic equation is true at a time, it is said there is a system of simultaneous equations which are all true together at once. Such sets of multiple equations may help solve for more than one unknown variable in a problem, since having more than one unknown in one equation is typically not enough information to "solve" any of the unknowns. An unknown quantity is something that needs algebraic information in order to solve it. An equation involving the unknown is typically a piece of information which may provide the information to "solve" the unknown, i. e. to determine a specific number value (or limited number of discrete values) that the unknown is (or can be) equal to. Some equations provide little or no information and so do little or nothing to narrow down the possibilities for solutions of the unknowns. Other equations make it impossible to satisfy an unknown with any real number, so the solution set for the unknown is an empty set. Many other useful equations make it possible to solve an unknown with one or just a few discrete solutions. Similar statements can be made for systems of simultaneous equations, especially regarding the relationships between them. Linear Simultaneous Equations with Two Variables. In the previous module, linear equations with two variables were discussed. A single linear equation having two unknown variables is practically insufficient to solve or even narrow down the solutions for the two variables, although it does establish a relationship between them. The relationship is shown graphically as a line. Another linear equation with the same two variables may be enough to narrow down the solution to the two equations to one value for the first variable and one value for the second variable, i. e. to solve the system of two simultaneous linear equations. Let's see how two linear equations with the same two unknowns might be related to each other. Since we said it was given that both equations were linear, the graphs of both equations would be lines in the same two-dimensional coordinate plane (for a system with two variables). The lines could be related to each other in the following three ways: 1. The graphs of both equations could coincide giving the same line. This means that the two equations are providing the same information about how the variables are related to each other. The two equations are basically the same, perhaps just different versions or forms of each other. Either one could be mathematically manipulated to produce the other one. Both lines would have the same slope and the same y-intercept. Such equations are considered dependent on each other. Since no new information is provided, the addition of the second equation does not solve the problem by narrowing the solution set down to one solution. Example: Dependent linear equations formula_1 formula_2 The above two equations provide the same information and result is the same graph, i. e. lines which coincide as shown in the following image. Let's see how these equations can be mathematically manipulated to show they are basically the same. Divide both sides of the first equation formula_1 by 3 to give formula_4 formula_5 formula_6 This is the same as the second equation in the example. 
This is the slope-intercept form of the equation, from which a slope and a y-intercept unique to the line can be compared with any other equations in the slope-intercept form. 2. The graphs of two lines could be parallel although not the same. The two lines do not intersect each other at any point. This means there is no solution which satisfies both equations simultaneously, i. e. at the same time. The solution set for this system of simultaneous linear equations is the empty set. Such equations are considered inconsistent with each other and actually give contradictory information if it is claimed they are both true at the same time in the same problem. The parallel lines have equal slopes but different y-intercepts. Sets of equations which have at least one common point which might provide a solution set are consistent with each other. For example, the dependent equations mentioned previously are consistent with each other. Example: Inconsistent linear equations formula_7 formula_8 To compare slopes and y-intercepts for these two linear equations, we place them in the slope-intercept forms. Subtract 3x from both sides of both equations. formula_9 formula_10 Divide both sides of both equations by -2 and simplify to get slope-intercept forms for comparison. formula_11 Now, both slopes are equal at 3/2, but the y-intercepts at 1 and -1 are different. The lines are parallel. The graphs are shown here: 3. If the two lines are not the same and are not parallel, then they would intersect at one point because they are graphed in the same two-dimensional coordinate plane. The one point of intersection is the ordered pair of numbers which is the solution to the system of two linear equations and two unknowns. The two equations provide enough information to solve the problem and further equations are not needed. Such equations intersecting at a point providing a solution to the problem are considered independent of each other. The lines have different slopes but may or may not have the same y-intercept. Because such equations provide at least one solution point, they are consistent with each other. Example: Consistent independent linear equations formula_13 formula_14 Both of these equations are given in the slope-intercept, so it is easy to compare slopes and y-intercepts. For these two linear functions, both slopes are different and both y-intercepts are different. This means the lines are neither dependent nor inconsistent, so on a two-dimensional graph they must intersect at some point. In fact, the graph shows the lines intersecting at (1,-2), which is the ordered pair solution to this system of independent simultaneous equations. Visual inspection of a graph cannot be relied on to give perfectly accurate coordinates every time, so either the point is tested with both equations or one of the following two methods is used to determine accurate coordinates for the intersection point. Solving Linear Simultaneous Equations. Two ways to solve a system of linear equations are presented here, the addition method and the substitution method. Examples will show how two independent linear simultaneous equations with two unknown variables could be solved for both unknown variables using these methods. Elimination by Addition Method. The elimination by addition method is often simply called the addition method. 
Using the addition method, one of the equations is added (or subtracted) to the other equation(s), usually after multiplying the entire equation by a constant, in order to eliminate one of the unknowns. If the equations are independent, then the resulting equation(s) should be one(s) which will have one less unknown. For an original system of two equations and two unknowns, the resulting equation with one less unknown would have one unknown left which could easily be solved for. For systems with more than two equations and two unknowns, the process of elimination by addition continues until an equation with one unknown results. This unknown could then be solved for and the solved value then substituted into the other equations resulting in a system with one less unknown. The elimination by addition process is repeated until all of the unknowns are solved. If a system has two equations which are dependent, then the addition of the equations could or would eliminate both unknowns at once. If the equations are parallel lines which are inconsistent, then a contradictory equation could result. The addition method is useful for solving systems of simultaneous linear equations, particularly if the equations are given in the form Ax + By = C, where x and y are the two unknown variables and A, B, and C are constants. Example: Solve the following system of two equations for unknowns x and y using the addition method: formula_15 formula_16 Solution: We can either multiply the first equation by -3 and add the result to the second equation to eliminate x, or we can multiply the second equation by 2 and add the result to the first equation to eliminate y. Let's multiply [both sides of ] the second equation by 2. formula_18 Now we add this resulting equation to the first equation; i. e. each of the two sides of the equations are added together to give a combined equation as shown here: formula_20 formula_21 _____________________ This means that we add x + 2y and 6x – 2y to get 7x + 0·y and we add 4 and 10 to get 14. This eliminates y from the combined equation to give an equation in x only: formula_23 formula_24 Now that we have x, we can substitute the value for x into either of the original two equations and then solve for y. Let's pick the first equation for the substitution into x. formula_25 formula_26 formula_27 So the solution set consists of the ordered pair ( 2,1) which is the point of intersection for the two linear functions as shown here: Elimination by Substitution Method. The elimination by substitution method is often simply called the substitution method. With the substitution method, one of the equations is solved for one of the unknowns in terms of the other unknown(s). Then that expression for the first unknown is substituted into the other equation(s) to eliminate it such that the equation(s) then have only the other unknown(s) left. If the equations are independent, then the resulting equation(s) should be one(s) which will have one less unknown. For an original system of two equations and two unknowns, the resulting equation with one less unknown would have one unknown left which could easily be solved for. For systems with more than two equations and two unknowns, the process of elimination by substitution is repeated until an equation with one unknown results. This unknown could then be solved for and the solved value then substituted into the other equation(s), resulting in a system with one less unknown. 
The process of elimination by substitution continues until all of the unknowns are solved. If a system has two equations which are dependent, then applying the substitution method would either eliminate two unknowns at once or result in an equation which do not yield single values for the remaining unknown(s). If the equations are parallel lines which are inconsistent, then a contradictory equation could result. Example: Solve the following system of two equations for unknowns x and y using the substitution method: formula_28 formula_29 Solution: We can start by solving for either x or y in terms of the other unknown in either one of the equations. Let's start by solving for x in terms of y in the first equation. formula_30 Next, we substitute this expression for x into the other equation in order to eliminate x from the equation. formula_31 formula_32 We have eliminated x and now we have an equation in terms of y only. We now solve for y in this equation. formula_33 formula_34 We have found the solution for y to be -1. We substitute this value for y into the expression for x in terms of y we determined from the first equation earlier. formula_35 Finally, we calculate the value of x. formula_36 So the solution set consists of the ordered pair (-2,-1) which is the point of intersection for the two linear functions as shown here: Slopes of Parallel and Perpendicular Lines. "This paragraph restates itself. It should be reworded." formula_37 . Example: Find the slope-intercept form of a [new] line which intersects y = (1/2)x – 3 at (4,-1) and is perpendicular to it. Solution: First, find slope of the new line from slope of the given line. Let m = slope of the new line. formula_38 formula_39 formula_40 The slope-intercept form of the new line will be: formula_41 where b is the y-intercept of the new line. Next, solve for y-intercept of new line using the intersecting point (4,-1) and the new slope of -2. Substitute x = 4 and y = -1 into the preceding equation and solve for b. formula_42 formula_43 formula_44 Finally, the slope-intercept form of the new perpendicular line is : formula_45 . Solving Systems of Simultaneous Equations Involving Equations Of Degree 2. The substitution method should be used for efficiency when solving nonlinear simultaneous equations, unless other methods such as the graphing method provide clear and simple solutions quickly (when they would be faster than substitution). Example: Solve the system of simultaneous equations. formula_46 formula_47 With the second equation, make a given term (here, 2x should be used) the subject. formula_48 Substitute the third equation into the first, and through factorization of the resulting, simplified quadratic with one variable the solutions can be found. formula_49 formula_50 formula_51 formula_52 formula_53 Hence we know formula_54 or formula_55 Then, we calculate that the two possibilities are: formula_54, formula_57 or; formula_58, formula_59 Solving Systems of Simultaneous Equations Using a Graphing Calculator. TI-83 (Plus) and TI-84 Plus: 1. Press "Y=" 2. Enter both equations, solved for Y 3. Press "GRAPH" 4. If all intersection points are not visible, press "ZOOM" then 0 or select "0: ZoomFit" 5. Press "2nd" then "TRACE" 6. Press 5 or select "5: intersect" 7. Move the cursor to one of the intersection points. (There may be only one) Each of these points represents one solution to the system. 8. Press "ENTER" three times 9. The coordinates of the intersection are shown at the bottom of the screen. 
Repeat steps 5-8 for other solutions.
TI-89 (Titanium): via Graphing:
1. Press the green "diamond key", located directly beneath the "2nd" (blue) button.
2. Follow steps 2-5 as listed above. To access "Y=" and "GRAPH", press the green "diamond key", then press F1 (it activates the tertiary function, "Y=") and F3 ("GRAPH"). To access "ZOOM" and "TRACE", press F2 and F3 (diamond function activated), respectively. For "ZoomFit", press F2, then "ALPHA" (white), then "=" (for A).
3. To locate the point of intersection, manually use the directional keypad (arrow keys), or press F5 for "Math", then 5 for "Intersection". (The second option is more difficult to use, however; manual searching and zooming is recommended.)
4. The coordinates are displayed at the bottom of the screen.
Repeat steps 2 and 3 until all desired solutions have been found. For new or additional equations, return to the "Y=" screen as described above.
via Simultaneous Equation Solver: Note: This is a default App on the TI-89 Titanium. If you are using the TI-89 or no longer have the Solver, visit the Texas Instruments site for a free download.
1. On the APPS screen, select "Simultaneous Equation Solver" and press enter. Press "3" when the next screen appears.
2. Enter the number of equations you wish to solve and the corresponding number of solutions.
3. The two equations are represented simultaneously in a 2 x 3 matrix (assuming that you are solving two equations and searching for two solutions; the size of the matrix depends on the number of equations you want to solve). In the corresponding boxes, enter the coefficients/constants of your equations, pressing "ENTER" every time you submit a value. (Remember that all equations must be converted into standard form – Ax + By = C – first!)
4. Once all values have been entered, press F5 to solve.
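For readers who like to check this kind of work by computer, here is a minimal Python sketch (an illustration added here, not part of the original lesson) that carries out the addition method on a general pair of equations written in the form Ax + By = C. The function name solve_by_addition is just an illustrative choice, and the call at the end reproduces the worked example x + 2y = 4, 3x - y = 5 from above.
def solve_by_addition(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by the addition method."""
    # Scale the equations so the y-terms are opposites, then add them:
    # multiply the first equation by b2 and the second by -b1.
    combined_x = a1 * b2 + a2 * (-b1)   # coefficient of x in the combined equation
    combined_c = c1 * b2 + c2 * (-b1)   # constant term of the combined equation
    if combined_x == 0:
        raise ValueError("the equations are dependent or inconsistent")
    x = combined_c / combined_x
    # Substitute x back into the first equation and solve for y
    # (this back-substitution step assumes b1 is not zero).
    y = (c1 - a1 * x) / b1
    return x, y

print(solve_by_addition(1, 2, 4, 3, -1, 5))   # prints (2.0, 1.0), matching the example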
3,745
Australian History/Contents. Australian History. Links. This is a wiki textbook—feel free to edit it, update it, correct it, and otherwise increase its teaching potential.
40
Introduction to Philosophical Logic/Prolegomena. What is logic? Logic is the study of the consistency of beliefs. For beliefs to be consistent it must be possible for them to "obtain" at the same time. For example, it is illogical to believe that the sky is completely blue and that the sky is completely red, because the sky being entirely blue is inconsistent with its being entirely red, i.e. it "is not possible" for the sky to be entirely red at the same time as its being entirely blue. Logic is also a study of "logical consequence", i.e. what follows by necessity from something else. By studying the inconsistency of beliefs, philosophers are able to study the "validity of arguments", as will be shown later. Methods of finding whether certain arguments are valid are described later. The symbolisation of these sentences, known as formalisation, simplifies and quickens this process. It also enables the philosopher to clarify ideas using an unambiguous language in which to represent thoughts. The sophistication of the language used enables greater insights into the significance of these thoughts (a cursory analysis of further logical languages is given in Other Logics). Aims of this book. This book aims to give the reader a basic understanding of logic and its relationship with philosophy, rather than a more mathematical approach to advanced logic. It is designed to provide the reader with a grasp of terms such as "valid", "consistency", and "entailment", which often arise in philosophy, to help the reader comprehend the philosophical issues that use them. Philosophical debates of certain issues are not developed here, and are at most briefly mentioned. Certain assumptions are made; the reader is advised to consider them. The final part of this book (Other Logics) is a development and extension of the principles described herein. It is designed to interest rather than fully inform the reader of these matters.
410
Italian/Vocabulary/Basic Words and Phrases. =Basic Vocabulary= Greetings. See Italian/Vocabulary/Greetings and Introductions Colours. See Italian/Vocabulary/Colors See also. Italian/Vocabulary/Common phrases
68
Italian/Grammar/Nouns. Gender. Italian nouns are either masculine or feminine. As a general rule (with a few exceptions), male human beings are associated with masculine nouns; female human beings are associated with feminine nouns. Collective nouns, referring to a group of human beings of both genders, are usually masculine. Examples: Attore ("actor") is masculine. (è maschile) Attrice ("actress") is feminine. (è femminile) Maestro (male "teacher") is masculine. Maestra (female "teacher") is feminine. Bambini ("children", a group of little boys) is masculine Bambini ("children", a mixed group of little boys and girls) is masculine Bambine ("children", a group of little girls) is feminine Exceptions (example): Persona ("person" - man or woman) is feminine Animals and things may be masculine or feminine, but there is no clear rule for this association. When you learn a new word, you should learn whether it is masculine or feminine. Examples: Pietra ("stone") is feminine. Sasso (also "stone") is masculine. Topo ("mouse") is masculine. Volpe ("fox") is feminine. The basic rule is that masculine singular nouns end with -o, feminine singular nouns end with -a. Most words follow this form, but this is not always the case. Examples Uomo ("man") is masculine. Donna ("woman") is feminine. Mano ("hand") is feminine. Programma ("program") is masculine. Cane ("dog") is masculine. Pace ("peace") is feminine. Città ("town") is feminine. Crisi ("crisis") is feminine. Virtù ("virtue") is feminine. Plural. In Italian, nouns are pluralized by a change in the last vowel. In short: Every case: A basic example: [Il] ragazzo ("boy") is masculine singular. - [I] ragazzi is the plural form. [La] ragazza ("girl") is feminine singular. - [Le] ragazze is the plural form. Other examples: [Il] programma ("program", masculine). Plural: [i] programmi. [La] mano ("hand", feminine). Plural: [le] mani. [Il] cane ("dog", masculine). Plural: [i] cani. [La] canzone ("song", feminine). Plural: [le] canzoni. [La] crisi ("crisis", feminine). Plural: [le] crisi. [La] città ("town", feminine). Plural: [le] città. [Il] caffè ("coffee", masculine). Plural: [i] caffè. [La] virtù ("virtue", feminine). Plural: [le] virtù. [Il] gas ("gas"). Plural: [i] gas. Warning. In some cases, plural feminine forms can be mistaken for singular masculine forms. Example of confusing forms: [Il] signore ("mister") is masculine singular. - [I] signori is the plural form. [La] signora ("Mrs") is feminine singular. - [Le] signore is the plural form. The problem here is that "signore" can refer either to one man or to two (or more) women. This problem is usually solved by taking the article into account. Special plurals. A number of masculine nouns ending in -o have special plurals ending in -a; these special plurals switch to feminine gender. They are a remnant of the old Latin neuter gender. Examples of plurals in -a: [Il] braccio ("arm", masculine). Plural: [le] braccia (feminine) [L'] uovo ("egg", masculine). Plural: [le] uova (feminine) Very few nouns have irregular plurals. Irregular plurals (example): Uomo ("man"), plural: uomini Some nouns are invariant (the plural is the same as the singular) even if they end in -a, -e or -o. These nouns usually originate from a shortening of longer names. Invariant plurals in -a, -e or -o (example): [La] moto (short for motocicletta) ("motorcycle"). Plural: [le] moto Nouns ending with i + vowel in some cases lose one i in the plural (a consistent rule for masculine nouns in -io).
Examples of plurals for nouns in -ia, -io: Camicia ("shirt", fem.). Plural: camicie Doccia ("shower", fem.). Plural: docce Arancia ("orange", fem.). Plural: arance Inizio ("start", masc.). Plural: inizi Negozio ("shop", masc.). Plural: negozi Raggio ("ray", masc.). Plural: raggi Masculine nouns ending with -co, -go can have plurals ending with -chi, -ghi or -ci, -gi. Feminine nouns ending with -ca, -ga usually have plurals ending with -che, -ghe. Examples of plurals for nouns in -co, -go, -ca, -ga: Amico ("friend", masc.). Plural: amici Arco ("arch", masc.). Plural: archi Lago ("lake", masc.). Plural: laghi Medico ("physician", masc.). Plural: medici Amica ("friend", fem.). Plural: amiche Barca ("boat", fem.). Plural: barche Targa ("plate", fem.). Plural: targhe Particular cases: -cia and -gia. Words ending in either -cia or -gia follow this rule: the plural ends with -cie/-gie if the final letter before the suffix (-cia/-gia) is a vowel; the plural ends with -ce/-ge if the final letter before the suffix is a consonant. If the stress falls on the i, the i always remains; for instance, such a noun in -cia makes its plural with -cie.
1,534
DirectX/Managed DirectX. Managed DirectX is a new technology introduced by Microsoft. It is a set of libraries - or, as they are referred to in .NET, assemblies - for creating applications that utilise DirectX. Each of the DLLs is used with a specific sub-system. The DLLs are as below.
73
Introduction to Philosophical Logic/Consistency and Inconsistency. =Consistency and Inconsistency= Logic is a study of the consistency of beliefs. A belief is part of a psychological state in which a person thinks, is under an impression or "believes" that the universe has some property. They are readily represented by sentences, a linguistic interpretation. Such sentences can then be evaluated for "truth". Truth. What is truth? What sort of thing could be considered true? Of the above sentences, only one, it seems, is capable of being assigned a truth value (considered true or false). The last sentence is true, but what of the others? The first is a command. It can be obeyed or disobeyed, but cannot be true (or false). The second is a question and again cannot be true or false. The third expresses a wish; it may be true that the person uttering it wishes he or she were not hungry, but it itself cannot be evaluated for truth. It shall be said that only "declarative" statements are evaluable for truth. A test, perhaps, for determining if a sentence is declarative or not is to ask if the following question is a meaningful, grammatical question in English: where P is the sentence being analysed. None of the above is meaningful English. These therefore fail the test. So what is it for a declarative sentence to be true. There are many truth theories, each defining truth slightly differently. The definition of truth adopted in this book comes from the "correspondence" theory, which states that a sentence P is true if and only if the state of affairs designated by P obtains; "it is true that P" is true if P. Falsehood is similarly defined for a sentence: a sentence is false if the state of affairs it designates do not exist. Although this seems straightforward and platitudinous, there are complications, explanation of which roams outside the scope of this book. The reader is urged to investigate these! For the sake of simplicity it will be assumed that all declarative sentences are true or false and so, if a sentence is true, its negation is false. The "negation" of a sentence P is any sentence that shares its meaning precisely with the sentence it is not the case that P. Definition of Consistency. Logic is concerned with the consistency of sets of sentences (used to represent beliefs) and with consistent outcomes to processing logical formulae containing operands and operatives. A set of sentences is said to be consistent if and only if there is at least one possible situation in which they are all true. So, the following set of sentences is consistent: Note that not all of the sentences are true (sentence 5). However, if the Earth were a cube, it need not be that any of the other sentences are false. There is a "possible situation" in which they are all true. In this way, logicians are not interested as such in the actual truth value of a sentence (whether it is in fact true or false), but more in a sentence's possible truth values. Consider the following English sentences: There is no possible situation in which 1 and 2 are both true, that is to say, it is impossible for them both to be true at the same moment. Such a pair of sentences are said to be inconsistent. A set of sentences is said to be inconsistent if there is no possible situation in which they are all true. A set of sentences may consist of only one sentence. In which case, if there is a possible situation in which that sentence is true, the sentence is said to be consistent. 
If there is no such situation, the sentence is said to be inconsistent. For example, "grass is green" and "snow is blue" are examples of consistent sentences (and the set containing them and only them is consistent). The sentences "grass is green" and "grass is blue" (when taken individually) are consistent sentences, but any set containing them both is inconsistent. The sentence "2+2=5" is inconsistent. There is no situation in which it is true. If two sentences cannot be both true and cannot be both false, they are said to be contradictory. For example, "Socrates was a philosopher" and "Socrates was not a philosopher" are contradictory statements. If two sentences cannot be both true at the same time (form an inconsistent set), they are said to be contrary. For example, "I have exactly 10 fingers" and "I have exactly 9 fingers" are contrary (both statements cannot be true, but both could be false). All contradictory statements are contrary. Logical Possibility and Necessity. What exactly is meant by "possible" where it is used above? The type of possibility being considered is "logical". If something is considered logically possible, it "could have been the case that it was true", regardless of whether it is actually true or not. So, it "could have been true" that Charles I was not beheaded, for example if he had evaded the executioner, or that the moon is made of cheese, if, for example, the universe was slightly different to how it actually is. Necessity can be defined in terms of possibility. It is, for example, impossible that Charles I was both beheaded and not beheaded, i.e. it is not possible that he was beheaded and not beheaded. A statement about impossibility is a statement about "necessity", i.e. the way things "must" be. To say something is impossible is to say that no possible situation exists where it is true. If no situation exists where it is true, it "must" be false - it is necessarily false and so its negation is necessarily true. The reader should strive to understand that 2 follows from 1 (and indeed that 1 follows from 2) before continuing. Some examples will clarify this (and extend the idea): *It must be that two plus two does not make five. *It must be that four is the square of two. *According to the definition above, it could not be that a sentence were neither true nor false. A sentence that describes a state of affairs that is not possible is said to be necessarily false. A sentence the negation of which describes a state of affairs that is not possible is said to be necessarily true.
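The notion of consistency above can be made concrete with a short program. The sketch below is an illustration added here, not part of the original text: it treats each sentence as a propositional formula over a handful of atomic sentences and checks, by brute force over all truth assignments, whether there is a possible situation in which every member of the set is true. The atom names G and B are just labels chosen for the example.
from itertools import product

def consistent(sentences, atoms):
    """Return a satisfying truth assignment if the set is consistent, else None."""
    for values in product([True, False], repeat=len(atoms)):
        situation = dict(zip(atoms, values))          # one "possible situation"
        if all(sentence(situation) for sentence in sentences):
            return situation
    return None

# "Grass is green" and "grass is blue", together with the background truth
# that grass cannot be both colours at once, form an inconsistent set:
beliefs = [
    lambda s: s["G"],                      # grass is green
    lambda s: s["B"],                      # grass is blue
    lambda s: not (s["G"] and s["B"]),     # it is not the case that both hold
]
print(consistent(beliefs, ["G", "B"]))      # None: no situation makes all three true
print(consistent(beliefs[:1], ["G", "B"]))  # a satisfying assignment: one sentence alone is consistent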
1,398
Introduction to Philosophical Logic/Arguments. Definition of an argument. An argument (in the context of logic) is defined as a set of "premises" and a "conclusion" where the conclusion and premises are separated by some trigger word, phrase or mark known as a "turnstile". For example: 1 "I think; therefore I am." There is only one premise in this argument, "I think". The conclusion is "I am" and the turnstile is "therefore" (although the semi-colon may be thought of as part of the turnstile). 2 "All men are mortal. Socrates was a man. So, Socrates was mortal." In this example there are two premises and the turnstile is "so". In English (and all other "natural languages"), the conclusion need not come at the end of an argument: 3 "Pigs can fly. For pigs have wings and all winged animals can fly." 4 "I am a safe driver: I have never had an accident." Here the turnstiles ("for" and ":") seem to indicate where the premises come as opposed to where the conclusion comes. Other examples of turnstiles (indicating either conclusion or premises) are: "so", "thus", "hence", "since", "because", "it follows that", "for the reason that", "from this it can be seen that". Sound arguments. A sound argument is an argument that satisfies three conditions. -True premises -Unambiguous premises -Valid logic If one of these conditions is unsatisfied then the argument is unsound, though in the case of ambiguous premises, not necessarily so. The first, second and fourth arguments (depending of course, on who "I" is - it is being assumed it is the author!) are all sound. Here are some more: 5 "Grass is green. The sky is blue. Snow is white. Therefore coal is black." 6 "Grass is green. The sky is blue. Snow is white. Therefore pigs can fly." 7 "2+2=4; hence 2+2=4" 8 "2+2=4; hence 2+2=5" Note that it is not necessary that the truth value of the conclusion play a role in determining whether an argument is sound. This can be determined by considering the truth of the premises and the validity of the argument. However, if the conclusion is not true, the argument is not sound. A sound argument always has consistent premises. This must be the case, since there is a possible situation (namely reality) in which they are all true. Valid arguments. Of greater interest to the logician are valid arguments. A valid argument is an argument for which there is no possible situation in which the premises are all true and the conclusion is false. Of the above arguments 2, 3 and 7 are valid. The reader should consider whether argument 1 is valid (read "Meditations on First Philosophy" by Descartes, chapters 1, 2). It does not matter whether the premises or conclusion are actually true for an argument to be valid. All that matters is that the premises "could not all be true and the conclusion false". Indeed, this means that an argument with inconsistent premises is always valid. There is no situation under which such an argument has all premises true and so there is no situation under which such an argument has all premises true and conclusion false. Hence it is valid. Similarly, an argument with a necessary conclusion can in no situation have all true premises and a false conclusion, since there is no situation in which the conclusion is false. An argument with the single premise 'The conclusion is true.' is valid (regardless of the conclusion). An argument with the conclusion 'The premises are all true.' is also valid. According to the definition of truth given previously, if the conclusion is false, its negation is true. 
Hence a valid argument can also be defined as an argument for which there is no possible situation under which the premises and the negation of the conclusion are all true. Hence, a valid argument is an argument such that the set of its premises and the negation of the conclusion is inconsistent. Such a set (the "union" of the set of premises and the set of the negation of the conclusion) is known as the "counter-example set". It is called the counter-example set for the following reason: if a possible situation is found in which the members of this set are all true (and so the set is found to be consistent), this situation provides a "counter-example" to the arguments being valid, i.e. the existence of such a situation proves that the argument is not valid. Counter-examples do not exist only for arguments, but also for statements: "Prime numbers are always odd" 2 provides a counter-example (a number) to this statement. "All animals have four legs" Human beings provide a counter-example (a type of animal). "Years are 365 days long" Leap years provide a counter-example (a type of year). "Years designated by a number divisible by four are leap years" The year 1900 provides a counter-example (a particular year). "It always rains in England" A singularly sunny day in September (today, when written - a particular interval of time) provides a counter-example. Counter-examples to declarative sentences refute their truth and are classes of things ("thing" being understood very broadly here) or particular things. Counter-examples to arguments refute their validity and are possible situations designated by sets of sentences (the counter-example set). Some clarification of the situation is often needed. For example, take argument 4. It will be modified slightly as follows: "John has never had an accident; therefore, John is a safe driver." It will be assumed here that "accident" means car accident and "driver" means motorist and "safe" means not liable to cause an accident. The counter-example set is: John has never had an accident. John is not a safe driver. Clarification by example: it may be that John has never driven in his life (and so never had an accident) because he is blind (and so cannot be considered a safe driver). As mentioned, an inconsistent counter-example set implies that a conclusion is valid because it means that there is no situation under which the premises are all true and the conclusion is false (the negation of the conclusion is true). Take argument 2; the counter-example set is: All men are mortal. Socrates was a man. Socrates was not mortal. These sentences cannot all be true at once. If Socrates was a man and he was not mortal, it could not be that all men are mortal. If all men are indeed mortal and Socrates was not mortal, he could not have been a man. If all men are mortal and Socrates was a man, he must have been mortal. Hence the counter-example set is inconsistent and the argument is valid. Use a similar approach to show that arguments 3 and 7 are valid (and use it to consider argument 1). This method is known as "reductio ad absurdum" (which translates literally from Latin as "reduction to absurdity"). The negation of the conclusion is absurd given the truth of the premises and so the conclusion must be true.
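The counter-example set gives a mechanical test for validity that is easy to express in code. The sketch below, added for illustration and not from the original text, models premises and conclusion as propositional formulas and searches every truth assignment for a situation in which the premises are all true and the conclusion false, which is exactly a counter-example in the sense defined above. The atom name P stands in for "2+2=4" and Q for "2+2=5", treated here as independent atomic sentences.
from itertools import product

def valid(premises, conclusion, atoms):
    """An argument is valid iff no situation makes all premises true and the
    conclusion false, i.e. iff its counter-example set is inconsistent."""
    for values in product([True, False], repeat=len(atoms)):
        situation = dict(zip(atoms, values))
        if all(p(situation) for p in premises) and not conclusion(situation):
            return False        # this situation is a counter-example to validity
    return True

# Argument 7, "2+2=4; hence 2+2=4": premise and conclusion are the same sentence.
print(valid([lambda s: s["P"]], lambda s: s["P"], ["P"]))          # True
# Argument 8, "2+2=4; hence 2+2=5": with the two claims as independent atoms,
# there is a situation with P true and Q false, so the argument is not valid.
print(valid([lambda s: s["P"]], lambda s: s["Q"], ["P", "Q"]))     # False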
1,657
Introduction to Philosophical Logic/Formalisation/Propositional Calculus. =Propositional Calculus= Natural languages, such as English, are flawed. One such flaw is "ambiguity", another is that they are tedious to write out if they are long. However, sentences can be formalised into a symbolic, logical language that contains neither of these flaws (although, as the discerning reader will discover, natural languages have many advantages over these languages). The first such language to be considered is the propositional calculus.
121
Introduction to Economics. Economics is the social science that studies the production, distribution and consumption of goods and services. It is a complex social science that spans from mathematics to psychology. At its most basic, however, economics considers how a society provides for its needs. Its most basic need is survival, which requires food, clothing and shelter. Once those are covered, it can then look at more sophisticated commodities such as services, personal transport, entertainment, and so on. Today, the social science known as "Economics" tends to refer only to the type of economic thought which political economists refer to as "Neoclassical Economics". It developed in the 18th century based on the idea that Economics can be analysed mathematically and scientifically.
What is Value? A generally accepted notion of Value is the worth of goods and services as determined by markets. Thus an important part of Economics is the study of policies and activities for the generation and transfer of Value within markets in the form of goods and services. Often a measure for the worth of goods and services is units of currency such as the US Dollar, the peso, etc. But, unlike units of measurement in Physics such as seconds for time, there exists no absolute basis for standardizing the units for Value. Usually the value of a currency is related to the value of gold, which in turn is valued by the amount of goods and services that it can be exchanged for. Because the rate of production of gold in the world is slower than the rate of creation of goods and services, the relation between gold and currencies is not as strict as it used to be. Thus, one of the most complicated and most often misunderstood parts of economics is the concept of "value". One of the big problems is the large number of different types of value that seem to exist, such as "exchange-value", "surplus-value" and "use-value". The discussion of value starts with one simple question: What is something worth? Today's most common answer is one of those answers that are so deceptively simple that they seem obvious once you know them. But remember that it took economists more than a hundred years to figure it out: "Something is worth whatever you think it is worth". In the 1st century BC, Publilius Syrus wrote: "Something is only worth what someone is willing to pay for it". This statement needs some explanation. Take as an example two companies that are thinking of buying a new copying machine. One company does not think they will use a copying machine that much, but the other knows it will copy a lot of papers. The second company will be prepared to pay more for a copying machine than the first one. They find different "utility" in the object. The companies also have a choice of models. The first company knows that many of the papers it copies will need to be copied on both sides. The second company knows that very few of the papers it copies will need double-sided copying. Of course, the second company will not pay much more for this feature, while the first company will. In this example, we see that a buyer will be prepared to pay more for an increase in utility compared to alternative products. But how does the seller value things? Well, in pretty much the same way. Of course, most sellers today do not intend to use the objects they sell themselves. The utility for the seller is not as an object of usage, but as a source of income. And here again it is marginal utility that comes in. At what price can you sell the object?
If you put in some more work, can you get a higher price? Here we also get into the reseller's utility. Somebody who deals in trading will look at an object, and the utility for him is being able to sell it again. How much work will it take, and what margins are possible? So not only do the two different buyers place different values on an object; the salesman puts his own value on it, and the original manufacturer may have put yet another value on it. The value depends on the person who does the valuation; it is subjective. So, how do all these subjective values turn into a price? To understand that, we must take a look at the place where things are bought and sold: the market.
The Market. A market is a legally defined place, system or arrangement - often national, but increasingly international, and autonomous or semi-autonomous - where economic agents (producers, facilitators, sellers, buyers, investors, speculators, etc.) come into close contact with each other for the purpose of economic transactions. A market can consist of something as limited as an electronic marketplace such as eBay, but in its broader sense markets are aggregated around their rules or around whatever is intrinsically specific to them - the things that bring agents or products together. We can, for instance, refer to the wine market (which, without more context, covers everything related to the national wine economy), but we can also speak of the global Porto market (specific to one type of wine, and national with regard to the Porto brand) to bring together its global economic agents.
Setting a price. An object's value, worth and utility are not something fixed, but instead a subjective property of the object. This subjectivity may be a bit surprising; it is easy to imagine that something must have an objective worth in order to be bought and sold. However, this is not the case; in fact, it is the opposite. When somebody has an item to sell, he puts a value on this item, and will not sell under that value. Likewise, when somebody wants to buy something, he will put a value on the object, and will not pay more than this. If these valuations overlap, so that the buyer's valuation is higher than the seller's valuation, the object will be sold. If the seller's valuation is higher than the buyer's, the buyer will simply say "it's not worth it" or the seller will say "it's worth more than that" and no deal will be made. Of course, you don't haggle about the price every time you buy a candy bar, but you still agree on a price. It's just that the store owner has put up a sign with a price, and you can either accept it or reject it. Neither you nor he wants to haggle about something that costs just eighty cents, because it's simply not worth the effort. So even though haggling is not a necessary part of pricing, both the buyer and the seller agree on the price, and both think they are better off after the exchange. Think about this: if you thought you would be worse off after buying something, would you buy it? Of course not, so buying and selling is an act of free will. (Unless of course somebody is pointing a gun at you, but then it's not buying and selling, but stealing.) Now, we know that the price ends up somewhere between the seller's valuation of the item and the buyer's valuation of the item. The question of what the price of an item will be therefore depends on these valuations. What, then, will these valuations depend on?
If an object had an intrinsic, objective worth, and both seller and buyer were aware of it (and had the same preferences, or valuation), the buyer's and seller's valuations of the object would never overlap, and no deal would ever be cut: the seller would never be willing to sell it at a price less than the objective worth (or else he would make a loss) and the buyer would never be willing to buy it at a price higher than the objective worth (why would he?), and hence nobody would ever sell or buy anything. The subjectiveness of value is necessary for things to be sold and bought at all.
Free and Regulated markets. The description above is of a free market, where anyone can buy and sell, and where the price is set by buyer and seller alone. This is not always the case. Instead, many markets are "regulated". While the national government may hold ultimate control over national affairs, at least to some degree depending on its level of political and economic independence, it will often delegate regulatory power to private or non-governmental public authorities, or even to supranational or international institutions; internationally there may also be other regulations arising from treaties and accords. A form of sanctioned bureaucratic approval is often necessary to limit the people involved, or the prices, or both, or to make sure the activity is not a danger to others. For example, not everyone is allowed to sell medicine, claim to be a doctor or drive a taxi. But regulation goes beyond public safety; it can also serve primarily as a way to protect special interest groups rather than the good of the general public. Regulated markets include, for instance, precious metals, currency, weapons, technical functions (the practice of medicine; drug production, prescription and sale) and even educational accreditation and technology. These regulations can take many shapes, for example the control of prices or of movement and transfer, the establishment of quality and quantity standards, and the freedom of, and requirements for, practising a job or function. Of course, it can be said that today all markets are regulated in some way. When the state sets up the rules that make a market function smoothly, this is not usually seen as making the market non-free; at best it is a way to exert control (protect, manage), preserve market (social and economic) stability and increase national competitiveness.
Money. Money is such an obvious and integral part of today's society that it is sometimes difficult to imagine society without it. It is also a very abstract concept, and can be hard to grasp. It comes in many forms, from special types of sea-shells, to pigs, via paper and coins, to digital blips in a computer. But what is money, really? As we have seen, people value things differently. But communicating this value to somebody else is a problem. The only way you can communicate this value is by comparing it with other things. But since all others, just like you, have subjective values, it becomes complex and confusing. This becomes evident if we look at how value affects barter.
The complexities of barter. When exchanging goods by barter, you need to find something you want more than something you have, while the person you barter with has to value the thing you have more than the thing you want. There are four individual valuations that must match.
An example might clear things up: if what you really want is some shoes, and the thing you have that you want to get rid of is a chair, you have to find a shoemaker that needs a chair, or you will not get any shoes. Say that the shoemaker instead needs a lamp. Then you can find somebody else that needs a chair and has a lamp, barter for that, and then go to the shoemaker. Now, the big problem here is that when you are to value the lamp, it is no longer what you think of the lamp that is important. It's what the shoemaker thinks of the lamp. You need to guess its average value, so that you can be reasonably sure that the shoemaker will want it. The effect of this is that you pretty much need to know how people value almost everything, since you'll be forced into bartering almost everything. This type of direct barter system may seem to some not very efficient, but it can in fact be extremely efficient in the proper context and with the needed infrastructure, especially in today's technological world. The reason we don't do it like that? Well, we can only state that it is not the general norm. There are plenty of communities that still use barter systems. Bartering is also the system that is adopted as soon as any other, more complex system fails or loses trust. But as trade evolved with the appearance of larger markets and the adoption of currency, the need for an increased level of economic control and taxation became evident to ruling classes, and inevitable beyond the level of city states.
The essence of money. So in essence, money is a common value system. It quantifies the value of an object in a way that everyone understands, and it makes communicating with others simpler. Instead of weighing the value of the shoes against the lamp against the chair, you can set a number on it. You can say that your chair is worth five units, the lamp maker can value his lamp at three units, and the shoemaker thinks his shoes are worth four units. We can now instantly and easily compare values. Trading suddenly got much easier. But that's not all. With a common value system that is based on exchangeable entities, we can exchange these entities as payment. You can now go down to the market with your chair, sell the chair to the highest bidder, and then go with your money to the shoemaker and buy a pair of shoes. The shoemaker in turn takes the money and goes to the lamp maker. No longer do you need to evaluate the average market value of the lamp, or cut three-way deals. All you need to do is find somebody that is willing to pay what you think your chair is worth, and find a pair of shoes that is cheap enough for you. And it doesn't even end there! Money can be stored, because it does not rot like wood or rust like steel. If you have a source of income that is seasonal, you can keep the money you make during the high season and buy food with it during the low season. So money is simply a common value system based on exchangeable entities. But this simple concept makes life much less complicated in so many ways.
Supply and Demand. The principle of "supply and demand" is one of the best-known principles of economic theory. It was first posited by Jean-Baptiste Say, an 18th-century Classical Political Economist who suggested that demand and supply are interrelated. His theory was that the more people want something, the higher the demand is for it and the more they will value it.
So if something is in low supply but in high demand, its value increases, just as it will decrease if there is an abundance of, or low demand for, that item. For example: there are 100 dolls and 400 people that want those dolls. Since there are more people than there are dolls, people will pay more money for them. Alfred Marshall, a famous neoclassical economist, went further in the late 19th century and developed a mathematical model of supply and demand. The two variables in this theory are price and quantity. The forces of demand and supply are prominent in determining the price of a commodity.
DEMAND - the amount of a good that will be bought at given prices over a period of time.
SUPPLY - the amount of a good that sellers are prepared to sell at a given price.
MARKET SYSTEM / PRICE MECHANISM - the automatic determination of prices and the allocation of resources by the operation of markets in the economy.
PRICE - the amount of money goods are exchanged for in a transaction.
Capital. Capitalism, as its name suggests, is based on the ownership of capital. What is capital? Basically, capital is anything that can be traded for something else. Any amount of money is capital, as it can be traded for a huge variety of things. Personal items are also capital because they can be sold; houses, cars, and other such items fall under this category. The ability to work can also be considered capital, or labour-power. Karl Marx posited that there was something that separated capital into two broad categories. Some capital is bought, and then its value is fixed; this applies to an item of clothing or food, for instance. Some may lose value and depreciate; cars fall into this category. Some capital, however, can actually produce more capital which can then be sold; this, he argued, is "real" capital. For example, if you have a cookie-stamper and a van, some flour, butter, and other cookie ingredients, then with that capital you could produce cookies and ship them around in the van, selling them for a profit, albeit a small one.
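To make the price-and-quantity model above concrete, here is a small worked sketch in Python (added for illustration only; the demand and supply schedules are hypothetical and do not come from the text). It finds the price at which the quantity demanded equals the quantity supplied, which is the market-clearing price the price mechanism settles on.
# Hypothetical linear schedules: quantity demanded falls as price rises,
# quantity supplied rises as price rises.
def quantity_demanded(price):
    return 100 - 2 * price       # e.g. at a price of 10, buyers want 80 units

def quantity_supplied(price):
    return 10 + 4 * price        # e.g. at a price of 10, sellers offer 50 units

# Equilibrium: 100 - 2P = 10 + 4P  =>  90 = 6P  =>  P = 15, Q = 70.
equilibrium_price = 90 / 6
print(equilibrium_price,
      quantity_demanded(equilibrium_price),
      quantity_supplied(equilibrium_price))    # 15.0 70.0 70.0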
3,553
Python Programming/Interactive mode. Python has two basic modes: script and interactive. The normal mode is the mode where scripted and finished .py files are run in the Python interpreter. Interactive mode is a command line shell which gives immediate feedback for each statement, while running previously fed statements in active memory. As new lines are fed into the interpreter, the fed program is evaluated both in part and in whole. Interactive mode is a good way to play around and try variations on syntax. On macOS or Linux, open a terminal and simply type "python". On Windows, bring up the command prompt and type "py", or start an interactive Python session by selecting "Python (command line)", "IDLE", or a similar program from the task bar / app menu. IDLE is a GUI which includes both an interactive mode and options to edit and run files. Python should print something like this:
$ python
Python 3.0b3 (r30b3:66303, Sep 8 2008, 14:01:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
The >>> is Python's way of telling you that you are in interactive mode. In interactive mode what you type is immediately run. Try typing 1+1 in. Python will respond with 2. Interactive mode allows you to test out and see what Python will do. If you ever feel the need to play with new Python statements, go into interactive mode and try them out.
A sample interactive session:
>>> 5
5
>>> print(5*7)
35
>>> "hello" * 2
'hellohello'
>>> "hello".__class__
<class 'str'>
However, you need to be careful in the interactive environment to avoid confusion. For example, the following is a valid Python script:
if 1:
    print("True")
print("Done")
If you try to enter this as written in the interactive environment, you might be surprised by the result:
>>> if 1:
...     print("True")
... print("Done")
  File "<stdin>", line 3
    print("Done")
SyntaxError: invalid syntax
What the interpreter is saying is that the indentation of the second print was unexpected. You should have entered a blank line to end the first (i.e., "if") statement before you started writing the next print statement. For example, you should have entered the statements as though they were written:
if 1:
    print("True")

print("Done")
Which would have resulted in the following:
>>> if 1:
...     print("True")
...
True
>>> print("Done")
Done
Interactive mode. Instead of Python exiting when the program is finished, you can use the -i flag to start an interactive session. This can be very useful for debugging and prototyping.
python -i hello.py
where hello.py contains, for example:
for i in range(-1,-5,-1): print(i)
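As a rough illustration of what the -i flag gives you (this transcript is a sketch, assuming hello.py contains only the one-line loop shown above), the program runs first and you are then dropped at the interactive prompt with its variables still in scope:
$ python -i hello.py
-1
-2
-3
-4
>>> i
-4
>>> i * 10
-40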
731
Python Programming/Creating Python Programs. Welcome to Python! This tutorial will show you how to start writing programs. Python programs are nothing more than text files, and they may be edited with a standard text editor program. What text editor you use will probably depend on your operating system: any text editor can create Python programs. However, it is easier to use a text editor that includes Python syntax highlighting.
Hello, World. The very first program that beginning programmers usually write or learn is the "Hello, World!" program. This program simply outputs the phrase "Hello, World!" and then terminates itself. Let's write "Hello, World!" in Python! Open up your text editor and create a new file called hello.py containing just this line (you can copy-paste if you want):
print('Hello, World!')
The line below is used for Python 3.x:
print("Hello, World!")
You can also add the line below to pause the program at the end until you press Enter:
input()
This program uses the print() function, which simply outputs its parameters to the terminal. By default, print() appends a newline character (\n) to its output, which simply moves the cursor to the next line. Now that you've written your first program, let's run it in Python! This process differs slightly depending on your operating system.
Windows. If it didn't work, make sure your PATH contains the python directory. See Getting Python.
Linux (advanced).
print('Hello, world!')
Note that this should mainly be done for complete, finished programs; if you have a script that you made and use frequently, then it might be a good idea to put it somewhere in your home directory and put a link to it in /usr/bin. If you want a playground, a good idea is to create a ~/.local/bin directory (mkdir ~/.local/bin) and then put scripts in there. To make the content of ~/.local/bin executable the same way /usr/bin is, add it to your PATH with export PATH="$PATH:$HOME/.local/bin" (you can add this line to your shell rc file, for example ~/.bashrc).
Result. The program should print:
Hello, world!
Congratulations! You're well on your way to becoming a Python programmer.
Exercises. Solutions
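If you want to run the script directly on Linux (a common follow-up to the section above; this is a general sketch rather than the book's exact instructions), give the file a shebang line and mark it executable:
#!/usr/bin/env python3
# hello.py - the same program as above, plus a shebang line so the shell
# knows which interpreter to use when the file is run directly.
print('Hello, World!')
After saving, run chmod +x hello.py once; you can then start the program with ./hello.py, or place it in ~/.local/bin as suggested above so it can be run from anywhere.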
509
Algorithm Implementation/Sorting/Merge sort. Merge Sort. Merge sort is a divide-and-conquer sorting algorithm. You start with an unordered sequence. If the sequence contains more than one element, you split it into two halves, recursively sort each half, and then merge the two sorted halves back together into a single sorted sequence. A sequence of zero or one element is already sorted, which ends the recursion.
Merging. The two sorted halves are merged in the following way: repeatedly compare the first remaining element of one half with the first remaining element of the other, and move the smaller of the two to the end of the output sequence. When one half is exhausted, append the remaining elements of the other half. Because both halves are sorted, the output produced this way is sorted as well.
Time Cost. Let n be the number of items in the sequence to be sorted. The sequence is halved O(log n) times, and the merging done at each level of the recursion takes O(n) time in total, so the total time to sort the sequence is O(n log n). A straightforward merge also uses O(n) auxiliary space.
Common Lisp. Naive implementation, translation of pseudocode found at Wikipedia.
(defmacro apenda-primeiro (ret1 left1)
  "Appends first element of left1 to right1, and removes first element from left1."
  `(progn
     (setf ,ret1 (if (eq ,ret1 nil)
                     (cons (car ,left1) nil)
                     (append ,ret1 (cons (car ,left1) nil))))
     (setf ,left1 (cdr ,left1))))
(defun merge-func (left right)
  "Our very own merge function, takes two lists, left and right, as arguments, and returns a new merged list."
  (let (ret)
    (loop
      (if (or (not (null left)) (not (null right)))
          (progn
            (if (and (not (null left)) (not (null right)))
                (if (<= (car left) (car right))
                    (apenda-primeiro ret left)
                    (apenda-primeiro ret right))
                (if (not (null left))
                    (apenda-primeiro ret left)
                    (if (not (null right))
                        (apenda-primeiro ret right)))))
          (return)))
    ret))
(defun merge-sort (m)
  "Merge-sort proper. Takes a list m as input and returns a new, sorted list; doesn't touch the input."
  (let* ((tam (length m))
         (mid (ceiling (/ tam 2)))
         (left)
         (right))
    (if (<= tam 1)
        m
        (progn
          (loop for n from 0 to (- mid 1) do (apenda-primeiro left m))
          (loop for n from mid to (- tam 1) do (apenda-primeiro right m))
          (setf left (merge-sort left))
          (setf right (merge-sort right))
          (merge-func left right)))))
Simpler Implementation in a somewhat more functional style.
"Sorts the elements from the first and second list in ascending order and puts them in `sorted`" (cond ((and (null frst) (null scond)) sorted) ((null frst) (append (reverse sorted) scond)) ((null scond) (append (reverse sorted) frst)) (t (let ((x (first frst)) (y (first scond))) (if (< x y) (sort-func (rest frst) scond (push x sorted)) (sort-func frst (rest scond) (push y sorted))))))) "Divides the elements in `lst` into individual elements and sorts them" (when (not (null lst)) (let ((divided (mapcar #'(lambda (x) (list x)) lst))) (labels ((merge-func (div-list &optional (combined '())) ; merges the elements in ascending order (if div-list (merge-func (rest (rest div-list)) (push (sort-func (first div-list) (second div-list)) combined)) combined)) (final-merge (div-list) ; runs merge-func until all elements have been reconciled (if (> (length div-list) 1) (final-merge (merge-func div-list)) div-list))) (final-merge divided))))) C. ///function: mergeSort(name_array); //tipo Data used: typedef struct data{ char*some; int data; } DATA; typedef struct _nodo{ int key; DATA data; }nodo; ///n is kept as global int n; void merge(nodo*a,nodo*aux,int left,int right,int rightEnd){ int i,num,temp,leftEnd=right-1; temp=left; num=rightEnd-left+1; while((left<=leftEnd)&&(right<=rightEnd)){ if(a[left].key<=a[right].key){ aux[temp++]=a[left++]; else{ aux[temp++]=a[right++]; while(left<=leftEnd){ aux[temp++]=a[left++]; while(right<=rightEnd){ aux[temp++]=a[right++]; for (i=1;i<=num;i++,rightEnd--){ a[rightEnd]=aux[rightEnd]; void mSort(nodo*a,nodo*aux,int left,int right){ int center; if (left<right){ center=(left+right)/2; mSort(a,aux,left,center); mSort(a,aux,center+1,right); merge(a,aux,left,center+1,right); void mergeSort(nodo*p){ nodo*temp=(nodo*)malloc(sizeof(nodo)*n); mSort(p,temp,0,n-1); free(temp); C++. A recursive implementation using the C++14 standard library. template <typename BidirIt, typename Compare = std::less<» void merge_sort(BidirIt first, BidirIt last, Compare cmp = Compare {}) const auto n = std::distance(first, last); if (n > 1) { const auto middle = std::next(first, n / 2); merge_sort(first, middle, cmp); merge_sort(middle, last, cmp); std::inplace_merge(first, middle, last, cmp); int main() std::vector<int> v {3, -2, 1, 5, -9, 10, 3, -3, 2}; merge_sort(std::begin(v), std::end(v)); // sort increasing merge_sort(std::begin(v), std::end(v), std::greater<> {}); // sort decreasing subroutine Merge(A,NA,B,NB,C,NC) integer, intent(in) :: NA,NB,NC ! Normal usage: NA+NB = NC integer, intent(in out) :: A(NA) ! B overlays C(NA+1:NC) integer, intent(in) :: B(NB) integer, intent(in out) :: C(NC) integer :: I,J,K I = 1; J = 1; K = 1; do while(I <= NA .and. 
J <= NB) if (A(I) <= B(J)) then C(K) = A(I) I = I+1 else C(K) = B(J) J = J+1 endif K = K + 1 enddo do while (I <= NA) C(K) = A(I) I = I + 1 K = K + 1 enddo return end subroutine merge recursive subroutine MergeSort(A,N,T) integer, intent(in) :: N integer, dimension(N), intent(in out) :: A integer, dimension((N+1)/2), intent (out) :: T integer :: NA,NB,V if (N < 2) return if (N == 2) then if (A(1) > A(2)) then V = A(1) A(1) = A(2) A(2) = V endif return endif NA=(N+1)/2 NB=N-NA call MergeSort(A,NA,T) call MergeSort(A(NA+1),NB,T) if (A(NA) > A(NA+1)) then T(1:NA)=A(1:NA) call Merge(T,NA,A(NA+1),NB,A,N) endif return end subroutine MergeSort program TestMergeSort integer, parameter :: N = 8 integer, dimension(N) :: A = (/ 1, 5, 2, 7, 3, 9, 4, 6 /) integer, dimension ((N+1)/2) :: T call MergeSort(A,N,T) write(*,'(A,/,10I3)')'Sorted array :',A end program TestMergeSort function merge_sort(arr) if length(arr) <= 1 return arr end middle = trunc(Int, length(arr) / 2) L = arr[1:middle] R = arr[middle+1:end] merge_sort(L) merge_sort(R) i = j = k = 1 while i <= length(L) && j <= length(R) if L[i] < R[j] arr[k] = L[i] i+=1 else arr[k] = R[j] j+=1 end k+=1 end while i <= length(L) arr[k] = L[i] i+=1 k+=1 end while j <= length(R) arr[k] = R[j] j+=1 k+=1 end arr end (if (= 0 (length x)) ;else (if (= (length x) 1) x ;else (combine (mergesort (firstHalf x (/ (length x) 2))) (mergesort (lastHalf x (/ (length x) 2))) ) ) (if (null? list1) list2 ;else (if (null? list2) list1 ;else (if (<= (car list1) (car list2)) ;car of list 1 is second element of list 2 (cons (car list1) (combine (cdr list1) list2)) ;else (cons (car list2) (combine list1 (cdr list2))) ) ) (if (= N 0) null (if (or (= N 1) (< N 2)) (list (car L)) ;else (cons (car L) (firstHalf (cdr L) (- N 1))))) (if (= N 0) L); Base Case (if (or (= N 1) (< N 2)) (cdr L) ;else (lastHalf (cdr L) (- N 1))) A slightly more efficient version only traverses the input list once to split (note that codice_1 takes linear time in Haskell): sort :: Ord a => [a] -> [a] sort [] = [] sort [x] = [x] sort xs = merge (sort ys) (sort zs) where (ys,zs) = split xs merge [] ys = ys merge xs [] = xs merge (x:xs) (y:ys) | x<=y = x : merge xs (y:ys) | otherwise = y : merge (x:xs) ys split [] = ([], []) split [x] = ([x], []) split (x:y:xs) = (x:l, y:r) where (l, r) = split xs function merge(sequence left, sequence right) while length(left)>0 and length(right)>0 do if left[1]<=right[1] then result = append(result, left[1]) left = left[2..$] else result = append(result, right[1]) right = right[2..$] end if end while return result & left & right end function function mergesort(sequence m) sequence left, right integer middle if length(m)<=1 then return m end if middle = floor(length(m)/2) left = mergesort(m[1..middle]) right = mergesort(m[middle+1..$]) if left[$]<=right[1] then return left & right elsif right[$]<=left[1] then return right & left end if return merge(left, right) end function This is an ISO-Prolog compatible implementation of merge sort. mergesort(L, Sorted) :- once(mergesort_r(L, Sorted)). mergesort_r([], []). mergesort_r([X], [X]). mergesort_r(L, Sorted) :- split(L, Left, Right), mergesort_r(Left, SortedL), mergesort_r(Right, SortedR), merge(SortedL, SortedR, Sorted). % Split list into two roughly equal-sized lists. split([], [], []). split([X], [X], []). split([X,Y|Xs], [X|Left], [Y|Right]) :- split(Xs, Left, Right). % Merge two sorted lists into one. merge(Left, [], Left). merge([], Right, Right). 
merge([X|Left], [Y|Right], [Z|Merged]) :- (X @< Y -> Z = X, merge(Left, [Y|Right], Merged) Z = Y, merge([X|Left], Right, Merged) A "standard" mergesort: from heapq import merge def mergesort(w): """Sort list w and return it.""" if len(w)<2: return w else: # sort the two halves of list w recursively with mergesort and merge them return merge(mergesort(w[:len(w)//2]), mergesort(w[len(w)//2:])) An alternative method, using a recursive algorithm to perform the merging in place (except for the O(log n) overhead to trace the recursion) in O(n log n) time: def merge(lst, frm, pivot, to, len1, len2): if len1==0 or len2==0: return if len1+len2 == 2: if lst([pivot])<lst[frm]: lst[pivot], lst[frm] = lst[frm], lst[pivot] return if len1 > len2: len11 = len1/2 firstcut, secondcut, length = frm+len11, pivot, to-pivot while length > 0: half = length/2 mid = secondcut+half if lst[mid]<lst[firstcut]: secondcut, length = mid+1, length-half-1 else: length = half len22 = secondcut - pivot else: len22 = len2/2 firstcut, secondcut, length = frm, pivot+len22, pivot-frm while length > 0: half = length/2 mid = firstcut+half if lst[secondcut]<lst[mid]: length = half else: firstcut, length = mid+1, length-half-1 len11 = firstcut-frm if firstcut!=pivot and pivot!=secondcut: n, m = secondcut-firstcut, pivot-firstcut while m != 0: n, m = m, n%m while n != 0: n -= 1 p1, p2 = firstcut+n, n+pivot val, shift = lst[p1], p2-p1 while p2 != firstcut+n: lst[p1], p1 = lst[p2], p2 if secondcut-p2>shift: p2 += shift else: p2 = pivot-secondcut+p2 lst[p1] = val newmid = firstcut+len22 merge(lst, frm, firstcut, newmid, len11, len22) merge(lst, newmid, secondcut, to, len1-len11, len2-len22) def sort(lst, frm=0, to=None): if to is None: to = len(lst) if to-frm<2: return middle = (frm+to)/2 sort(lst, frm, middle) sort(lst, middle, to) merge(lst, frm, middle, to, middle-frm, to-middle) def merge_sort(array) return array if array.size <= 1 mid = array.size / 2 merge(merge_sort(array[0...mid]), merge_sort(array[mid...array.size])) end def merge(left, right) result = [] until left.empty? || right.empty? result « (left[0] <= right[0] ? left : right).shift end result.concat(left).concat(right) end sort [] = [] sort [x] = [x] sort array = merge (sort left) (sort right) where left = [array!y | y <- [0..mid]] right = [array!y | y <- [(mid+1)..max]] max = #array - 1 mid = max div 2 Standard ML. fun mergesort [] = [] let fun merge ([],ys) = ys (*merges two sorted lists to form a sorted list *) | merge (xs,[]) = xs | merge (x::xs,y::ys) = if x<y then x::merge (xs,y::ys) else y::merge (x::xs,ys) val half = length(lst) div 2; in merge (mergesort (List.take (lst, half)),mergesort (List.drop (lst, half))) end public int[] mergeSort(int array[]) // pre: array is full, all elements are valid integers (not null) // post: array is sorted in ascending order (lowest to highest) // if the array has more than 1 element, we need to split it and merge the sorted halves if(array.length > 1) // number of elements in sub-array 1 // if odd, sub-array 1 has the smaller half of the elements // e.g. 
if 7 elements total, sub-array 1 will have 3, and sub-array 2 will have 4 int elementsInA1 = array.length / 2; // we initialize the length of sub-array 2 to // equal the total length minus the length of sub-array 1 int elementsInA2 = array.length - elementsInA1; // declare and initialize the two arrays once we've determined their sizes int arr1[] = new int[elementsInA1]; int arr2[] = new int[elementsInA2]; // copy the first part of 'array' into 'arr1', causing arr1 to become full for(int i = 0; i < elementsInA1; i++) arr1[i] = array[i]; // copy the remaining elements of 'array' into 'arr2', causing arr2 to become full for(int i = elementsInA1; i < elementsInA1 + elementsInA2; i++) arr2[i - elementsInA1] = array[i]; // recursively call mergeSort on each of the two sub-arrays that we've just created // note: when mergeSort returns, arr1 and arr2 will both be sorted! // it's not magic, the merging is done below, that's how mergesort works :) arr1 = mergeSort(arr1); arr2 = mergeSort(arr2); // the three variables below are indexes that we'll need for merging // [i] stores the index of the main array. it will be used to let us // know where to place the smallest element from the two sub-arrays. // [j] stores the index of which element from arr1 is currently being compared // [k] stores the index of which element from arr2 is currently being compared int i = 0, j = 0, k = 0; // the below loop will run until one of the sub-arrays becomes empty // in my implementation, it means until the index equals the length of the sub-array while(arr1.length != j && arr2.length != k) // if the current element of arr1 is less than current element of arr2 if(arr1[j] < arr2[k]) // copy the current element of arr1 into the final array array[i] = arr1[j]; // increase the index of the final array to avoid replacing the element // which we've just added i++; // increase the index of arr1 to avoid comparing the element // which we've just added j++; // if the current element of arr2 is less than current element of arr1 else // copy the current element of arr2 into the final array array[i] = arr2[k]; // increase the index of the final array to avoid replacing the element // which we've just added i++; // increase the index of arr2 to avoid comparing the element // which we've just added k++; // at this point, one of the sub-arrays has been exhausted and there are no more // elements in it to compare. this means that all the elements in the remaining // array are the highest (and sorted), so it's safe to copy them all into the // final array. 
while(arr1.length != j) array[i] = arr1[j]; i++; j++; while(arr2.length != k) array[i] = arr2[k]; i++; k++; // return the sorted array to the caller of the function return array; var defaultComparator = function (a, b) { if (a < b) { return -1; if (a > b) { return 1; return 0; Array.prototype.mergeSort = function( comparator ) { var i, j, k, firstHalf, secondHalf, arr1, arr2; if (this.length > 1) { firstHalf = Math.floor(this.length / 2); secondHalf = this.length - firstHalf; arr1 = []; arr2 = []; for (i = 0; i < firstHalf; i++) { arr1[i] = this[i]; for(i = firstHalf; i < firstHalf + secondHalf; i++) { arr2[i - firstHalf] = this[i]; arr1.mergeSort( comparator ); arr2.mergeSort( comparator ); i=j=k=0; while(arr1.length != j && arr2.length != k) { if ( comparator( arr1[j], arr2[k] ) <= 0 ) { this[i] = arr1[j]; i++; j++; else { this[i] = arr2[k]; i++; k++; while (arr1.length != j) { this[i] = arr1[j]; i++; j++; while (arr2.length != k) { this[i] = arr2[k]; i++; k++; })(); Separate into two functions: function mergesort(list){ return (list.length < 2) ? list : merge(mergesort(list.splice(0, list.length » 1)), mergesort(list)); function merge(left, right){ var sorted = []; while (left.length && right.length) sorted.push(left[0] <= right[0]? left.shift() : right.shift()); while(left.length) sorted.push(left.shift()); while(right.length) sorted.push(right.shift()); return sorted; use sort '_mergesort'; sort @array; function merge_sort(array $left, array $right) { $result = []; while (count($left) && count($right)) ($left[0] < $right[0]) ? $result[] = array_shift($left) : $result[] = array_shift($right); return array_merge($result, $left, $right); function merge(array $arrayToSort) { if (count($arrayToSort) == 1) return $arrayToSort; $left = merge(array_slice($arrayToSort, 0, count($arrayToSort) / 2)); $right = merge(array_slice($arrayToSort, count($arrayToSort) / 2, count($arrayToSort))); return merge_sort($left, $right); def mergeSort(def list){ else { center = list.size() / 2 left = list[0..center] right = list[center..list.size() - 1] merge(mergeSort(left), mergeSort(right)) def merge(def left, def right){ def sorted = [] while(left.size() > 0 && right.size() > 0) if(left.get(0) <= right.get(0)){ sorted « left.remove(0) }else{ sorted « right.remove(0) sorted = sorted + left + right return sorted class APPLICATION feature -- Algorithm mergesort (a: ARRAY [INTEGER]; l, r: INTEGER) -- Recursive mergesort local m: INTEGER do if l < r then m := (l + r) // 2 mergesort(a, l, m) mergesort(a, m + 1, r) merge(a, l, m, r) end end feature -- Utility feature merge (a: ARRAY [INTEGER]; l, m, r: INTEGER) -- The merge feature of all mergesort variants local b: ARRAY [INTEGER] h, i, j, k: INTEGER do i := l j := m + 1 k := l create b.make (l, r) from until i > m or j > r loop -- Begins the merge and copies it into an array `b' if a.item (i) <= a.item (j) then b.item (k) := a.item (i) i := i + 1 elseif a.item (i) > a.item (j) then b.item (k) := a.item (j) j := j + 1 end k := k + 1 end -- Finishes the copy of the uncopied part of the array if i > m then from h := j until h > r loop b.item (k + h - j) := a.item (h) h := h + 1 end elseif j > m then from h := i until h > m loop b.item (k + h - i) := a.item (h) h := h + 1 end end -- Begins the copy to the real array from h := l until h > r loop a.item (h) := b.item (h) h := h + 1 end end feature -- Attributes array: ARRAY [INTEGER] end C#. 
public class MergeSort<T> where T : IComparable public T[] Sort(T[] source) T[] sorted = this.Split(source); return sorted; private T[] Split(T[] from) if (from.Length == 1) // size <= 1 considered sorted return from; else int iMiddle = from.Length / 2; T[] left = new T[iMiddle]; for (int i = 0; i < iMiddle; i++) left[i] = from[i]; T[] right = new T[from.Length - iMiddle]; for (int i = iMiddle; i < from.Length; i++) right[i - iMiddle] = from[i]; // Single threaded version T[] sLeft = this.Split(left); T[] sRight = this.Split(right); T[] sorted = this.Merge(sLeft, sRight); return sorted; private T[] Merge(T[] left, T[] right) // each array will individually be sorted. // Do a sort of card merge to merge them in a sorted sequence int leftLen = left.Length; int rightLen = right.Length; T[] sorted = new T[leftLen + rightLen]; int lIndex = 0; int rIndex = 0; int currentIndex = 0; while (lIndex < leftLen || rIndex < rightLen) // If either has had all elements taken, just take remaining from the other. // If not, compare the two current and take the lower. if (lIndex >= leftLen) sorted[currentIndex] = right[rIndex]; rIndex++; currentIndex++; else if (rIndex >= rightLen) sorted[currentIndex] = left[lIndex]; lIndex++; currentIndex++; else if (left[lIndex].CompareTo(right[rIndex]) >= 0) // l > r, so r goes into dest sorted[currentIndex] = right[rIndex]; rIndex++; currentIndex++; else sorted[currentIndex] = left[lIndex]; lIndex++; currentIndex++; return sorted; const maxA = 1000; type TElem = integer; TArray = array[1..maxA]of TElem; procedure merge(var A:TArray;p,q,r:integer); var i,j,k,n1,n2:integer; B:TArray; begin n1 := q - p + 1; n2 := r - q; for k := p to r do B[k - p + 1] := A[k]; i := 1; j :=n1 + 1; k := p; while(i <= n1)and(j <= n1 + n2)do begin if B[i] <= B[j] then begin A[k] := B[i]; i := i + 1; end else begin A[k] := B[j]; j := j + 1; end; k := k + 1; end; while i <= n1 do begin A[k] := B[i]; i := i + 1; k := k + 1; end; while j <= n1 + n2 do begin A[k] := B[j]; j := j + 1; k := k + 1; end; end; procedure mergeSort(var A:TArray;p,r:integer); var q:integer; begin if p < r then begin q := (p + r) div 2; mergeSort(A,p,q); mergeSort(A,q + 1,r); merge(A,p,q,r); end; end; procedure mergeSort(var A:TArray;n:integer); var p,q,r,k:integer; begin k := 1; while k <= n do begin p := 1; while p + k <= n do begin q := p + k - 1; if p + 2 * k - 1 < n then r := p + 2 * k - 1 else r := n; merge(A,p,q,r); p := p + 2 * k; end; k := k * 2; end; end; Using a functor to create modules that specialize sorting lists of a given type with a particular comparison function: module type Comparable = sig type t val compare: t -> t -> int end module MakeSorter(M : Comparable) = struct (** Split list into two roughly equal halves *) let partition (lst: M.t list) = let rec helper lst left right = match lst with | [ ] -> left, right | x :: [ ] -> x :: left, right | x :: y :: xs -> helper xs (x :: left) (y :: right) in helper lst [] [] (** Merge two sorted lists *) let rec merge left right = match left, right with | _, [ ] -> left (* Emmpty right list *) | [ ], _ -> right (* Empty left list *) | x :: xs, y :: _ when (M.compare x y) < 0 -> x :: merge xs right (* First element of left is less than first element of right *) | _, y :: ys -> y :: merge left ys (* First element of right is greater than or equal to first element of left *) let rec sort lst = match lst with | [ ] | _ :: [ ] -> lst (* Empty and single element lists are always sorted *) | lst -> let left, right = partition lst in merge (sort left) (sort right) end 
module StringSort = MakeSorter(String) let () = let animals = [ "dog"; "cow"; "ant"; "zebra"; "parrot" ] in let sorted_animals = StringSort.sort animals in List.iter print_endline sorted_animals
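A small usage note for the heapq-based Python version near the top of this listing: because heapq.merge returns a lazy iterator, the top-level call yields an iterator rather than a new list, so callers typically wrap the result in list(). A minimal check follows; the variable names are only illustrative.

# Assumes the `mergesort` function defined above (the one built on heapq.merge).
data = [5, 2, 9, 1, 5, 6]
result = list(mergesort(data))   # materialise the iterator returned for len >= 2
print(result)                    # [1, 2, 5, 5, 6, 9]
print(result == sorted(data))    # True; the input list itself is left unmodified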
8,569
Data Structures/Singly Linked Lists. Singly Linked Lists are a type of data structure. A linked list provides an alternative to an array-based structure. A linked list, in its simplest form, is a collection of nodes that collectively form a linear sequence. In a singly linked list, each node stores a reference to an object that is an element of the sequence, as well as a reference to the next node of the list. It does not store any pointer or reference to the previous node. To store a singly linked list, only the reference or pointer to the first node in that list must be stored. The last node in a singly linked list points to nothing. See also Linked Lists.
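To make the structure concrete, here is a minimal sketch of a singly linked list in Python; the class and method names are illustrative, not taken from any particular library.

class Node:
    # One element of the sequence: a stored value plus a reference to the next node.
    def __init__(self, value):
        self.value = value
        self.next = None        # the last node's next stays None ("points to nothing")

class SinglyLinkedList:
    def __init__(self):
        self.head = None        # only a reference to the first node is stored

    def prepend(self, value):
        # Insert at the front in O(1); no traversal is needed.
        node = Node(value)
        node.next = self.head
        self.head = node

    def __iter__(self):
        # Traverse by following next references until we run off the end.
        current = self.head
        while current is not None:
            yield current.value
            current = current.next

items = SinglyLinkedList()
for x in (3, 2, 1):
    items.prepend(x)
print(list(items))              # [1, 2, 3]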
152
Python Programming/Creating Python programs/Solutions. < Back to Problems 1. "Modify the hello.py program to say hello to a historical political leader (or to Ada Lovelace)." This is just a matter of replacing "world" with a different name, for example: print('Hello, Ada!') 2. "Change the program so that after the greeting, it asks, "How did you get here?"." This can be accomplished by adding a second print statement: print('Hello, Ada!') print('How did you get here?') 3. "Re-write the original program to use two print statements: one for "Hello" and one for "world". The program should still only print out on one line." This can be accomplished by passing the end parameter in the first print statement and telling it to finish the output with a space (Python 3): print ('Hello,', end=' ') print ('world!') In Python 2.6 and up, you can instead put a trailing comma at the end of a print statement to prevent it from inserting a newline; the comma makes print end the output with a space instead: print 'Hello,', print 'world!' If you write print with parentheses in Python 2 (e.g. Python 2.7.5), the trailing comma is placed outside the parentheses: print('Hello,'), print('world!')
307
Australian History/Early Explorers. ‹<br> Prior to the arrival in Australia of Captain James Cook, Australia has been the focus both of European mythology, as Terra Australis. Similarly, contrary to popular mythology, a number of non-Indigenous explorers have travelled to Australia prior to Captain Cook's 'discovery' of the continent. Terra Australis. The term "Terra Australis" was first introduced by Aristotle. Aristotle's ideas were later expanded by Ptolemy, a Greek cartographer from the first century AD, who believed that the Indian Ocean was enclosed on the south by land. When, during the Renaissance, Ptolemy became the main source of information for European cartographers, the land started to appear on their maps. Although voyages of discovery sometimes did reduce the area where the continent could be found, cartographers kept drawing it on their maps and scientists argued for its existence with such arguments as that there should be a large landmass in the south as a counterweight against the known landmasses in the Northern Hemisphere. Usually the land was shown as a continent around the South Pole, but much larger than the actual Antarctica, spreading far north in particular in the Pacific Ocean area. New Zealand, discovered by Abel Tasman in 1642, was by some regarded as a part of the continent as well as Africa and Australia. The idea of Terra Australis was finally corrected by James Cook. On his first voyage he circumnavigated New Zealand, showing it could not be part of a large continent. On his second voyage he circumnavigated the globe at a very high southern latitude, at some places even crossing the south polar circle, showing that any possible southern continent must lie well within the cold polar areas, and not in regions with a temperate climate as had been thought before. Key Explorers. Marco Polo and Cristóvão de Mendonça. In about 1300, Marco Polo made reference to the reputed existence of a vast southern continent, although there is no evidence that he had specific knowledge of Australia. Some writers have suggested that maps compiled in Europe from the late 1400s show parts of the Australian coastline. Some believe that Australia was sighted by a Portuguese expedition led by Cristóvão de Mendonça in about 1522. A number of relics and remains have been interpreted as evidence that the Portuguese reached Australia in the early to mid 1500s, 200 years before Cook. These clues include the Mahogany Ship, an alleged Portuguese caravel that was shipwrecked six miles west of Warrnambool, Victoria (although its remains have never been found); a stone house at Bittangabee Bay; the so-called Dieppe map, a secret map drawn by the Portuguese; a cannon and five keys found near Geelong. Most historians do not accept these relics as proof that the Portuguese discovered Australia. Binot Paulmyer. The French navigator Binot Paulmyer claimed to have landed at Australia in 1503, after being blown off course. However later investigators concluded it was more likely he was in Madagascar. French authorities again made such a claim in 1531. Luis Vaez de Torres. Luis Vaez de Torres was the first of the 17th century Portuguese maritime explorers. A Portuguese expedition commanded by de Torres and piloted by Pedro Fernandez de Quiros set out for Australia in 1605. They sailed from east to west along the southern coast of Papua, and sighted the islands of Torres Strait. 
When de Quiros landed on the New Hebrides, he named the island group "Austrialia del Espiritu Santo", translated as "South Land of the Holy Spirit". The first undisputed sighting of Australia by a European was made in 1606. The Dutch vessel "Duyfken", captained by Willem Jansz, explored perhaps 350km of western side of Cape York, in the Gulf of Carpentaria. The Dutch made one landing, but were promptly attacked by Aboriginals and subsequently abandoned further exploration. Dirk Hartog. In 1616 Dirk Hartog landed on what is now called Dirk Hartog Island, off the coast of Western Australia, and left behind an inscription on a pewter plate. (This plate may now be seen in the Rijksmuseum in Amsterdam.) The Dutch named the western half of the continent New Holland, but made no attempt to colonise it. Further voyages by Dutch ships explored the north coast of Australia between 1623 and 1636, giving Arnhem Land its present-day name. Abel Tasman. In 1642, Abel Tasman sailed on a famous voyage from Batavia (now Jakarta), to Papua New Guinea, Fiji, New Zealand and, on November 24, sighted Tasmania. He named it Van Diemen's Land, after Anthony van Diemen, the Dutch East India Company's Governor General at Batavia, who had commissioned his voyage. Tasman claimed Van Diemen's Land for the Netherlands. The discovery that sailing east from the Cape of Good Hope until land was sighted, and then sailing north along the west coast of Australia was a much quicker route than around the coast of the Indian Ocean made Dutch landfalls on the west coast inevitable. Most of these landfalls were unplanned. The most famous and bloodiest result was the mutiny and murder that followed the wreck of the Batavia. William Dampier. William Dampier first explored the north-west coast of Australia in 1688, in the Cygnet, a small trading vessel. He made another voyage in 1699, before returning to England. The first Englishman to see Australia, he was able to describe some of the flora and fauna of Australia, being the first to report Australia's peculiar large hopping animals.
1,362
Introduction to Philosophical Logic/Complex sentences and sentence functors. The "reductio ad absurdum" method can be extended to "complex sentences". A complex sentence is a sentence made up of smaller sentences, for example "Sarah can swim but she cannot dive" is made up of the declarative sentences "Sarah can swim" and "She cannot dive", linked by the conjunction "but". Such words and phrases that link other declarative sentences are "sentence functors". A "functor" is the part of a language that stands for a function. For instance "y=sin(x)" is a statement (in a particular mathematical notation) about the function sine and "sin()" is the functor that stands for this function in that statement. Function is not defined here. If you are unsure of its meaning, please refer to Wikipedia. A 'sentence function' takes declarative sentences as input and yields declarative sentences as output. Sentence functors stand for such functions. A sentence functor is then a string of words and sentence variables that becomes a declarative sentence if each sentence variable is replaced by a declarative sentence (here "string" just means a series of symbols). This is true for English, French, Spanish, German, Greek, Latin and so on, but all words and declarative sentences must be in the same language. The composition of sentences. Sentences comprise parts known as "constituents". A constituent will be defined as a string of symbols that is meaningful by itself. This definition may be regarded as slightly empty and the meaning of constituent is probably best understood intuitively and by example. The following are constituents of the sentence "The cat sat on the mat": "the" "cat" "the cat" "sat on the mat" "on the mat" "the cat sat on the mat" All of the above are meaningful in themselves. Words are meaningful by themselves and so all individual words are constituents of any sentence that contains them (words can be considered "atomic" parts of language, that is they cannot be broken down further into other constituents - inflections, prefixes and suffixes are ignored here). The meaning of each constituent shown is the same as its meaning as a part of the sentence. The meaning of the constituent must be the same as it meaning in the sentence of which it is part, otherwise (although a string identical to the constituent may appear in the sentence) it is not a constituent of the sentence. For example, consider the sentence "the man who wrote on the blackboard was old". The string "the blackboard was old" is meaningful, however its meaning is no part of the sentence above, which is asserting that the man, not the blackboard, was old. Hence, "the blackboard was old" is not a constituent of that sentence. Similarly, and perhaps less clearly, "the man" is not a constituent of the sentence. The meaning of "the man" is not the meaning conveyed in the sentence. Were "the man" to be a constituent, it would suggest that a particular man had already been identified. However, this is not the case since the restriction "who wrote on the blackboard" is needed. So, "the man" is not a constituent, but "the man who wrote on the blackboard" is. Ambiguity. An ambiguous sentence is one that has more than one meaning. Crudely speaking, there are two types of ambiguity: structural and lexical. A lexical ambiguity arises where one word (or perhaps a phrase) has more than one meaning; for example "fast" in the sentences "He drove very fast" and "He was fast asleep". 
A structural ambiguity arises in a constituent because it is unclear what that constituent's constituents are. To clarify this, the notion of "scope" is introduced. The "scope" of a constituent is defined as the smallest constituent containing that constituent and something else besides. So, in the above example, the scope of "blackboard" is "the blackboard"; the scope of the first "the" is "the man who wrote on the blackboard". Consider the sentence "He told me to be careful this evening". Was this warning discussed in this sentence issued this evening or was it about this evening? It is unclear exactly what the constituents of this sentence are: it is unclear what the scope of "this evening" is. Is the scope of "this evening" "to be careful this evening", or is it the whole sentence "He told me to be careful this evening"? It can be seen that a new language might be devised to clear up such ambiguities: "He told me [to be careful this evening]"; "[He told me to be careful this evening]". Propositional calculus is an entirely unambiguous language, using a bracketing system as shown here. This will be seen in the next part of the book. Sentence functors. To repeat the definition given earlier, sentence functors are strings of words and sentence variables such that if all the sentence variables are replaced by any declarative sentences, the whole becomes a declarative sentence. To fully understand this definition, it is necessary to know what a "sentence variable" is. A sentence variable is something (usually represented by the Greek letters psi, phi or chi) that can be assigned as its value any declarative sentence. One example of a sentence functor was discussed in the section on consistency: "it is not the case that phi" Other examples of a sentence functors are: "phi and psi" "I know that phi" "It is obvious that phi" "Either phi, or psi and chi" The following are not English sentence functors (consider whether the string obtained by replacing the sentence variable with a declarative sentence is in itself a declarative sentence): "Mary and phi" "Is it true that phi?" "phi is true" (but ""phi" is true" is a sentence functor) "Whomever phi should stand up for themselves" The last example forms a declarative sentence when phi is replaced by some declarative sentences (for example, if replaced with "Jack is bullying"), but not all (e.g. "the sky is blue"), so it is not a sentence functor. The "number of places" of a sentence functor is the number of different sentence variables it contains. "Either phi, or psi and chi" is a 3-place sentence functor. "Either phi or psi" is a 2-place sentence functor. "Either it is the case that psi or it is not the case that psi" is a 1-place sentence functor. An n-place sentence functor is "satisfied" by certain "ordered n-tuplets" of declarative sentences. An ordered n-tuplet of declarative sentences is a list of n different declarative sentences in a particular order. In particular, those ordered n-tuplets of declarative sentences that satisfy an n-place sentence functor yield a "true" declarative sentence when they, in order, replace the sentence variables of a sentence functor. The ordered pair (grass is green, snow is black) satisfies "it is the case that phi, but not that psi"; whereas the ordered pair (snow is black, grass is green) does not. A sentence is a sentence functor with no sentence variables, i.e. a sentence is a 0-place sentence functor. Truth tables. 
When determining what sentences satisfy what sentence functors, the logician is interested in their truth value (rather than the actual meaning or sense). This information can be summarised in the form of a "truth table". A truth table stipulates all combinations of truth for a given set of sentences and what the truth value of a sentence functor is for each combination. Consider the sentence functor "phi and psi". The truth table for this sentence functor is drawn up as follows: Notice that the letters P and Q are used rather than phi and psi. Sentence variables cannot bear truth values. P and Q instantiate actual sentences, the truth values of which are considered below these letters in the table: 'T' stands for when that sentence is true and 'F' stands for when it is false. Note also that "P and Q" is a sentence, not a 2-place sentence functor (sentence functors with at least one sentence variable cannot bear truth values). To know what the value of the declarative sentence yielded by replacing the sentence variables of the sentence functor with sentences of various truth values, the row (known as a "structure") containing the desired truth values is selected and the letter in that row below the complex sentence taken. In the above example, the sentence "P and Q" is true when P is true and Q is true but false for any other values of P and Q (so when P is true but Q is false, "P and Q" is false). Consider the truth table of the sentence functor "Hume knew that phi". The structure where P is false is false for "Hume knew that P", for it is not possible to "know" something that is false. However, the structure where P is true has the symbol "-" in it (this will be referred to as a blank). This symbol does not mean that the sentence is neither true nor false in this structure. It means that there are true sentences that satisfy this functor and there are sentences that do not satisfy this functor. For example, Hume knew that his (Hume's) first name was David. However, he did not know that Russell was (or rather would be from Hume's perspective) a 20th century philosopher.
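For reference, here is a compact rendering of the two tables described above, using the text's own conventions (T for true, F for false, and '-' for the blank); the layout is a sketch, not the original table.

\begin{array}{cc|c}
P & Q & P \text{ and } Q \\
\hline
T & T & T \\
T & F & F \\
F & T & F \\
F & F & F
\end{array}
\qquad
\begin{array}{c|c}
P & \text{Hume knew that } P \\
\hline
T & - \\
F & F
\end{array}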
2,151
Ruby Programming/Installing Ruby. The first step to get started in Ruby development is setting up your local environment. Due to differences between various operating systems we will cover multiple of them. If you are already able to use a terminal emulator and know how to install Ruby yourself, you can skip this chapter (after you installed Ruby). Otherwise we will guide you through the process of installing Ruby on your computer. Terminal emulators. Knowing how to use a terminal emulator is very useful if you are programming. Usually it will provide the most straightforward access to commands and applications instead of hiding it behind graphical interfaces. On the other hand they are often daunting to beginners since they are often perceived to require a deep understanding of a computer when in fact often only knowing the very basics is already enough to get started. Unix-like operating systems. One of the most commonly used shells in the Unix-like operating systems (i.e. macOS, GNU/Linux, BSD) is the shell, in fact it is very often the default shell. To start a session you will often use a terminal emulator, which allow you to use a terminal session at the same time as other graphical applications. It doesn't really matter, which terminal emulator you use, generally you want one that has color and Unicode support. In macOS you can use Terminal.app which you can find under Applications > Utilities. A popular alternative is iTerm. On most Linux distributions you will usually be provided with at least one terminal emulator by default, otherwise you might want to try Terminator, Konsole, rxvt-unicode or something different. When you open a new window or tab in your terminal emulator of choice you will be shown your prompt. What it looks like exactly depends a lot on configuration, which can vary greatly from OS to OS (you can configure everything to your likings, however this exceeds the scope of this short introduction). Generally it will indicate your "current working directory", "username" and "hostname". When working in the shell your session always has "current working directory". Commands that accept relative filenames will use that directory as the base directory to look for files. By default you are in your user's home folder, which is often abbreviated with a tilde (codice_1). To execute a command you just type it into the shell and press enter. At first we want to look at the command codice_2. If you type it in just like that it will print the files and directories in your current working directory. You can also provide a relative path to a directory you want to list, e.g. codice_3. If you want more detailed information about the files you can use codice_4, if you instead want to also include invisible entries (i.e. names starting with a dot) use codice_5. Of course it is possible to combine them both by running codice_6 or the short form codice_7. Note that this kind of concatenating multiple arguments into one is only possible with single character parameters. Parameters can also come in long form, for example the equivalent of codice_5 is codice_9. Which forms are available depends on the individual command. Now you might be thinking how to remember all parameters for every command you will ever use. Thankfully you only want to remember the most important ones, which are the ones you use most frequently, otherwise there is a nice way to look them up. You can either use the codice_10 command. For example run codice_11 to find more information about the codice_2 command. 
Oftentimes you can find a more concise summary by trying to run the command in question followed by the parameter codice_13, however this is not something you can expect to work well with every command, whereas manual pages should be always available. Back to the topic of current working directories. If you want to change your directory you can use codice_14 followed by the directory you want to change to. There are two special virtual directories '.' and '..'. The single dot refers to the current directory while the double dot refers to a dir's parent directory. So executing codice_15 changes into the parent directory of the current working directory. A very brief summary of other useful commands: codice_16: display the contents of a file. codice_17: create a directory. System-wide installation. A common and easy way to install Ruby is to perform a system-wide installation. Depending on the operating system installation procedures will be different (if required at all). Windows. Windows Operating System did not have Ruby programing language pre-installed (unlike other platforms listed here) . To install Ruby programming language, it is highly recommended to install it from here: https://rubyinstaller.org/ . Refer to right side bar: WHICH VERSION TO DOWNLOAD? for guide to download which version to download. Usually it will recommend the latest stable version to be downloaded. You might see the options of Ruby+Devkit installer version as a selectable component. This option is important as to build native C/C++ extensions for Ruby and is necessary for Ruby on Rails. Moreover it allows the download and usage of hundreds of Open Source libraries which Ruby gems (packages) often depend on. Download it and double click the file to be installed on the local PC. Once install it, double click of Ruby Installer to start installing it on Windows Step 1: Select "I accept the license" and click "Next" button Step 2: Select the directory that you wanted to install to and below , select the "Add Ruby executables to your PATH" and "Associate .rb and .rbw files with the Ruby installation". Click "Next" buttons" Step 3: Select all the checkboxes inside the setup files. Click "Next" Step 4: Click "ridk install" and click "Next" button to proceed Once finished installing, type cmd into Window search bar and type codice_18 sto see which version of ruby that are installed. If it is showing, congrats, you successfully installed Ruby language in the system. macOS. Ruby comes preinstalled on macOS. To check which version is installed on your system, execute codice_18 inside a shell session. If you want to install a more recent version of Ruby, you can: Linux. On many Linux distributions Ruby is installed by default. To check if Ruby is installed on your system, run codice_18 in a shell session. Where this is not the case, or you want to update the installed version, you should use your distribution's package manager. Here we will provide information for some popular Linux distributions here, however it is recommended to users of all distributions to familiarize themselves with their distribution's package manager, since this will allow for the most efficient software management. Whether this is a command-line or graphical application depends on the offerings of the distribution and personal preference of the user. Debian / Ubuntu. The package manager Synaptic provides graphical package management. 
It is installed by default under Ubuntu and has to be installed manually on Debian (by running codice_21 from the command line). Instead of using Synaptic you can also use apt directly from the command line (you can find further information in the Debian Wiki's article on Package Management). Execute codice_22 from the command line to install Ruby. Fedora. From the command line you can install Ruby with DNF by executing codice_23. Arch Linux. Use pacman to install Ruby by executing codice_24 as root. Mandriva Linux. On Mandriva Linux, install Ruby using the command-line tool urpmi. PCLinuxOS. On PCLinuxOS, install Ruby using either the graphical tool Synaptic or the command-line tool apt. Red Hat Linux. On Red Hat Linux, install Ruby using the command-line tool RPM. Per-user Installation. Per-user installations allow each user of the system to use their own particular version of Ruby without impact on other users. Guix. To install the latest available version of Ruby, run: codice_25. Setup Ruby in Windows. Ruby does not come preinstalled with any version of Microsoft Windows. However, there are several ways to install Ruby on Windows. Setup Ruby in Windows with Notepad++. With recent versions of Ruby and Windows 10, setting up Ruby is easier than ever. We will be using Notepad++ to set up Ruby in Windows. Once Notepad++ is installed, open the program and click Plugins > Plugins Admin. In Plugins Admin, select the NppExec plugin and install it. Once it is installed, you can run NppExec by pressing F6. Building from Source. If your distro doesn't come with a Ruby package, or you want to build a specific version of Ruby from scratch, you can install it by following the directions here. Download from here. Compile options. Building with debug symbols. If you want to install Ruby with debug symbols built in (and are using gcc, so either Linux, Cygwin, or MinGW), configure it as follows: ./configure --enable-shared optflags="-O0" debugflags="-g3 -ggdb" Optimizations. Note that with 1.9 you can pass codice_26 to have it build faster. To set the GC to run less frequently (which tends to provide a faster experience for larger programs, like rdoc and Rails), precede your build with $ export CCFLAGS=-DGC_MALLOC_LIMIT=80000000 though you may also be able to put those in as opt or debug flags. Testing Installation. The installation can be tested easily by executing: $ ruby -v This should produce an output similar to: ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux] If this shows up, then you have successfully installed Ruby. However, if you get an error similar to: -bash: ruby: command not found then you did not successfully install Ruby.
2,299
Ruby Programming/Interactive Ruby. When learning Ruby, you will often want to experiment with new features by writing short snippets of code. Instead of writing a lot of small text files, you can use irb, which is Ruby's interactive mode. Running irb. Run irb from your shell prompt. $ irb --simple-prompt The >> prompt indicates that irb is waiting for input. If you do not specify --simple-prompt, the irb prompt will be longer and include the line number. For example: $ irb irb(main):001:0> A simple irb session might look like this. $ irb --simple-prompt >> 2+2 => 4 >> 5*5*5 => 125 >> exit These examples show the user's input in bold. irb uses => to show you the return value of each line of code that you type in. Cygwin users. If you use Cygwin's Bash shell on Microsoft Windows, but are running the native Windows version of Ruby instead of Cygwin's version of Ruby, read this section. To run the native version of irb inside of Cygwin's Bash shell, run irb.bat. By default, Cygwin's Bash shell runs inside of the Windows console, and the native Windows version of irb.bat should work fine. However, if you run a Cygwin shell inside of Cygwin's rxvt terminal emulator, then irb.bat will not run properly. You must either run your shell (and irb.bat) inside of the Windows console or install and run Cygwin's version of Ruby. Understanding irb output. irb prints out the return value of each line that you enter. In contrast, an actual Ruby program only prints output when you call an output method such as codice_1. For example: $ irb --simple-prompt >> x=3 => 3 >> y=x*2 => 6 >> z=y/6 => 1 >> x => 3 >> exit Helpfully, codice_2 not only does an assignment, but also returns the value assigned to codice_3, which irb then prints out. However, this equivalent Ruby program prints nothing out. The variables get set, but the values are never printed out. x=3 y=x*2 z=y/6 x If you want to print out the value of a variable in a Ruby program, use the codice_1 method. x=3 puts x
610
Australian History/Preface. Authors should maintain the regular wikibook rules governing unbiased writing; the only other guidelines are that everything should be kept more or less in chronological order and divided into logical chapters. Everything else is left pretty much up to the individual authors who elect to join the project. There are some excellent examples of Wikibooks covering history, including Canadian History, and the Australian History Wikibook is modeled on these. The eventual aim of this project is to create a textbook which could, in printed form or online, be a useful resource for high school and university students studying Australian history, and could even be used (in printed form) as the main textbook for a high school or university subject. For more details on this, please see . More specific details for creating a Victorian textbook are available from the VCAA, including the Study design guide. As of the time of writing, some of the VCE "Australian History" course requirements were as follows:
225
Australian History/Authors. Australian History/Contents
10
Electronics/Superposition. Superposition Principle. Most basic electronic circuits are composed of linear elements. Linear elements are circuit elements which follow Ohm’s Law. In Figure 1 (a) with independent voltage source, V1, and resistor, R, a current "i"1 flows. The current "i"1 has a value according to Ohm’s Law. Similarly in Figure 1 (b) with independent voltage source, V2, and resistor, R, a current "i"2 flows. In Figure 1 (c) with independent voltage sources, V1 and V2, and resistor, R, a current i flows. Using Ohm’s Law equation 1 is reached. If some simple algebra is used then equation 2 is reached. But V1/R has a value "i"1 and the other term is "i"2 this gives equation 3. This is basically what the Superposition Theorem states. The Superposition Theorem states that the effect of all the sources with corresponding stimuli on a circuit of linear elements is equal to the algebraic sum of each individual effect. Each individual effect is calculated by removing all other stimuli by replacing voltage sources with short circuits and current sources with open circuits. Dependent sources can be removed as long as the controlling stimuli is not set to zero. The process of calculating each effect with one stimulus connected at a time is continued until all the effects are calculated. If kth stimulus is denoted sk and the effect created by sk denoted ek. The steps for using superposition are as follows: Note: the removal of each source is often stated differently as: replace each voltage source with its internal resistance and each current with its internal resistance. This is identical to what has been stated above. This is because a real voltage source consists of an independent voltage source in series with its internal resistance and a real current source consists of an independent current source in parallel with its internal resistance. Superposition Example. Problem: Calculate the voltage, "v", across resistor R1. Step 1: Short circuit V2 and solve for v1. By voltage divider rule. Short circuit V1 and solve for v2. By voltage divider rule. Step 2: Sum the effects. Using equations 5 and 6. If formula_9 and formula_10 then
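As a quick numerical illustration of the theorem for the single-resistor circuit of Figure 1, the short Python check below compares the current produced by each source acting alone with the current produced by both sources together; the component values are made up for the example.

# Superposition check for Figure 1: two ideal voltage sources in series with one resistor R.
V1 = 5.0     # volts (illustrative value)
V2 = 3.0     # volts (illustrative value)
R = 100.0    # ohms  (illustrative value)

i1 = V1 / R              # effect of V1 alone (V2 replaced by a short circuit)
i2 = V2 / R              # effect of V2 alone (V1 replaced by a short circuit)
i_total = (V1 + V2) / R  # both sources acting at once

print(i1 + i2)                              # 0.08
print(i_total)                              # 0.08
print(abs((i1 + i2) - i_total) < 1e-12)     # True: the individual effects superpose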
509
Modern Physics/Gravitational Red Shift. The red shift. Spacetime diagram for explaining the gravitational red shift. Light emitted at a lower level in a gravitational field has its frequency reduced as it travels to a higher level. This phenomenon is called the gravitational red shift. We can see why this happens by using the principle of equivalence. Being in a gravitational field is equivalent to being in an accelerated frame, so knowing how the doppler shift works in such a frame will tell us how it works in a gravitational field. We view the process of light emission and absorption from the unaccelerated or inertial frame, as shown in the figure above. In this reference frame the observer of the light is accelerating to the right, as indicated by the curved red world line, which is equivalent to a gravitational force to the left. The light is emitted at point A with frequency formula_1 by a source which is stationary at this instant. At this instant the observer is also stationary in this frame. However, by the time the light gets to the observer, they have a velocity to the right which means that the observer measures a Doppler shifted frequency formula_2 for the light. Since the observer is moving away from the source, formula_3, as indicated above. The relativistic Doppler shift is given by so we need to compute "U"/"c". The line of simultaneity for the observer at point B goes through the origin, and is thus given by line segment OB. The slope of this line is "U"/"c", where "U" is the velocity of the observer at point B. From the figure we see that this slope is also given by the ratio "X' /X". Equating these, eliminating "X" in favor of "L" = √("X"2 - "X"′2), which is the actual invariant distance of the observer from the origin, and substituting into the previous equation results in our gravitational red shift formula: If "X"′ = 0, then there is no redshift, because the source is collocated with the observer. On the other hand, if the source is located at the origin, so "X"′="X", the Doppler shifted frequency is zero. In addition, the light never gets to the observer, since the world line is asymptotic to the light world line passing through the origin. If the source is at a higher level in the gravitational field than the observer, so that "X"′ < 0, then the frequency is shifted to a higher value, i. e., it becomes a "blue shift". To see how this doppler shift relates to the strength of gravity, "g", and the distance "h" between the source and the observer, first note that Making these substitutions gives So the redshift is proportional to gravity. Since this doppler shift doesn't depend on the type of wave we can conclude that it is actually caused by time dilation, just like the doppler shift due to relative motion. That is, gravity slows down time. Energy and frequency. In equation 1 "gh" is the change in gravitational potential energy so the change in frequency is proportional to change in potential energy, which suggests there might be a connection with energy conservation. However, since we haven't yet established any connection between frequency and energy, we can't simply apply a energy conservation argument. Instead, we can argue in reverse, finding out what the energy-frequency relationship must be is energy is to be conserved. Suppose we have two identical systems, both at rest in a uniform gravitational field "g" with initial energy "E" separated by a vertical distance "h". 
The system has mass "E"/"c"2, giving it potential energy, so the total energy of the two systems is initially where the second term is due to the lesser potential energy of the lower system. The lower system emits a burst of waves, frequency ω, energy "E"(ω). When this reaches the upper system the waves have been red-shifted to frequency ω', energy "E"(ω'). This energy is absorbed by the upper system. The total energy is now Since we want to preserve energy conservation, these two equations must give the same result. Equating them, we get Comparing this with the doppler shift we see that which can only be true if "E"(ω) is proportional to ω I.e, energy conservation implies "energy is proportional to frequency", which is one of the axioms of quantum theory. We could equally well have started with the quantum result and proved the gravitational red shift must exist. Either theory requires the other for consistency. Since energy and frequency are each the temporal components of a four-vector, their being proportional implies the four-vectors themselves, and their spatial components, are also proportional. So, for waves, momentum is proportional to k. Remember too, we saw earlier, when we looked at Hamilton's equations, that classical mechanics would be equivalent to a theory of anisotropic waves, in the geometrical optics limit, if energy were proportional to frequency and momentum to wave number. This proportionality isn't just required for energy conservation, it would make possible a theory uniting waves and particles. None of this actually proves the proportionality, doing that requires experiment, but it does make it a natural assumptiom, which is indeed confirmed by experiment. Because of all this, from now on we'll assume that energy and frequency are related in this way, with the constant of proportionality being formula_12 Gravity and curvature. The gravitational redshift also implies that space is curved. We can see this by considering a rectangle in space-time. Without gravity, if we start at some point A, wait for time "t", then move at light-speed to the right for a distance "h", we get to the same place, B, as if we move at light-speed for a distance "h" then wait for time "t" at rest with respect to A. With gravity, if we follow the first path, we rise a distance of "ct" then wait for time "t". On the second path, we begin by waiting for time "t", "but this is dilated by gravity." To an observer at B we appear to be waiting for a time "t"(1+"gh"/"c"2) before we start, so we end up at B later than on the first path. Thus, with gravity, it matters which order we add distance vectors in. This can't happen if space is flat, so "space must be curved." To describe how it's curved we'd need the techniques of General Relativity.
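To summarise the quantitative results of this section in one place, assuming the weak-field, first-order approximation in gh/c² used throughout, and assuming that the constant of proportionality referred to above is the reduced Planck constant ħ:

\frac{\omega' - \omega}{\omega} \approx -\frac{g h}{c^{2}},
\qquad E = \hbar\,\omega,
\qquad \vec{p} = \hbar\,\vec{k}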
1,489
World History/The Renaissance in Europe. Fall of the Byzantines and Rise of the Ottomans. When the Ottoman Empire captured Constantinople in 1453 and renamed it Istanbul, the Byzantine Empire, the last remnant of Rome, was finally crushed; it had been declining in all areas of life for some time before it fell altogether. Long before this, the most successful Ottoman ghazi, Osman (1248-1326?), had come to power and begun to build the Ottoman Empire. Osman was a firm believer in Islam, and treated the people he conquered with respect, allowing them to worship as they wished.
132
Radiation Oncology/Cancer Syndromes. Cancer Syndromes
18
Conlang/Intermediate/History/Common sound changes. As time progresses and a language is often used, sounds start to change; different phonemes are used. The words get "smoothed" like gravel at a beach or in a desert. After centuries the stones will be smooth. "Sound changes", as they're called, are a major driving force of language change. Sound changes are born every time we speak. People rarely say any word perfectly; perhaps your <t> was a little too far forward, or your <u> wasn't rounded as much as it normally would be. Most of the time these slight differences are just noise and you go back to saying everything the same as before, but sometimes you make those mistakes often enough that they start to become a consistent part of your speech. Anyone who respects or admires you — even if it's just your group of friends — will start to subconsciously copy the way that you speak and that sound change will begin to spread. Sound change is nigh unstoppable. If you write two novels in the same setting in different periods of time using the same conlang, it's quite likely some sound changes will have happened, so you'll want to implement them. Sound change also has no memory. If at a certain point in time there are some sounds X in words, they all will change to Y even if some of them were W a few centuries ago while some have been X since the beginning of the language. Although it seems like sound change happens regardless of grammar, this is not necessarily true. As an example, some varieties of Brazilian Portuguese delete final in verbs, but not in nouns or nominalized verbs. The infinitive "poder" (can) is usually pronounced , but as a noun, "poder" (power) is pronounced , even colloquially. Another possible inconsistency for sound changes is that more frequent words are more subject to changes. In the case that two words would be pronounced the same if a certain sound change happens, one of the following things can happen: Common Sound Changes. Some kinds of sound changes are more common than others. Let's take a look at the most common ones. Assimilation. Assimilation is by far the most important sound change. Assimilation is when a sound changes to become more similar to the surrounding sounds. A consonant may change to match the place or type of articulation of an adjoining consonant. Lenition. Lenition is the "weakening" of sounds. It usually refers to consonants becoming voiced and moving down the type of articulation table closer to being a semivowel. It also refers to sounds that disappear altogether. Lenition is especially common intervocalically (between two vowels). Palatalization. Palatalization is the shifting of a consonant towards the palate. This is a common type of assimilation. Consonants can palatalize before or after a front vowel ([], []) or a palatal consonant ([]), perhaps ending up as an affricate or fricative. Velarization. Velarization is a secondary articulation of a consonant where the back of the tongue is raised towards the velum. In some languages, such as Russian and Irish, velarized consonants often contrast with palatalized consonants. Monophthongization. Monophthongization is the simplification of a diphthong (or triphthong) down to a single vowel. This feature was very common in Old French and Ancient Greek, leading some the diphthongs of these languages to be monophthongized. For instance, the French <ai> and <eau> are now pronounced [] and []; in Modern Greek, the combinations <ει> and <οι> are pronounced []. Nasalization. 
Vowels next to nasal consonants very often become nasal themselves. This is a type of assimilation. If a nasal consonant disappears, the mark it left on the vowel may remain, causing nasal vowels to become phonemic. Rhotacism. This is the change from // to a trilled //, which has occurred in various European languages. In Latin, // became // between vowels (lenition), and // then proceeded to become //. This causes alternations between // and // in some words' inflected forms: Flower is "flos" in the nominative singular, but "floris" in the genitive. You can regularise these sounds over time. Latin did this, so that original "flos" and "honos" became "flor" and "honor", to match their genitives "floris" and "honoris". Environments. Sound Changes can happen both unilaterally (in every possible location) or only in certain environments. For instance, a language may lenite a particular sound, only if it follows a particular consonant. When logging sound changes, a standardized notation is used, which looks something like this: [x] > [y] / [z]_ In this formula, the underscore indicates where our phoneme in question would be, and it can be read as "when [x] follows [z], it becomes [y]". This basic structure can be expanded for more tricky rules. For example [x] > [h] / #[V]_ Here we are indicating that [x] becomes [h] when following [V], where [V] is "any vowel". We have also added the hash to the second half of the equation, which indicates a word boundary (either initial or final). This means we can read this as "post-vocalic [x] becomes [h] in initial syllables only." Choosing sound changes. The basic idea here is that when you're making your conlang you should have in your mind a parent language (or proto-language) and a child language. The proto-language is going to be a conlang just as we have been making up until this point and should not have any history to it. The child language is going to contain all the history. We will evolve the child language by applying sound changes to the parent. The child language is the "result"; the language that you will present to other people, or put in your novel, or whatever other reason you conlang for. Before you begin, you may want to have some idea of the kinds of sounds that you want your child language to have. Create a rough draft of the phonology of the child language. Once you have that, you can start trying to change the phonology of the proto-language into this child draft by selecting sound changes and adding them to a list. It takes some practice to be able to do this well, so don't worry too much if the final product isn't exactly the same as your draft. Alternatively, you can decide not to worry too much about the final product and simply select sound changes randomly. You won't have much control over what you get, but you may get something interesting. Once you have a list of sound changes, you will want to go through the dictionary of the proto-language and apply those sound changes to every word there. Sound change appliers. You may have noticed that applying sound changes to words is quite a tedious process. To help with this, some conlangers have written computer programs called "Sound change appliers" that automate much of this work for you. The original and most famous sound change applier is the by Zompist. Sound change appliers are powerful and useful tools, but they can have trouble with certain kinds of changes. 
They can get confused by any change that needs to happen in particular syllables, such as syllable-based syncope, or any change where the environment spans multiple syllables, such as umlaut.
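As a toy illustration of the rule notation above, here is a deliberately simplified sound change applier in Python. It is not Zompist's program: it only handles single-character targets, and its environments are limited to a literal character, 'V' for any vowel, or '#' for a word boundary, on either side of the underscore.

VOWELS = set("aeiou")   # illustrative vowel inventory

def matches(symbol, char):
    # Does `char` (None means "no character on that side") satisfy an environment symbol?
    if symbol == "#":                       # word boundary
        return char is None
    if symbol == "V":                       # any vowel
        return char is not None and char in VOWELS
    return char == symbol                   # a literal character

def apply_rule(word, target, replacement, before=None, after=None):
    # Apply one change of the form  target > replacement / before _ after
    # (before/after may be None, meaning no condition on that side).
    out = []
    for i, ch in enumerate(word):
        prev = word[i - 1] if i > 0 else None
        nxt = word[i + 1] if i + 1 < len(word) else None
        if ch == target \
           and (before is None or matches(before, prev)) \
           and (after is None or matches(after, nxt)):
            out.append(replacement)
        else:
            out.append(ch)
    return "".join(out)

# [x] > [h] / V_   (x becomes h after a vowel; "saxo" is a made-up word)
print(apply_rule("saxo", "x", "h", before="V"))   # saho
# [o] > [] / _#    (word-final o is deleted)
print(apply_rule("saho", "o", "", after="#"))     # sah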
1,766
Prolog/Math, Functions and Equality. This section explains how to use math in prolog, using functions and equality. Numbers in Prolog. Prolog, like any other programming language, has a representation for numbers. Numbers are used like constants, and represented like they are anywhere on the computer, the following are valid ways of dealing with numbers in a predicate: a(12, 345). a(A) :- b(A, 234.3). a(345.3409857) :- b(23476.923804). To perform mathematical operations on numbers, we will need functions. To store the result of a mathematical operation in a variable, we will need to look more closely at equality. Functions. Until now, predicates have always represented a simple true or false. Predicate a(A, B) is true or false, depending on the values of A and B. Functions are predicates that represent a value. The sin() predicate, for instance, is a function. sin(0) represents the value 0 and sin(1) represents the value 0.841471. Functions can be used anywhere a number or constant can be used, in queries, predicates and rules. For instance, if the fact p(0). is in your program, the query ?- p(sin(0)). will unify with it. The following common mathematical functions are built in to most Prolog implementations: Note that functions themselves cannot be evaluated. the query ?- sin(3). will fail because sin() is implemented as function and not as a predicate. One difference between functions and predicates is that the meaning (or definition) of a predicate is usually defined by you, in your program. When you use functions like sin(), they've already been defined in your prolog implementation. In other words, prolog will not find the definition in your program, but in it's library of "built-in" predicates. It is possible to create your own functions, but that's something you will usually not need. Equality. There are several kinds of equality, with slightly different meanings. Here we will just look at the "=" operator and the "is" operator. First, look at: ?- A is 36/5. This query assigns the result of mathematical operation 36/5 to the variable A. So Prolog will answer: A = 7.2 This idea may be familiar from other programming languages. The = operator, however, is very different. It doesn't solve the right-hand side, but instead keeps it as a formula. So you get this: ?- A = 36/5. A = 36/5 Instead of assigning the result of the operation to the variable A, prolog assigns the operation to A, without evaluating it. You can see the same thing with queries. If you ask ?- (31 is (36-5)). You'll get "Yes", because the (36-5) is evaluated (solved). But if you ask ?- (31 = (36-5)). You'll get "No", because Prolog will compare a number (31) to a formula (36-5), rather than to the result of solving the formula. The is operator is meant specifically for mathematical functions. The left argument has to be a variable and the right argument has to be a mathematical function with all variables instantiated. The = operator is used for unification of variables and can be used with any two arguments (although it will fail if the two arguments aren't the same and can't be made the same by instantiating variables a certain way). Prolog knows many other ways of comparing two terms or instantiating variables, but for now, these two will suffice. When working with functions, we will almost always use the is operator. Math. Now that we know about functions and equality, we can start programming with math. plus(A, B, C) :- C is A + B. This predicate adds two numbers (A and B), and unifies the result with C. 
The following program is somewhat more complex. fac(0,1). fac(A,B) :- A > 0, Ax is A - 1, fac(Ax,Bx), B is A * Bx. This program calculates the factorial of A (A! in math notation). It works recursively. The first rule states that the factorial of 0 is 1. The second states that the factorial of a number A greater than 0 is the factorial of A-1 times A. Exercises. (1) What will prolog answer to the following queries (on an empty database)? Try to think of the answer yourself, and then use a prolog compiler to verify it. (2) Write a predicate called sigma, such that sigma(A,B,N) is true when N=A+(A+1)+(A+2)+...+(B-2)+(B-1)+B. In other words, formula_1. You may assume that A and B are integers with B>A. Test your predicate with queries such as: ?- sigma(4,9,X). X = 39 ; fail. ?- sigma(-7,-2,X). X = -27 ; fail. ?- sigma(-5,5,X). X = 0 ; fail. (3) The factorial program shown at the end of this chapter sins against one of the guidelines of using recursive rules. In the second rule: fac(A,B) :- A > 0, Ax is A - 1, fac(Ax,Bx), B is A * Bx. The recursive part is not the last predicate in the rule. Answers to Exercises. (1) ?- X = 1 + 2 + 3. X = 1 + 2 + 3. ?- X is 100/10. X = 10. ?- X is (14 + 16)/3, X + 3 = Y. X = 10. Y = 10 + 3. ?- X = 1000/100 + 5, Y is X. X = Y, Y = 15. (2) As usual, there is more than one way to solve this problem. Here's one way which uses recursion in a similar way to the factorial predicate. sigma(A,A,A). sigma(A,B,N) :- B>A, %What do you think would happen if you removed this line? Try it. Why does this happen? A1 is A+1, sigma(A1,B,N1), N is A+N1. Tail Recursive way: sigma(A,B,N):- add_1(A,B, A, N). add_1(B,B,N,N). add_1(A,B,Count,N):- B > A, A1 is A + 1, Count1 is Count + A1, add_1(A1, B, Count1, N). Prev:Lists Next:Putting it Together
1,704
Management Strategy/Five Forces. Michael Porter, one of the leading researchers in the field of business and a professor at Harvard Business School, has identified five key forces which affect the strategy of any industry. These forces are: the threat of new entrants, the bargaining power of suppliers, the bargaining power of buyers, the threat of substitute products or services, and rivalry among existing competitors. Managers use the Five Forces model to help identify opportunities or evaluate decisions in the context of the environment. Often, the Five Forces are mapped against a SWOT analysis to develop a corporate strategy. To complete a Five Forces analysis, it is often best to build a grid on a piece of paper and label each section, keeping Industry Competition (rivalry) separate. Fill in each section to develop a view of the industry. Then consider whether the industry is truly competitive, or whether it is a monopoly or oligopoly. What makes your company able to compete in this environment?
169
Organic Chemistry/Isocyanate. Organic isocyanates have the formula R-N=C=O where R can be either alkyl or aryl. They can be solids or liquids. Isocyanates are generally toxic and must be handled with great care, especially the more volatile isocyanates where inhalation is a primary route of exposure. Isocyanates are usually produced by reacting an amine with phosgene (COCl2): <chem>R-NH2 + COCl2 -> HCl + R-NH-(C=O)Cl -> HCl + R-N=C=O</chem> They are very reactive molecules, reacting with nucleophiles such as water, alcohols or amines. The reactions involve attack at the carbon of the isocyanate in a manner similar to that for carboxylic acid derivatives such as esters or anhydrides. Water reacts with isocyanates to give a carbamic acid. This reaction is catalyzed by tertiary amines. Carbamic acids are unstable and decompose to carbon dioxide and an amine, R-NH2: <chem>R-N=C=O + H2O -> R-NH-(C=O)-OH -> RNH2 + CO2</chem> Alcohols react with isocyanates to give carbamates, which are also known as urethanes. This reaction is catalyzed by tertiary amines or salts of metals such as tin, iron and mercury. <chem>R-N=C=O + R'-OH -> R-NH-(C=O)-O-R'</chem> Primary and secondary amines react with isocyanates to give substituted ureas. <chem>R-N=C=O + R'R"NH -> R-NH-(C=O)-NR'R"</chem> R' and R" can be H, alkyl or aryl. Isocyanates also react with themselves to give dimers and trimers.
528
Italian/Grammar/Pronouns. General. Personal pronouns are short words that replace persons or things: he, she, they, it, me, her etc. Personal pronouns can play various roles. For instance, in the sentence "I eat cake", the word 'I' is a subject, but in the sentence "That lion wants to eat me", the word 'me' is the object. Other pronouns (not personal) also replace nouns, with a more specific usage. For instance, "this" can replace a noun, with a meaning similar to "it" (or "he"/"she"), e.g. in the sentence "this is good for you". More information about pronouns, subjects etc. in English can be found in a different place. Here it is assumed that sufficient knowledge is available. Subject Personal Pronouns / Pronomi personali soggetto. List of Subject Personal Pronouns: Singular: io - "I" - Note: no capital letter required tu - "you" - Note: used to address one person egli - "he" - Note: not used in common talk lui - "he" - Note: colloquial, normally it replaces "egli" ella - "she" - Note: not used in common talk lei - "she" - Note: colloquial, normally it replaces "ella" esso - "it" - Note: masculine, little used in common speech essa - "it" - Note: feminine, little used in common speech lei - "you" - Note: special use; sometimes with capital initial Plural: noi - "we" voi - "you" - Note: used to address two or more people Voi - "you" - Note: same as 'voi' with lower case initial, but more formal (compare to 'Lei' in the singular) essi - "they" - Note: masculine esse - "they" - Note: feminine loro - "they" - Note: both masculine and feminine, colloquial, normally it replaces "essi" and "esse" Loro - "you" - Note: special use, very uncommon, capital initial The pronouns for the 1st person (singular: io, plural: noi) do not need special explanations. The pronouns for the 2nd person (in English "you" both singular and plural) have a usage far more varied than in English. Tu is addressed to one person only (singular) and is related to the older English "thou". It is felt to be informal. It is used with members of the same family (e.g. father, mother), with relatives (e.g. uncles), with children, with friends, and, in modern usage, with work colleagues. It is often used with boys and girls, and sometimes it is used with other persons in order to create a friendly atmosphere. In the other cases (especially with grown-up people that are not friends or relatives) it is replaced by the pronoun lei (sometimes written with a capital letter). This pronoun literally means "she" and its usage is similar to English sentences including "His/Her Majesty". For the same purpose the pronoun Ella can also be used. This is felt today as obsolete and is used rarely only in writing. When two or more people are addressed, voi is used both in formal and informal language - that is with relatives, friends and other people. The plural for "Lei" does exist, it is Loro (capital initial), but it is nowadays little used. Instead of Loro, Voi is used, which is the same as voi but with a capital letter. The pronouns for the singular 3rd person (in English "he"/"she"/"it") take into account the gender of the replaced noun, which in Italian can only be masculine or feminine. When referring to a person, "he" is translated as egli (proper language, rarely used in speech) or lui (colloquial), "she" as ella (rare in speech) or lei (colloquial). For animals or things, "it" is translated as esso when the noun is masculine (e.g. "lago", lake) and as essa when it is feminine (e.g. "barca", boat). 
Plural also takes into account the gender, so "they" is translated as essi (masc.) or esse (fem.). In colloquial talk "they" is usually translated as loro. It should also be remembered that pronouns are required less often than in English because verb forms change for every person. It's usually better to say "Sono felice" ([I] am happy) than "Io sono felice" (I am happy). Missing subject pronouns. In many languages, including English, French and German, the subject of the verb must be expressed. Pronouns are widely used to avoid repeating nouns for this purpose. In Italian the subject is often omitted, as the verb can give sufficient information. Personal subject pronouns are far less used than in English. They are used only when there is a need for clarity or a wish to emphasize the pronoun itself. Examples of missing subjects: Piove. "It is raining." No need to mention "it". Vengo subito. "I come soon". No need to mention "I". Dove vai? "Where are you going?" No need to mention "you". Dove andiamo? "Where are we going?" No need to mention "we". Example of special uses: Credo di sì. "I think so". Io credo di sì. "I do think so". The pronoun makes the assertion more emphatic. Object Pronouns. 1) In most cases the direct and indirect object pronouns are placed before the verb: Ti vedo. "I see you. One exception: "loro" as an indirect object is placed after the verb. Additionally, a few forms of the verb always have pronouns after them, but then they are postfixed (ie. written onto it as one word). These forms are the infinitive (dropping the final 'e', eg. "vederlo"), the gerund and the past participle (but only when used alone, not as part of a perfect tense). For example 'I want to see him' is "Voglio vederlo and not "voglio lo vedere". Note that stress does not change when pronouns are appended, ie. "scrivere" => "scriverlo" and not "scriverlo" 2) Note that 'Lei' and 'Voi' when indicating formal 'you' are capitalised, regardless of inflection (eg. "Perché non Le piace questo film?. "Why don't you like this movie? ). This rule also applies to postfixed 'Lei' and 'Voi', but there they may also occur in lower case (eg. I can see you. "Posso vederLa. "or "Posso vederla. ") 3) When two pronouns (or a pronoun and "ci" (there) or "ne" (form there/it)) are combined, the order is "indirect + direct + ci/ne". You will never encounter all three together, but for example if you have an indirect and a direct object, then the indirect will precede the direct. Additionally the preceding form changes: For example, 'He gives it to us' is Ce lo dà". "Glie" is affixed to the next word, so 'I give it to him/them' becomes Glielo do. " Another example is the verb "andarsene", which means to go away ("andare-si-ne", lit. to go (oneself) from the place). It is conjugated "me ne vado, te ne vai, se ne và, etc." As you can see in "andarsene" the postfixed pronouns are all written as one word. This is the rule for all such combinations, ie. "Voglio dartelo. "I want to give it to you. (="dare-ti-lo") 4) The prepositional forms are obligatory after prepositions: "Vado da lui stasera. "I'm going to his place tonight. They are also placed after the verb instead of the direct object to emphasize the pronoun (eg. "Ho visto te. "It is you I saw. - This is somewhat uncommon). They can replace the indirect object in a similar fashion, which is much more common, but these pronouns require the preposition "a" when indicating the indirect object (eg. "Lo dirò a te prima che lo dica agli altri. 
"I'll tell you before I tell any of the others. - As you can see in this sentence, nouns such as "gli altri" also need the preposition "a") Possessive Pronouns. Possessives, like articles, must agree with the gender and number of the noun they modify. Hence, "mio zio", my uncle, but "mia zia", my aunt. So depending on what is being modified, the possessive pronouns are: In most cases the possessive adjective must be used "with" the definite article: The only exception is when the possessive refers to a family member in the singular: But in the plural: However, 'loro' always takes the definite article (also note that 'loro' does not change, for example the female form is 'loro', not 'lora' or something like that): Personal pronouns can also be used on their own: To say 'one of my', 'one of your' etc. you use a personal pronoun with the indefinite article: And finally, possessives may be placed behind the noun it refers to in poetical language and specific expressions. If the definite article is used at all, it remains before the noun. Relative Pronouns. Relative pronouns are just pronouns, like those before this section but they refer to something relative to the context of the sentence or the situation. the pronoun "I" will always refer to myself while the pronoun "she" will always refer to some "her". A Relative pronoun can refer to a person, a thing or a situation. In English these pronouns are who, which, that, whom, where. Demonstrative Pronouns. There are only two demonstrative pronouns: questo and quello. Pay attention to how these pronouns change in gender and number. 'Questo' and related forms always use their full form. 'Quello' uses the full form as described above when on their own (for example, "Questo è rotto." This one is broken.), but has a shorter form when used with a noun ("Quel problema non si può risolvere." That problem can't be solved.) The form is dependent on the article you would use with the noun: For example, you would say "gli uomini", so you say "quegli uomini". These forms are also used when followed by a relative pronoun: "Quel che voglio" = What I want / The one that I want / That which I want.
2,588
Applicable Mathematics/Odds. "Odds" is a way of expressing the likelihood of an event. The more usual way of expressing the likelihood of an event is its "probability" (the percentage of future trials which are expected to produce the event: so in tossing a coin believed to be fair, we would assign a probability of 50% (or one half, or 0.5) to the event "heads"). The ODDS of an event, however, is the ratio of the probability of the event happening to the probability of the event not happening (i.e. the ODDS of a fair coin landing heads is 50%:50% = 1:1 = 1). It is the ODDS we are using when we use a phrase like "it is 50/50 whether I get the job" or "The chances of our team winning are 2 to 1". For example, when rolling a fair die, there is one chance that you will roll a 1 and five chances that you will not. The odds of rolling a 1 are 1:5, or 1 to 5. This can also be expressed as formula_1 or 0.2 or 20%, but these forms are likely to be misunderstood as normal probabilities rather than odds. To convert odds to probability, you add the two parts, and this is the denominator of the fraction of your probability. The first part becomes the numerator. Thus, 1:5 becomes 1/(1+5) or 1/6. To convert probability to odds, you use the numerator as the first number, then subtract the numerator from the denominator and use it as the second number. Thus, 1/6 becomes 1:(6-1) or 1:5.
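In general, if the odds of an event are a : b, then its probability is p = a / (a + b); conversely, a probability p corresponds to odds of p : (1 - p), which can also be quoted as the single number p / (1 - p). Checking this against the die example above: odds of 1:5 give p = 1/6, and p / (1 - p) = (1/6) / (5/6) = 1/5, which matches the conversion rules stated in this chapter.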
783
Conlang/Beginner/Writing. OK, you have your sounds and words, but you think that this alphabet we use is boring? Well, you're in luck! This page will give you a few hints and tips about creating your own writing system, commonly called a "conscript". Types of script. To make an interesting writing system the last thing you want to do is copy the Roman script. So, before you actually start to create any glyphs (symbols), you will need to decide what kind of script you would like to make. Scripts are divided up by how many sounds are expressed per glyph. Naturally, you are not limited to just these types. You can modify or combine them to fit your language and style. Glyphs. Now that you have chosen the type of script you want, you get to design the glyphs themselves. Here are a few things to keep in mind: Writing Direction. Okay, now you have your sounds and your symbols but you have one more thing to figure out: which way the words will be written. You then need to find out which way the lines will go. For example, Japanese is sometimes written top to bottom with vertical lines going right to left. Arabic goes right to left but the lines go top to bottom horizontally. Chinese and Mongolian are both "traditionally" written top to bottom but with vertical lines going left to right and right to left respectively. Some writing systems like Ogham are sometimes even written in a circle. Now you can write in your own script!
338
Active Server Pages/Basic ASP Syntax. = Objectives = In this section you will learn about the different parts of Active Server Pages scripting. We don't want to get too deep in discussion of how to write ASP scripts, but instead just want to introduce the different elements of ASP scripting. = Content = Active Server Pages or ASP is a scripted language which means that it doesn't have to be compiled in order to run. Instead, the program code is parsed, interpreted and evaluated in real-time. This means that you do not have to compile code in order to execute it. Code is executed by placing it within a web page on a Web server. ASP was developed by Microsoft and because of this, is only completely reliable on Microsoft's IIS (Internet Information Server.) We encourage you to use Microsoft IIS to develop Active Server Page code. Statements. A statement is like a sentence in English, it describes one task for the ASP interpreter to perform. A block of code is made up of multiple statements strung together on a web page. There are various types of statements which we will talk about in the future including: "(flow) control, output, assignment, and HTTP processing. In Active Server Pages, it is important to realize that each statement must appear on a line by itself. There is no special delimiter that indicates the end of an ASP statement. Unless you consider the carriage-return as the delimiter, which you really shouldn't. You can use the colon character ":" within a program block to place multiple statements on the same line such as: <% Response.Write "Error" : Response.End In practice, it is best to avoid such programming constructs since it will make your code harder to read and maintain. Another way to include more than one statement on a line is by using multiple code blocks like in the following: <% If I < 0 Then %>Negative<% End If %> Here, you have two different statements (even though they look like one statement) broken into two different code blocks. Active Server Pages allows this construct on your web pages since it is sometimes necessary to write code like this to eliminate white space that appears on your web page. Variables. One of the hallmarks of a good scripting language is that the variables in the language are loosely typed. This means that there is minimal type-checking done on the values of the variables your program code uses. Active Server Pages was built correctly as a scripting language to be loosely typed. This allows you, as the programmer, to worry less about what type a variable is which frees you from a lot of work converting values and testing for the correct type. You are not required to declare your variables before you use them (or even at all if you so wish). The one exception to this rule is when the page directive "Option Explicit" is in effect. If you place the following at the top of your ASP page, you will be required to declare all variables: <% Option Explicit %> If you don't want this requirement in all your scripts, you can leave it out. It is a useful tool to throw onto a page to check for mis-spelled or mis-used variable names. If you mis-spelled a variable, you will likely see "variable not defined". Variables are declared with a Dim statement and can include a single variable at-a-time or a comma-delimited list of variable names like shown in the following example: <% Dim sPhoneNo Dim sFirstName, sMidleInitial, sLastName There is no such thing as a global variable in the ASP language. 
All variables are specific to the web page they are processed in. Once the page is done processing the values are lost. One exception to this is the Application object. It can hold a number of different values, each of which is associated with a string value. Another way is to pass variables in a web form or part of the URL in the Request object. More information on these topics will be covered later. Comments. Comments are notations you can make on your web page within an ASP script block that will not be output on the web page. For all intents and purposes, they are hidden from your site visitors since they cannot view the ASP source code for your web pages. This allows you to make comments about what your code is doing. Active Server Pages uses the quote character (') to define comments. This comment is a "line comment" meaning that it will comment out everything following it up until the end of the line. There is no multi line comment in ASP. In order to comment multiple lines, each line must be preceded by a quote ('). <% ' example comment - show the current date Response.Write Now() Server-Side Includes (SSI). Server-Side Includes allow you to retrieve code from one web page and insert it into another page. This allows you to create common pieces of code and HTML which only need to be maintained in one place. Other web pages can include this content within their pages using this server directive. If you are using server-side include directives, you cannot place ASP code within the directive. This is because the directive is evaluated before the ASP code is interpreted. So trying to do something like the following is illegal: <!-- #include virtual="/lib/<%= sLibName %>" --> But this doesn't mean that you can't place ASP code inside an included file. So if you include a file called /lib/common.asp, you can place ASP code inside that file (just like you would a normal ASP page) and it will be evaluated by IIS and the results will be placed in the calling page. An interesting note is that your main page can use a ".html" extension and include (via SSI) an ASP page and the included file will be interpreted and processed as ASP and included in the HTML page. The two different types of Server-Side includes allow you to retrieve and include files based on the absolute path (based on the Web site root) or relative to the calling page as shown below; <%' absolute path - based on web site root %> <!-- #include virtual="/pathtoinclude/include.asp" --> <%' relative path - based on folder of calling page %> <!-- #include file="../../folder1/include.asp" --> There are benefits to using both. A good rule of thumb is this: if you expect you will be moving entire folders (or trees) of your web application then relative paths are more appropriate. If you expect things to stay pretty much where you created them then you should use absolute paths. Please note: When using IIS6 (Windows 2003 Server), relative paths are not enabled by default, which will cause any Server-Side includes using relative paths to cease functioning. = Review Questions = = Exercises = Career Fair or Exhibition = External Links =
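A short worked example of the include mechanism described above (the file and procedure names here are invented purely for illustration): suppose /lib/common.asp contains a shared procedure,
<%
Sub WriteFooter()
    Response.Write "<p>Copyright " & Year(Now()) & "</p>"
End Sub
%>
and a page such as /pages/about.asp pulls it in with a virtual include:
<!-- #include virtual="/lib/common.asp" -->
<h1>About Us</h1>
<% Call WriteFooter() %>
Because the include directive is processed before any ASP code runs, WriteFooter is available to the calling page exactly as if it had been typed there.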
1,580
Active Server Pages/Variable Types. = Objectives = In this section you will learn about variables in Active Server Pages. This includes how they are declared, their scope, the different primitive types and naming conventions. After studying this section, you should have a firm grasp of the types of variables available to you and how to work with different types in a web application. = Content = All variables in ASP are loosely typed. This means that you don't need to declare variables with a type. It also means that you can assign the value of any variable to any other variable. There are a few exceptions to this rule as we will see. Basically, every variable in ASP has the major type of Variant. It is "variant" meaning that the type of value it holds can vary. The ASP interpreter will handle all conversion between types automatically when you group more than one variable or type together in a single expression. Variable Scope. There are three different scopes for variables used in ASP. The first is page scope, meaning that the variable is available for the entire duration of the page's execution. These are typically declared at the top of the page. The second is procedure scope: variables declared inside a procedure (Sub) or function declaration. The third is class scope: variables declared within a class definition, which is used for object-oriented programming.
<%
Dim sStr1 ' page-scoped variable
Sub DoSomething
    Dim mStr2 ' procedure-scoped variable
End Sub
Function GetSomething
    Dim mStr2 ' function-scoped variable
End Function
Class clsTest
    Private msStr4 ' class-scoped variable
    Public msStr5 ' class-scoped variable
End Class
%>
Page-scoped variables will be visible to all code included on your web page including procedures and functions. You can even access page-scoped variables within a class although good object-oriented design would strongly discourage this practice. Procedure-scoped variables will only be visible within the procedure in which they are defined. It is perfectly acceptable to use the same variable name for a page-scoped variable and a procedure-scoped variable. Keep in mind that when you do this, you will hide the page-scoped variable and will have no way to access it within the procedure. Class-scoped variables can only be accessed through an instance of the class. There is no notion of static variables or static methods in the simplified ASP scripting language. You will need to use the ObjectName.VariableName syntax in order to access member variables directly. Only variables declared Public can be accessed in this way. Private class variables can only be accessed by code defined within the class. More information about using classes in Active Server Pages will be discussed later. Valid Variable Names. To be valid, an ASP variable name must follow these rules: it must begin with a letter, it cannot contain an embedded period, it cannot be longer than 255 characters, and it must be unique in the scope in which it is declared. Variable names, just like statements, are case-insensitive. This means that if you declare a variable with the name "myVar", you can access it using "MYVAR" or "myvar" or even "MyVaR". One way to avoid using reserved words and make it easier to identify what type of variables you are using is to use a naming convention. Usually the first letter or first few letters of the variable name indicate what type of information it holds. Examples: s for strings (sPhoneNo, sQuery), n for integers (nAge), f for floating-point numbers (fFloat), b for booleans (bCanVote), d for dates (dDate) and o for objects. Following a naming convention can help avoid messy code, and leads to good programming habits. It also allows somebody else to work with your code and help debug it or modify it.
Along with a naming convention, use comments as well to document what the variable is to be used for in the ASP page. Examples: You will learn more on naming conventions later on in this document. Declaring Variables. To declare a variable, you use the Dim statement for page and procedure-scoped variables. After the Dim statement, you can put a comma-separated list of variable names. If you prefer, you can just put one variable on each line. This allows you to add comments about each variable you declare. <% Dim sAction ' form action Dim sQuery ' database query Dim I, J, K You should place your variables in a consistent location on the page or within your procedures. Typically, most people place the declarations at the very top (beginning) of the page or the top of the procedure. Others think it makes more sense to place the variable declarations right above the location where they are first used. Primitive Types. When we talk about primitive types, we are talking about low-level variables types that cannot be broken down into smaller primitive types. Basically, these are the "built-in" variables that Active Server Pages understands. These are sometimes referred to the "sub-type" of the variable since the major type of the variable is "variant". Each variable may have one of the sub-types shown in the following table. If you have used other programming languages, then these primitive types will be very familiar to you. Special Values. In addition to the values shown above, the following special values may also be assigned to Active Server Page variables: The empty value is the default value for newly declared variables (never assigned any value.) If tested in a boolean context, it is equivalent to "false" or tested in a string context it is equivalent to the empty string ("") Conversion and Verification. You don't need to concern yourself much with setting the type of the variable. This is done pretty much automatically for you. In the event that you want to set the type that is assigned to a variable, you can use the ASP conversion function that corresponds to each type. Also, there are verification functions available to make sure a variable is compatible with the specified type. This doesn't mean that the variable has that sub-type. Rather, it means that the variable can successfully be converted to the type. Literal Values. Literal values are values you insert into your ASP code and use in expressions. By itself, a literal value has no use, but when used in a statement such as an assignment or output statement <% ' examples of literal values Dim sString, nInt, fFloat, dDate, bBool sString = "Hello There" nInt = 1234 fFloat = -25.324 dDate = DateSerial(2004, 10, 28) bBool = True You will notice that there is no way to specify a date literal in ASP. Therefore, you need to use a function call to build a date value using DateSerial or you can use Now() to get the current date and time. The value for a literal value is bound by the limits specified in the table for primitive types above. If you attempt to use a value that is outside the acceptable range for any of the types, you will receive an error message indicating this fact and your ASP page will terminate execution. Constant Definitions. In some cases, you might want to create "named constants" instead of using literal values all of the time. This is particularly useful if you create a common include file that will be used in multiple scripts on your site. 
You define constants using the Const keyword like so: <% Const PI = 3.141592654 Const RECS_PER_PAGE = 25 Const FSO_FORREADING = 1 Const FSO_FORWRITING = 0 It is common practice to use all uppercase variables for your constant definitions although this is not required. It simply makes your code easier to read and maintain. Naming Conventions. A naming convention is a standard way of naming all of the variables used on your site. There are various different methods used for naming variables and a whole chapter could be devoted to this subject. One of the most popular is Hungarian notation where the first letter of the variable name indicates the dominant sub-type for the variable. The Microsoft standard proposed for Visual Basic is to use a 3-letter naming convention for their variables. This is a little more clear for reading and maintaining code. Some would consider this overkill for a light-weight scripting language. = Summary = = Exercises =
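As a starting point for the exercises, here is a small sketch that pulls together several ideas from this chapter: a named constant, Hungarian-style variable names, and the conversion and verification functions (the form field name "age" is our own example, not taken from the text):
<%
Const VOTING_AGE = 18
Dim sAge, nAge
sAge = Request.Form("age")   ' form values always arrive as strings
If IsNumeric(sAge) Then
    nAge = CInt(sAge)        ' convert to the Integer sub-type
    If nAge >= VOTING_AGE Then
        Response.Write "You are old enough to vote."
    End If
Else
    Response.Write "Please enter a number."
End If
%>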
1,782
Linguistics/Phonology. The human brain is an amazing piece of work. Every time you utter a sound, or hear one, there are dozens of things that happen subconsciously and take the sound and reduce it to one of several distinct sounds that we use in our language. The problem is that these distinct sounds are different in different languages. When you come into a new language and you hear a sound you're not used to, you automatically try to fit it into one of your previous categories of sounds. This can cause interesting problems. Let's illustrate this with a (slightly-hypothetical) analogy. There is one group of people from the Land of Men, and another from the Land of Women. In the Land of Men there are only a few colours: red, blue, brown, yellow, pink, green, and a few more. In the Land of Women, however, there are many more: chartreuse, magenta, terracotta, viridian, lavender rose, etc. Whole books could be written about the colours in the land of women, and indeed, some have. When the men visit the Land of Women, they have no end of trouble. You see their road signs are colour-coded. The women have no problem with this. Their stop signs are rust-coloured and their yield signs are painted in auburn. Now the men, they look at both of these colours and see brown. So as far as they can tell, all stop signs are brown in the Land of Women; however, sometimes women will stop at the stop signs and sometimes they drive right through. Obviously the women must be terrible drivers. Likewise, the women notice the men have an annoying habit of always stopping at yield signs. Similarly, speakers of different languages compartmentalize the sounds they hear in words into different categories. For instance, in English the words 'toe' and 'so' are distinguished by their initial consonants: 'toe' begins with the sound /t/ while 'so' begins with /s/. However, many speakers of the language Tok Pisin do not differentiate between these sounds, and they may be interchanged without changing the meaning of words (e.g. [tupu] or [supu] for the word "tupu", meaning 'soup'). Thus knowing how languages classify sounds is at least as important as knowing what sounds they use in the first place. We can speak of a language's phonology as being how it carves up the acoustic space into meaningful units. This is an area of study practiced by "phonologists". Phonemes. The basic unit of study of phonology is the phoneme, which may be defined as sets of phones which function as one unit in a language, and provide contrast between different words. In other words, a phoneme is a category that speakers of a language put certain sounds into. For instance, returning to the Tok Pisin example above, the sounds [s] and [t] would both belong to the phoneme /t/. (In the IPA, phonemes are conventionally enclosed in forward slashes //.) As another example, try pronouncing the English words "keys" and "schools" carefully, paying close attention to the variety of [k] in each. You should find that in the first there is a noticeable puff of air (aspiration), while in the second it is absent. These words may be written more precisely phonetically as [kʰiz] and [skulz]. However, since aspiration never changes the meaning of a word, both of these sounds belong to the phoneme /k/, and so the phonetic representations of these words are /kiz/ and /skulz/. It should be evident why it is appropriate to refer to the phoneme as a level of abstraction away from the phone. 
We have removed a layer of information which, while interesting in itself, does not interact in many aspects of a language. The phonemic inventory of a language is the collection of phonemes in a language. We looked at English's in the last chapter. Allophony. Two phones are called allophones if they belong to the same phoneme. For instance, in Tok Pisin [t] and [s] are allophones of /t/, and in English [k] and [kʰ] are allophones of /k/. Allophones are often conditioned by their environment, meaning that one can figure out which allophone is used based on context. For example, in most varieties of American English, the English phoneme /t/ is realized as a tap [ɾ] between vowels in normal speech when not preceding a stressed vowel, for example in the word "butter". In a case like this we can say that the plosive [t] and tap [ɾ] allophones of the phoneme /t/ are in complementary distribution, as every environment selects for either one or the other, and the allophones themselves may be referred to as complementary allophones. Similarly [k] and [kʰ] are in complementary distribution, as [k] mainly occurs in the sequence /sk/, while [kʰ] occurs elsewhere. By contrast, allophones may sometimes co-occur in the same environment, in which case they are in free variation. For example, the English word "cat"‍'s word-final /t/ phoneme may be realized either with an audible release, or as the tongue held in the gesture without being released. These phones, notated as [t] and [t̚] in the IPA, are free variants, as either is allowed to occur in the same position. Similarly [s] and [t] are free variants for some speakers of Tok Pisin. Minimal pairs. An important question which may have occurred to you already is: how can we tell what is a phoneme? One of the most robust tools for examining phonemes is the minimal pair. A minimal pair is a pair of words which differ only in one segment. For example, the English words "do" /du/, "too" /tu/, "you" /ju/, "moo" /mu/ all form minimal pairs with each other. In a minimal pair one can be sure that the difference between the words is phonemic in nature, because the segments in question are surrounded by the same environment and thus cannot be allophones of each other. In other words, they are in contrastive distribution. This is not a foolproof tool. In some cases it may by chance be impossible to find a minimal pair for two phonemes even though they clearly contrast. In many cases it is possible to find near-minimal pairs, where the words are so similar that it is unlikely that any environment is conditioning an allophone. Finally this also requires some common sense, since phonemes may be in complementary distribution without being likely allophones. For instance, the English phonemes /h/ and /ŋ/ (both occurring in the word "hung" /hʌŋ/) can never occur in the same environment, as /h/ is always syllable-initial and /ŋ/ always syllable-final. However few would suggest that these phonemes are allophones. Since English speakers never confuse them, they are auditorily quite different, and substituting one for another in a word would render it unintelligible. Unfortunately there is no hard-and-fast consensus on precisely how to be sure sounds are allophones or not, and in many languages there is vigorous debate. Phonological Rules. Phonotactics. Phonotactics are the rules that govern how phonemes can be arranged. 
Look at the following lists of made-up words: The first three are 'unpronounceable' because they violate English's phonotatic constraints: 'pf' and 'dchb' aren't allowed at the start of a syllable, while 'bg' isn't allowed at the end. The next three are nonsensical words, but they do not violate phonotactics, so they have an 'English-like' feel. Lewis Carroll was particularly skilled in the art of creating such words. Some of his creations were immortalised in his poem "Jabberwocky". Here are a couple of stanzas from his famed work: Note that different languages have different phonotactics. The Czech Republic has cities like Brno and Plzeň, while the Mandarin for Amsterdam is Amusitedan. Czech phonotactics allow for really complicated consonant clusters, while Mandarin allows for none. Morphophonology. Morphophonology (or morphophonemics) looks at how morphology (the structure of words) interacts with phonology. In morphophonology one may talk about "underlying" or "morpho-phonemic" representations of words, which is a level of abstraction beneath the phonemic level. To see how this follows from the definition of morphophonology, it is necessary to look at an example. Compare the Biloxi words: Some also use this approach to deal with cases of neutralization and underspecification. Compare the Turkish words: Similar patterns in other words in Turkish show that while final stops are always devoiced, some will always voice when followed by a vowel added by suffixing, while the others always stay voiceless. Phonemically both "et"s must be represented as /et/, because phonemes are defined as the smallest units that may make words contrast (be distinguishable), so if we said the word for 'to do' was phonemically /ed/ then the two words would have to contrast! Still, we would like to say that on a more abstract level the word for 'to do' ends in a different segment, which doesn't surface (be realized) in some positions. The level of abstraction above the phoneme is known as an underlying or morpho-phonemic representation, and as is conventional we will indicate it here with pipes ||. Underlyingly, these Turkish words may be represented as |et|, |eti|, |ed|, and |edi|, and in the same way other Turkish words with this type of voicing alternation underlyingly end in a voiced stop, which surfaces as a voiceless phoneme when word-final. The parallelism between the morpho-phonemic layer and the phonemic layer should be clear. Just like how phonemes surface as phones conditioned by their environment, underlying segments surface as phonemes. The important difference is that the surfacing of morpho-phonemic segments as phonemes occurs after morphological processes (e.g. adding endings on to words) take place. In a sense, morphophonology is morphologically informed, while plain phonology isn't. Issues. In some theoretical frameworks of speech (such as phonetics and phonology for applied linguistics and language teaching or speech therapy), it is convenient to break up a language's sounds into categorical sounds—that is, sound types called 'phonemes'. The construct of the phoneme, however, is largely a phonological concern in that it is supposed to model and refer to a transcendental entity that superstructurally and/or psychologically sits over the phonetic realizations and common variations of a sound in a language. 
For example, if the English phoneme /l/ is posited to subsist, it might be said to do so because the /l/ of 'light' creates a clear contrast with a phonetically similar sounding word, such as 'right' or 'write' (both of which have a distinct /r/ at the beginning instead of a distinct /l/). Thus, 'light' and 'write' are a 'minimal pair' illustrating that, in English at least, phonemic /l/ and phonemic /r/ are distinct sound categories, and that such a distinction holds for realized speech. Such a model has the profound weakness of circular logic: phonemes are used to delimit the semantic realm of language (lexical or higher level meaning), but semantic means (minimal pairs of words, such as 'light' vs. 'right' or 'pay' vs. 'bay') are then used to define the phonological realm. Moreover, if phonemes and minimal pairs were such a precise tool, why would they result in such large variations of the sound inventories of languages (such as anywhere from 38–50 phonemes for counts of English)? Also, it is the case that most words (regardless of homophones like 'right' and 'write', or minimal pairs like 'right' and 'light') differentiate meaning on much more information than a contrast between two sounds. The phoneme is really a structuralist and/or psycholinguistic category belonging to phonology that is supposed to subsist ideally over common variations (called 'allophones') but be realized in such ways as the so-called 'clear' [l] at the beginning of a word like 'like' but also as the so-called 'dark' [l] at the end of a word like 'feel'. Such concerns are really largely outside of the realm of phonetics because structuralist and/or psycholinguistic categories are really about cognitive and mentalist aspects of language processing and acquisition. In other words, the phoneme may (or may not) be a reality of phonology; it is in no way an actual physical part of realized speech in the vocal tract. Realized speech is highly co-articulated, displays movement and spreads aspects of sounds over entire syllables and words. It is convenient to think of speech as a succession of segments (which may or may not coincide closely with phonemes, ideal segments) in order to capture it for discussion in written discourse, but actual phonetic analysis of speech confounds such a model. It should be pointed out, however, that if we wish to set down a representation of dynamic, complex speech into static writing, constructs like phonemes are very convenient fictions to indicate what we are trying to set down (alternative units in order to capture language in written form, though, include the syllable and the word). Workbook section. Exercise 1: Kalaallisut. Kalaallisut, or Greenlandic, is an Eskimo-Aleut language spoken by most of the population of Greenland, and has more speakers than all other Eskimo-Aleut languages combined. While Kalaallisut is currently written using five vowel letters, it is analyzed as having only three underlying vowel phonemes. From the following words, deduce Kalaallisut's phonemic vowel inventory and what conditions the allophonic vowels:
3,306
PHP Programming/Functions. Introduction. "Functions" (or "methods" in the context of a class/object) are a way to group common tasks or calculations to be re-used simply. Functions in computer programming are much like mathematical functions: You can give the function values to work with and get a result without having to do any calculations yourself. You can also find a huge list of predefined functions built into PHP in the PHP Manual's function reference. How to call a function. Note that echo is not a function but a language construct. "Calling a function" means causing a particular function to run at a particular point in the script. The basic ways to call a function include:
print('I am human, I am.');
if ($a == 72) { print('I am human, I am.'); }
$result = sum($a, 5);
while ($i < count($one)) { /* ... */ }
In our earlier examples we have called several functions. Most commonly we have called the function print() to print text to the output. The parameter for print() has been the string we wanted printed (for example print("Hello World!") prints "Hello World!" to the output). If the function returns some information, we assign it to a variable with a simple assignment operator "=":
$var1 = func_name();
Parameters. Parameters are variables that exist only within that function. They are provided by the programmer when the function is called and the function can read and change them locally (except for reference type variables that are changed globally, which is a more advanced topic). When declaring or calling a function that has more than one parameter, you need to separate the parameters with a comma ','. A function declaration can look like this:
function print_two_strings($var1, $var2) {
    echo $var1;
    echo "\n";
    echo $var2;
    return NULL;
}
To call this function, you must give the parameters a value. It doesn't matter what the value is, as long as there is one:
print_two_strings("Hello", "World");
Output:
Hello
World
When declaring a function, you sometimes want to have the freedom not to use all the parameters. Therefore, PHP allows you to give them default values when declaring the function:
function print_two_strings($var1 = "Hello World", $var2 = "I'm Learning PHP") {
    echo($var1);
    echo("\n");
    echo($var2);
}
These values will only be used if the function call does not include enough parameters. If there is only one parameter provided, then $var2 = "I'm Learning PHP":
print_two_strings("Hello");
Output:
Hello
I'm Learning PHP
Another way to have a dynamic number of parameters is to use PHP's built-in func_num_args, func_get_args, and func_get_arg functions.
function mean() {
    $sum = 0;
    $param_count = func_num_args();
    for ($i = 0; $i < $param_count; $i++) {
        $sum += func_get_arg($i);
    }
    $mean = $sum/$param_count;
    echo "Mean: {$mean}";
    return NULL;
}
or
function mean() {
    $sum = 0;
    $vars = func_get_args();
    for ($i = 0; $i < count($vars); $i++) {
        $sum += $vars[$i];
    }
    $mean = $sum/count($vars);
    echo "Mean: {$mean}";
    return NULL;
}
The above functions would calculate the mean of all of the values passed to them and output it. The difference is that the first function uses func_num_args and func_get_arg, while the second uses func_get_args to load the parameters into an array. The output for both of them would be the same. For example:
mean(35, 43, 3);
Output:
Mean: 27
Returning a value. This function is all well and good, but usually you will want your function to return some information.
Generally there are two reasons why a programmer would want information from a function: To return a value from a function use the return() statement in the function. function add_numbers($var1 = 0, $var2 = 0, $var3 = 0) { $var4 = $var1 + $var2 + $var3; return $var4; Example PHP script: function add_numbers($var1 = 0, $var2 = 0, $var3 = 0) { $var4 = $var1 + $var2 + $var3; return $var4; $sum = add_numbers(1, 6, 9); echo "The result of 1 + 6 + 9 is {$sum}"; Result: The result of 1 + 6 + 9 is 16 Notice that a return() statement ends the function's course. If anything appears in a function declaration after the return() statement is executed, it is parsed but not executed. This can come in handy in some cases. For example: function divide ($dividee, $divider) { if ($divider == 0) { // Can't divide by 0. return false; $result = $dividee/$divider; return $result; Notice that there is no else after the if. This is due to the fact that, if $divider does equal 0, the return() statement is executed and the function stops. If you want to return multiple variables, you need to return an array rather than a single variable. For example: function maths ($input1, $input2) { $total = ($input1 + $input2); $difference = ($input1 - $input2); $return = array("tot" => $total, "diff" => $difference); return $return; When calling this from your script, you need to call it into an array. For example: $return = maths(10, 5); In this case $return['tot'] will be the total (e.g. 15), while $return['diff'] will be the difference (5). Runtime function usage. A developer can create functions inside a PHP script without having to use the function name($param...) {} syntax. This can be done by way of programming that can let you run functions dynamically. Executing a function that is based on a variable's name. There are two ways to do it: either using the direct call, or the call_user_func or the call_user_func_array: Using call_user_func* functions to call functions. call_user_func and call_user_func_array only differ in that the call_user_func_array allows you to use the second parameter as array to pass the data very easily, and call_user_func has an infinite number of parameters that is not very useful in a professional way. In these examples, a class will be used for a wider range of using the example: class Some_Class { function my_function($text1, $text2, $text3) { $return = $text1 . "\n\n" . $text2 . "\n\n" . $text3; return $return; $my_class = new Some_Class(); Using call_user_func: $one = "One"; $two = "Two"; $three = "Three"; $callback_func = array(&$my_class, "my_function"); $result = call_user_func($callback_func, $one, $two, $three); echo $result; Using call_user_func_array: $one = "One"; $two = "Two"; $three = "Three"; $callback_func = array(&$my_class, "my_function"); $result = call_user_func_array($callback_func, array($one, $two, $three)); echo $result; Note how call_user_func and call_user_func_array are used in both of the examples. call_user_func_array allows the script to execute the function more dynamically. As there was no example of using both of these functions for a non-class function, here they are: Using call_user_func: $one = "One"; $two = "Two"; $three = "Three"; $callback_func = "my_function"; $result = call_user_func($callback_func, $one, $two, $three); echo $result; Using call_user_func_array: $one = "One"; $two = "Two"; $three = "Three"; $callback_func = "my_function"; $result = call_user_func_array($callback_func, array($one, $two, $three)); echo $result; More complicated examples. 
$my_func($param1, $param2);
$my_class_name = new ClassObject();
$my_class_name->$my_func_from_that_class($param1, $param2);
// The -> symbol is a minus sign followed by a "greater than" sign. It allows you to
// use a function that is defined in a different PHP class. It comes directly from
// object-oriented programming. Via a constructor, a function of that class is
// executable. This specific example is a function that returns no values.
call_user_func($my_func, $param1, $param2);
call_user_func(array(&${$my_class_name}, $my_func), $param1, $param2);
// Prefixing a & to a variable that represents a class object allows you to send the
// class object as a reference instead of a copy of the object. Without the &, a copy
// of $my_class_name would be made, the function would act on the copy, and when the
// function ends the original object would not have been modified. Passing the object
// by reference passes the address in memory where that object is stored, so
// call_user_func will alter the actual object.
call_user_func_array($my_func, array($param1, $param2)); // Most powerful, dynamic example
call_user_func_array(array(&${$my_class_name}, $my_func), array($param1, $param2));
function positif($x, $y) {
    $z = $x + $y;
    echo $z;
    return $z;
}
positif(2, 5); // prints 7
Creating runtime functions. Creating runtime functions is a very good way of making the script more dynamic:
$function_name = create_function('$one, $two', 'return $one + $two;');
echo $function_name . "\n\n";
echo $function_name("1.5", "2");
create_function creates a function with parameters $one and $two whose body is the code return $one + $two;. When create_function is executed, it stores the function's info in the memory and returns the function's name. This means that you cannot customise the name of the function although that would be preferred by most developers.
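Note: create_function() was deprecated in PHP 7.2 and removed in PHP 8.0. Since PHP 5.3 the usual way to create a function at runtime is an anonymous function (closure); a minimal sketch of the same addition example:
$add = function ($one, $two) {
    return $one + $two;
};
echo $add(1.5, 2); // prints 3.5
The closure stored in $add can be passed around and called through the variable, just like the generated name returned by create_function.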
2,612
Active Server Pages/Conditionals and Looping. = Objectives = In this section we will introduce the basic program flow-control statements available to you in Active Server Pages. This includes conditional statement and looping constructs. After studying this section, you should have a good understanding of how to use Active Server Pages to make decisions based on expressions. You should also understand how to repeat a block of program code based on conditions you define. = Content = Conditional statements are used by your program to make decisions and decide what the program should do next. An example of this would be deciding whether a user has entered the correct password for your site by comparing the password entered with the correct password. If the user passes the test, they are sent to their account management Web page. If they fail, they are sent back to the login screen with an error indicating the "password is invalid". This kind of decision making is common in all programming languages. You need to master this aspect of Active Server Pages in order to write dynamic web applications. Another important concept is looping. Looping simply means that you repeat the same block of code multiple times. Since you are going through the code once, going back to the beginning and repeating it again, the direction of program execution looks like a loop (or a circle) which is why we call it looping. We will introduce you to all of the looping methods available to you in this chapter. Program Flow. In general, program flow starts at the very top of the page and continues all the way down the page in the same way that you read a book. All pages are executed this way in ASP and it makes the code very easy to follow until you get to conditional statements and looping constructs. Because of the logical flow of the program, it is easy to trace through your program code and output debugging information to see what your script is doing as it gets processed by the ASP interpreter. Through use of the Response.End statement, you can place breakpoints in your script to stop execution at specific points. More about this will be discussed later. One topic that will not be covered in this section is procedure and object method calls. We will talk about these in later sections. Conditionals. A conditional statement is one where an expression is evaluated and based on the result one or more actions may be taken. This allows you to check for a certain condition and take a specific action that is required when the condition is or is not met. If-Then Statement. The "If-Then" statement is the most basic conditional statement. When put on a single line, this statement says "If the condition is met, then do this." For example, we could test to see if someone is old enough to vote with: If nAge > 18 Then Response.Write "Yes, you can vote!" You may optionally, create an If-Then statement that encloses a block of statements. A block of statements means more than one statement grouped together. In this case, the entire block of statements will be executed only when the condition is met. We use indenting in this case to help us read the code and determine where the block starts and ends. If nAge > 18 Then Response.Write "Yes, you can vote!" bCanVote = true End If As you can see in this program block, the If Then statement defines the start of the program block while the End If statement defines the end of the program block. The program block is indented to make it easier to match the start and end of the block. 
All of the statements in this program block will only be executed if the conditional statement is met (nAge > 18). When it is not, nothing will be done. If-Then-Else. The If Then statement is great, but what if you want to perform two different actions. One action to be taken when the condition is met and one action to be taken when the condition is not met. That is what the If Then Else statement was create to handle. It basically says: "If the condition is met, then do this, otherwise do this". If nAge > 18 Then bCanVote = true Else bCanVote = False As you can see from the example above, you can put the entire If Then Else statement in one line. In many cases, you will want to avoid this since it tends to make the length of the line very long. Instead, you will probably want to use the program block form like so: If nAge > 18 Then Response.Write "Yes, you can vote!" bCanVote = true Else Response.Write "No, you can not vote." bCanVote = false End If In this case, only one of the two program blocks will be executed. The first will be executed when the condition is met. The second will be executed when the condition is not met. So you can think of the If Then Else statement as saying "one or the other but not both". Although we are using more than one statement in the program blocks shown above, you could also put a single statement within each program block. For debugging purposes, you can even have no statements at all within a program block. This allows you to comment out all the statements in a program block and your script will still run no problem. You will notice that in the conditional expression for the If Then statement, we did not have to enclose the condition with parentheses "(" and ")". You can always use parentheses for grouping expressions together but they are not required to enclose conditional statements. Select Case. The If Then statements are great if you are just evaluating a "true" or "false" condition. But if you want to evaluate an expression other than a boolean and take some action based on the result, you will need another mechanism. This is why the ASP language includes the Select Case statement. This allows you to basically "select" from a list of many cases, the action to perform. The cases for a Select Case are all literal primitive values. You must have an "exact match" in order to match a "case". You may include more than one value to match, but you may not define ranges of values to match, nor may you define a pattern to match. Select Case nAge Case 15, 16 Response.Write "You are almost old enough to vote" Case 17 Response.Write "You might want to register to vote" Case 18 Response.Write "Yes, you can vote!" Case Default Response.Write "Catch-all for all other ages" End Select This select statement is a little deceiving, because it will only tell you that you can vote if the age is 18 and only 18. If the age is greater than 18, then none of the specific cases will be matched. Instead, the optional catch-all case (Case Default) will be executed when the age is greater than 18. Of course, you don't need to include the Case Default if you don't need it. By leaving it off, you are basically saying "do nothing if an exact match for a case is not found". You can use any expression for the Select Case as long as it evaluates to a primitive type. You can not use an object expression because there is no such thing as an object literal to compare it to. However, you can call an object method that returns a primitive type and use this as the expression. 
In the example shown above, you can see that we are using the carriage return to separate the Case from the block to be executed. There is no need to terminate this type of program block. It is terminated by the instance of the next case or the End Select. You may also put the case and the action to perform on the same line by using the colon (:) as a statement separator: Select Case nAge Case 15, 16 : Response.Write "You are almost old enough to vote" Case 17 : Response.Write "You might want to register to vote" Case 18 : Response.Write "Yes, you can vote!" Case Default : Response.Write "Catch-all for all other ages" End Select Looping Constructs. Looping constructs are used for repeating the same step over-and-over again. There may be many reasons to use a loop. In database-driven web applications, you will often use a loop to iterate over each record returned from a recordset. For Next Loop. The "for next" loop allows you to repeat a block of code using an incremental counter. The counter is initialized at the start of the loop and then incremented at the end of the loop. At the start of each repetition, the counter is checked to see if it has exceeded the ending value. This is best explained by using an example: For I = 1 To 10 Response.Write "I = " & I & "<br>" Next In this example, we have created a loop that is repeated 10 times. We know it executes exactly ten times because you can read the statement as "for every whole integer from 1 to 10, do this". The variable "I" is initialized with a value of 1 and the first repetition of the program block (indented) is executed. The output will be "I = 1". Don't worry about the "<br>" part, this is just HTML code to create a line break on the web page. The second time through the loop, the counter (I) is incremented and takes on a value of 2. So the second line output by this block is "I = 2". This process repeats as I is incremented one at a time until the counter exceeds the end value. So in this case, the last line printed will be "I = 10". In a more advanced case, we can change the way in which the index is incremented. If we wanted the counter to increment by 2, we would write the For Next loop as follows: For I = 1 To 10 Step 2 Response.Write "I = " & I & "<br>" Next The output of this code will be: This time we only get five values output by the loop. This is because we are incrementing the counter by 2 instead of by 1 (the default.) Notice that we do not have to finish on the ending value (10). After incrementing the counter from 9 to 11, the loop checks to see if we have exceeded the ending value (10) and exits the loop. You can also count backwards like this: For I = 10 To 1 Step -1 Response.Write "I = " & I & "<br>" Next And of course, you can substitute expressions and variables for the starting and ending values and even the amount to "step through" the loop: For I = nX + nY To YourFunc(nZ) Step nStep Response.Write "I = " & I & "<br>" Next Do While. Another looping construct is the Do While loop which will repeat a block of program code as long as the condition is met. This is kind of like the If Then conditional statement in that it expects a boolean expression. Here is what it looks like: I = 1 bFound = False Do While I < 10 Response.Write "I = " & I & "<br>" I = I + 1 Loop What we have here is a loop that does exactly the same thing as the For Next example shown in the previous section. Why would you want to use this construct instead of a "for next"? 
The problem becomes obvious when it is not easy to determine how many repetitions of the loop you need to do. X = 239821.33 Do While X > 1 Response.Write "X = " & X & "<br>" X = X / 2 Loop In this case, we do not know how many times we will have to divide the value of X by two until we end up with a value less than or equal to 1. So we just use the Do While loop to handle the logic for us. Do Until. Almost identical to the Do While looping construct is the Do Until. It works exactly the same way except that it repeats the program block until the condition evaluates to "true". You could basically do the same thing with "Do While" by enclosing your expression with "Not (expr)", but the creators of ASP realized that the code would be cleaner using this more logical statement. X = 239821.33 Do Until X <= 1 Response.Write "X = " & X & "<br>" X = X / 2 Loop Here, we have created a loop that does exactly the same thing as our "do while". We just reversed the logic of the conditional statement and changed the keywords to Do Until. In this case, it doesn't make the code that much cleaner to read. But as we will see later, there are definite cases where you will want to use Do Until. While. The While loop works exactly the same as the Do While except that it has a little shorter construct. Please read the section on "Do While" to understand how this looping construct works. X = 239821.33 While X > 1 Response.Write "X = " & X & "<br>" X = X / 2 Wend Unlike the "Do" loops, the While loop program block is terminated by the "Wend" statement. = Summary = Active Server Pages has many different control flow statements to handle the execution of your ASP page. The conditional statements are: If Then, If Then Else and Select Case. The looping constructs are For Next, Do While, Do Until and While. Conditional expressions in ASP do not need to be enclosed in parentheses ("(" and ")"). For Next loops manipulate a counter and repeat the program block until the counter exceeds the ending value. Do While, Do Until and While loops repeat based on the evaluation of a boolean expression (meaning an expression resulting in the value of "true" or "false"). = Review Questions = = Exercises =
Algorithms/Introduction. This book covers techniques for the design and analysis of algorithms. The algorithmic techniques covered include: divide and conquer, backtracking, dynamic programming, greedy algorithms, and hill-climbing. Any solvable problem generally has at least one algorithm of each of the following types: On the first and most basic level, the "obvious" solution might try to exhaustively search for the answer. Intuitively, the obvious solution is the one that comes easily if you're familiar with a programming language and the basic problem solving techniques. The second level is the methodical level and is the heart of this book: after understanding the material presented here you should be able to methodically turn most obvious algorithms into better performing algorithms. The third level, the clever level, requires more understanding of the elements involved in the problem and their properties or even a reformulation of the algorithm (e.g., numerical algorithms exploit mathematical properties that are not obvious). A clever algorithm may be hard to understand by being non-obvious that it is correct, or it may be hard to understand that it actually runs faster than what it would seem to require. The fourth and final level of an algorithmic solution is the miraculous level: this is reserved for the rare cases where a breakthrough results in a highly non-intuitive solution. Naturally, all of these four levels are relative, and some clever algorithms are covered in this book as well, in addition to the methodical techniques. Let's begin. Prerequisites. To understand the material presented in this book you need to know a programming language well enough to translate the pseudocode in this book into a working solution. You also need to know the basics about the following data structures: arrays, stacks, queues, linked-lists, trees, heaps (also called priority queues), disjoint sets, and graphs. Additionally, you should know some basic algorithms like binary search, a sorting algorithm (merge sort, heap sort, insertion sort, or others), and breadth-first or depth-first search. If you are unfamiliar with any of these prerequisites you should review the material in the "" book first. When is Efficiency Important? Not every problem requires the most efficient solution available. For our purposes, the term efficient is concerned with the time and/or space needed to perform the task. When either time or space is abundant and cheap, it may not be worth it to pay a programmer to spend a day or so working to make a program faster. However, here are some cases where efficiency matters: In short, it's important to save time when you do not have any time to spare. When is efficiency unimportant? Examples of these cases include prototypes that are used only a few times, cases where the input is small, when simplicity and ease of maintenance is more important, when the area concerned is not the bottle neck, or when there's another process or area in the code that would benefit far more from efficient design and attention to the algorithm(s). Inventing an Algorithm. Because we assume you have some knowledge of a programming language, let's start with how we translate an idea into an algorithm. Suppose you want to write a function that will take a string as input and output the string in lowercase: // "tolower -- translates all alphabetic, uppercase characters in str to lowercase" function tolower(string "str"): string What first comes to your mind when you think about solving this problem? 
Perhaps these two considerations crossed your mind: The first point is "obvious" because a character that needs to be converted might appear anywhere in the string. The second point follows from the first because, once we consider each character, we need to do something with it. There are many ways of writing the tolower function for characters: function tolower(character "c"): character There are several ways to implement this function, including: These techniques depend upon the character encoding. However such a subroutine is implemented, once we have it, the implementation of our original problem comes immediately: // "tolower -- translates all alphabetic, uppercase characters in str to lowercase" function tolower(string "str"): string let "result" := "" for-each "c" in "str": "result".append(tolower("c")) repeat return "result" end The loop is the result of our ability to translate "every character needs to be looked at" into our native programming language. It became obvious that the tolower subroutine call should be in the loop's body. The final step required to bring the high-level task into an implementation was deciding how to build the resulting string. Here, we chose to start with the empty string and append characters to the end of it. Now suppose you want to write a function for comparing two strings that tests if they are equal, ignoring case: // "equal-ignore-case -- returns true if s and t are equal, ignoring case" function equal-ignore-case(string "s", string "t"): boolean These ideas might come to mind: These ideas come from familiarity both with strings and with the looping and conditional constructs in your language. The function you thought of may have looked something like this: // "equal-ignore-case -- returns true if s or t are equal, ignoring case" function equal-ignore-case(string "s"[1.."n"], string "t"[1.."m"]): boolean if "n" != "m": return false "\if they aren't the same length, they aren't equal\" fi for "i" := 1 to "n": if tolower("s"["i"]) != tolower("t"["i"]): return false fi repeat return true end Or, if you thought of the problem in terms of functional decomposition instead of iterations, you might have thought of a function more like this: // "equal-ignore-case -- returns true if s or t are equal, ignoring case" function equal-ignore-case(string "s", string "t"): boolean return tolower("s").equals(tolower("t")) end Alternatively, you may feel neither of these solutions is efficient enough, and you would prefer an algorithm that only ever made one pass of "s" or "t". The above two implementations each require two-passes: the first version computes the lengths and then compares each character, while the second version computes the lowercase versions of the string and then compares the results to each other. (Note that for a pair of strings, it is also possible to have the length precomputed to avoid the second pass, but that can have its own drawbacks at times.) You could imagine how similar routines can be written to test string equality that not only ignore case, but also ignore accents. Already you might be getting the spirit of the pseudocode in this book. The pseudocode language is not meant to be a real programming language: it abstracts away details that you would have to contend with in any language. For example, the language doesn't assume generic types or dynamic versus static types: the idea is that it should be clear what is intended and it should not be too hard to convert it to your native language. 
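For instance, a rough Python rendering of the pseudocode above might look like the following. The helper names are ours, not part of the book's pseudocode, and the ASCII-style case conversion is just one of the encoding-dependent techniques mentioned earlier:

# A rough Python rendering of the tolower / equal-ignore-case pseudocode.
# Assumes an ASCII-style encoding in which 'A'..'Z' are contiguous.
def to_lower_char(c):
    if 'A' <= c <= 'Z':
        return chr(ord(c) - ord('A') + ord('a'))
    return c

def to_lower(s):
    result = ""
    for c in s:                          # every character needs to be looked at
        result += to_lower_char(c)
    return result

def equal_ignore_case(s, t):
    if len(s) != len(t):                 # different lengths can never be equal
        return False
    return all(to_lower_char(a) == to_lower_char(b) for a, b in zip(s, t))

print(to_lower("Hello, World"))              # hello, world
print(equal_ignore_case("Hello", "hELLO"))   # True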
(However, in doing so, you might have to make some design decisions that limit the implementation to one particular type or form of data.) There was nothing special about the techniques we used so far to solve these simple string problems: such techniques are perhaps already in your toolbox, and you may have found better or more elegant ways of expressing the solutions in your programming language of choice. In this book, we explore general algorithmic techniques to expand your toolbox even further. Taking a naive algorithm and making it more efficient might not come so immediately, but after understanding the material in this book you should be able to methodically apply different solutions, and, most importantly, you will be able to ask yourself more questions about your programs. Asking questions can be just as important as answering questions, because asking the right question can help you reformulate the problem and think outside of the box. Understanding an Algorithm. Computer programmers need an excellent ability to reason with multiple-layered abstractions. For example, consider the following code: function foo(integer "a"): if ("a" / 2) * 2 == "a": print "The value " "a" " is even." fi end To understand this example, you need to know that integer division uses truncation and therefore when the if-condition is true then the least-significant bit in "a" is zero (which means that "a" must be even). Additionally, the code uses a string printing API and is itself the definition of a function to be used by different modules. Depending on the programming task, you may think on the layer of hardware, on down to the level of processor branch-prediction or the cache. Often an understanding of binary is crucial, but many modern languages have abstractions far enough away "from the hardware" that these lower-levels are not necessary. Somewhere the abstraction stops: most programmers don't need to think about logic gates, nor is the physics of electronics necessary. Nevertheless, an essential part of programming is multiple-layer thinking. But stepping away from computer programs toward algorithms requires another layer: mathematics. A program may exploit properties of binary representations. An algorithm can exploit properties of set theory or other mathematical constructs. Just as binary itself is not explicit in a program, the mathematical properties used in an algorithm are not explicit. Typically, when an algorithm is introduced, a discussion (separate from the code) is needed to explain the mathematics used by the algorithm. For example, to really understand a greedy algorithm (such as Dijkstra's algorithm) you should understand the mathematical properties that show how the greedy strategy is valid for all cases. In a way, you can think of the mathematics as its own kind of subroutine that the algorithm invokes. But this "subroutine" is not present in the code because there's nothing to call. As you read this book try to think about mathematics as an implicit subroutine. Overview of the Techniques. The techniques this book covers are highlighted in the following overview. Backtracking is generally an inefficient, brute-force technique, but there are optimizations that can be performed to reduce both the depth of the tree and the number of branches. The technique is called backtracking because after one leaf of the tree is visited, the algorithm will go back up the call stack (undoing choices that didn't lead to success), and then proceed down some other branch. 
To be solved with backtracking techniques, a problem needs to have some form of "self-similarity," that is, smaller instances of the problem (after a choice has been made) must resemble the original problem. Usually, problems can be generalized to become self-similar. A minimal sketch of this idea appears after the list below.
Algorithm and code example.
Level 1 (easiest).
1. "Find maximum" With algorithm and several different programming languages
2. "Find minimum" With algorithm and several different programming languages
3. "Find average" With algorithm and several different programming languages
4. "Find mode" With algorithm and several different programming languages
5. "Find total" With algorithm and several different programming languages
6. "Counting" With algorithm and several different programming languages
7. "Find mean" With algorithm and several different programming languages
Level 2.
1. "Talking to computer Lv 1" With algorithm and several different programming languages
2. "Sorting-bubble sort" With algorithm and several different programming languages
Level 3.
1. "Talking to computer Lv 2" With algorithm and several different programming languages
Level 4.
1. "Talking to computer Lv 3" With algorithm and several different programming languages
2. "Find approximate maximum" With algorithm and several different programming languages
Level 5.
1. "Quicksort"
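To make the backtracking idea above concrete, here is a minimal Python sketch. It is our own illustration (a simple subset-sum search), not an example from the book: for each item we choose to take it or to skip it, and we undo ("backtrack") any choice that cannot lead to the target sum.

# A minimal backtracking sketch (our own illustration, not from the book):
# decide, for each item in turn, whether to include it, and undo choices
# that cannot lead to the target sum.
def subset_sum(items, target, chosen=None, start=0):
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)            # a leaf that is a solution
    if start == len(items) or target < 0:
        return None                    # dead end: back up the call stack
    chosen.append(items[start])        # choice: take items[start]
    found = subset_sum(items, target - items[start], chosen, start + 1)
    if found is not None:
        return found
    chosen.pop()                       # undo the choice, try the other branch
    return subset_sum(items, target, chosen, start + 1)

print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # one valid answer: [3, 8, 4]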
Algorithms/Mathematical Background. Before we begin learning algorithmic techniques, we take a detour to give ourselves some necessary mathematical tools. First, we cover mathematical definitions of terms that are used later on in the book. By expanding your mathematical vocabulary you can be more precise and you can state or formulate problems more simply. Following that, we cover techniques for analysing the running time of an algorithm. After each major algorithm covered in this book we give an analysis of its running time as well as a proof of its correctness Asymptotic Notation. In addition to correctness another important characteristic of a useful algorithm is its time and memory consumption. Time and memory are both valuable resources and there are important differences (even when both are abundant) in how we can use them. How can you measure resource consumption? One way is to create a function that describes the usage in terms of some characteristic of the input. One commonly used characteristic of an input dataset is its size. For example, suppose an algorithm takes an input as an array of formula_1 integers. We can describe the time this algorithm takes as a function formula_2 written in terms of formula_1. For example, we might write: where the value of formula_5 is some unit of time (in this discussion the main focus will be on time, but we could do the same for memory consumption). Rarely are the units of time actually in seconds, because that would depend on the machine itself, the system it's running, and its load. Instead, the units of time typically used are in terms of the number of some fundamental operation performed. For example, some fundamental operations we might care about are: the number of additions or multiplications needed; the number of element comparisons; the number of memory-location swaps performed; or the raw number of machine instructions executed. In general we might just refer to these fundamental operations performed as steps taken. Is this a good approach to determine an algorithm's resource consumption? Yes and no. When two different algorithms are similar in time consumption a precise function might help to determine which algorithm is faster under given conditions. But in many cases it is either difficult or impossible to calculate an analytical description of the exact number of operations needed, especially when the algorithm performs operations conditionally on the values of its input. Instead, what really is important is not the precise time required to complete the function, but rather the degree that resource consumption changes depending on its inputs. Concretely, consider these two functions, representing the computation time required for each size of input dataset: They look quite different, but how do they behave? Let's look at a few plots of the function (formula_5 is in red, formula_9 in blue): In the first, very-limited plot the curves appear somewhat different. In the second plot they start going in sort of the same way, in the third there is only a very small difference, and at last they are virtually identical. In fact, they approach formula_10, the dominant term. As n gets larger, the other terms become much less significant in comparison to n3. As you can see, modifying a polynomial-time algorithm's low-order coefficients doesn't help much. What really matters is the highest-order coefficient. This is why we've adopted a notation for this kind of analysis. We say that: We ignore the low-order terms. 
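As a quick numeric illustration (using a stand-in polynomial of our own, since any concrete example will do), the ratio of a function to its highest-order term approaches 1 as n grows, which is exactly why the low-order terms can be dropped:

# Stand-in example (ours): f has low-order terms, g keeps only the
# highest-order term. Their ratio tends to 1 as n grows.
def f(n):
    return n**3 + 20 * n**2 + 1000 * n

def g(n):
    return n**3

for n in (10, 100, 1000, 10000):
    print(n, f(n) / g(n))
# 10 13.0
# 100 1.3
# 1000 1.021
# 10000 1.00201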
We can say that: This gives us a way to more easily compare algorithms with each other. Running an insertion sort on formula_1 elements takes steps on the order of formula_14. Merge sort sorts in formula_15 steps. Therefore, once the input dataset is large enough, merge sort is faster than insertion sort. In general, we write when That is, formula_16 holds if and only if there exists some constants formula_19 and formula_20 such that for all formula_21 formula_5 is positive and less than or equal to formula_23. Note that the equal sign used in this notation describes a relationship between formula_5 and formula_9 instead of reflecting a true equality. In light of this, some define Big-O in terms of a set, stating that: when Big-O notation is only an upper bound; these two are both true: If we use the equal sign as an equality we can get very strange results, such as: which is obviously nonsense. This is why the set-definition is handy. You can avoid these things by thinking of the equal sign as a one-way equality, i.e.: does not imply Always keep the O on the right hand side. Big Omega. Sometimes, we want more than an upper bound on the behavior of a certain function. Big Omega provides a lower bound. In general, we say that when i.e. formula_33 if and only if there exist constants c and n0 such that for all n>n0 f(n) is positive and greater than or equal to cg(n). So, for example, we can say that but it is false to claim that Big Theta. When a given function is both O(g(n)) and Ω(g(n)), we say it is Θ(g(n)), and we have a tight bound on the function. A function f(n) is Θ(g(n)) when but most of the time, when we're trying to prove that a given formula_40, instead of using this definition, we just show that it is both O(g(n)) and Ω(g(n)). Little-O and Omega. When the asymptotic bound is not tight, we can express this by saying that formula_41 or formula_42 The definitions are: Note that a function f is in o(g(n)) when for any coefficient of g, g eventually gets larger than f, while for O(g(n)), there only has to exist a single coefficient for which g eventually gets at least as big as f. Big-O with multiple variables. Given a functions with two parameters formula_45 and formula_46, formula_47 if and only if formula_48. For example, formula_49, and formula_50. Algorithm Analysis: Solving Recurrence Equations. Merge sort of n elements: formula_51 This describes one iteration of the merge sort: the problem space formula_1 is reduced to two halves (formula_53), and then merged back together at the end of all the recursive calls (formula_54). This notation system is the bread and butter of algorithm analysis, so get used to it. There are some theorems you can use to estimate the big Oh time for a function if its recurrence equation fits a certain pattern. Substitution method. Formulate a guess about the big Oh time of your equation. Then use proof by induction to prove the guess is correct. Draw the Tree and Table. This is really just a way of getting an intelligent guess. You still have to go back to the substitution method in order to prove the big Oh time. The Master Theorem. Consider a recurrence equation that fits the following formula: for "a" ≥ 1, "b" > 1 and "k" ≥ 0. Here, "a" is the number of recursive calls made per call to the function, "n" is the input size, "b" is how much smaller the input gets, and "k" is the polynomial order of an operation that occurs each time the function is called (except for the base cases). 
For example, in the merge sort algorithm covered later, we have
T(n) = 2T(n/2) + O(n)
because two subproblems are called for each non-base case iteration, and the size of the array is divided in half each time. The O(n) at the end is the "conquer" part of this divide and conquer algorithm: it takes linear time to merge the results from the two recursive calls into the final result.
Thinking of the recursive calls of "T" as forming a tree, there are three possible cases to determine where most of the algorithm is spending its time ("most" in this sense is concerned with its asymptotic behaviour): the work may be concentrated at the leaves of the tree ("bottom heavy"), spread evenly over every level ("steady state"), or concentrated at the root ("top heavy"). Depending upon which of these three states the tree is in, "T" will have different complexities:
The Master Theorem. Given T(n) = aT(n/b) + O(n^k) for "a" ≥ 1, "b" > 1 and "k" ≥ 0:
- if a > b^k, the tree is "bottom heavy" and T(n) = O(n^(log_b a));
- if a = b^k, the tree is in a "steady state" and T(n) = O(n^k log n);
- if a < b^k, the tree is "top heavy" and T(n) = O(n^k).
For the merge sort example above, where we have T(n) = 2T(n/2) + O(n), we have a = 2, b = 2 and k = 1; thus a = b^k and so this is in the "steady state". By the master theorem, the complexity of merge sort is thus T(n) = O(n^1 log n) = O(n log n).
Amortized Analysis.
[Start with an adjacency list representation of a graph and show two nested for loops: one for each node n, and nested inside that one loop for each edge e. If there are n nodes and m edges, this could lead you to say the loop takes O(nm) time. However, only once could the inner loop take that long, and a tighter bound is O(n+m).]
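The bracketed note above can be made concrete with a small sketch (the graph below is our own example): the doubly nested loop looks like O(n·m), but summed over all nodes the inner loop touches each adjacency entry exactly once, so the total work is O(n + m).

# Iterating over every node's adjacency list: the nested loops do
# n + m units of work in total, not n * m.
adjacency = {
    "a": ["b", "c"],
    "b": ["a"],
    "c": ["a", "d"],
    "d": ["c"],
}

steps = 0
for node in adjacency:                    # n iterations of the outer loop
    for neighbour in adjacency[node]:     # across all nodes: m edge entries total
        steps += 1

print(steps, "edge visits for", len(adjacency), "nodes")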
Algorithms/Divide and Conquer. The first major algorithmic technique we cover is divide and conquer. Part of the trick of making a good divide and conquer algorithm is determining how a given problem could be separated into two or more similar, but smaller, subproblems. More generally, when we are creating a divide and conquer algorithm we will take the following steps: The first algorithm we'll present using this methodology is the merge sort. Merge Sort. The problem that merge sort solves is general sorting: given an unordered array of elements that have a total ordering, create an array that has the same elements sorted. More precisely, for an array "a" with indexes 1 through "n", if the condition holds, then "a" is said to be sorted. Here is the interface: "// sort -- returns a sorted copy of array a" function sort(array "a"): array Following the divide and conquer methodology, how can "a" be broken up into smaller subproblems? Because "a" is an array of "n" elements, we might want to start by breaking the array into two arrays of size "n"/2 elements. These smaller arrays will also be unsorted and it is meaningful to sort these smaller problems; thus we can consider these smaller arrays "similar". Ignoring the base case for a moment, this reduces the problem into a different one: Given two sorted arrays, how can they be combined to form a single sorted array that contains all the elements of both given arrays: // merge—given a and b (assumed to be sorted) returns a merged array that // preserves order function merge(array "a", array "b"): array So far, following the methodology has led us to this point, but what about the base case? The base case is the part of the algorithm concerned with what happens when the problem cannot be broken into smaller subproblems. Here, the base case is when the array only has one element. The following is a sorting algorithm that faithfully sorts arrays of only zero or one elements: // base-sort -- given an array of one element (or empty), return a copy of the // array sorted function base-sort(array "a"[1.."n"]): array assert ("n" <= 1) return "a".copy() end Putting this together, here is what the methodology has told us to write so far: "// sort -- returns a sorted copy of array a" function sort(array "a"[1.."n"]): array if "n" <= 1: return "a".copy() else: let "sub_size" := "n" / 2 let "first_half" := sort("a"[1..,"sub_size"]) let "second_half" := sort("a"["sub_size" + 1..,"n"]) return merge("first_half", "second_half") fi end And, other than the unimplemented merge subroutine, this sorting algorithm is done! Before we cover how this algorithm works, here is how merge can be written: // merge -- given a and b (assumed to be sorted) returns a merged array that // preserves order function merge(array "a"[1.."n"], array "b"[1.."m"]): array let "result" := new array["n" + "m"] let "i", "j" := 1 for "k" := 1 to "n" + "m": if "i" >= "n": "result"["k"] := "b"["j"]; "j" += 1 else-if "j" >= "m": "result"["k"] := "a"["i"]; "i" += 1 else: if "a"["i"] < "b"["j"]: "result"["k"] := "a"["i"]; "i" += 1 else: "result"["k"] := "b"["j"]; "j" += 1 fi fi repeat end [TODO: how it works; including correctness proof] This algorithm uses the fact that, given two sorted arrays, the smallest element is always in one of two places. It's either at the head of the first array, or the head of the second. Analysis. Let formula_1 be the number of steps the algorithm takes to run on input of size formula_2. 
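As a concrete reference point, the sort and merge pseudocode above can be rendered in Python along these lines; this is an illustrative sketch of ours, not the book's reference implementation:

# A compact Python rendering of the merge-sort pseudocode above.
def merge(a, b):
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    result.extend(a[i:])     # at most one of these two extends anything
    result.extend(b[j:])
    return result

def sort(a):
    if len(a) <= 1:
        return list(a)       # base case: zero or one element is already sorted
    mid = len(a) // 2
    return merge(sort(a[:mid]), sort(a[mid:]))

print(sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]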
Merging takes linear time and we recurse each time on two sub-problems of half the original size, so By the master theorem, we see that this recurrence has a "steady state" tree. Thus, the runtime is: This can be seen intuitively by asking how may times does n need to be divided by 2 before the size of the array for sorting is 1? Why, m times of course! More directly, 2m = n , equivalent to log 2m = log n, equivalent to m log22 = log 2 n , and since log2 2 = 1, equivalent to m = log2n. Since m is the number of halvings of an array before the array is chopped up into bite sized pieces of 1-element arrays, and then it will take m levels of merging a sub-array with its neighbor where the sum size of sub-arrays will be n at each level, it will be exactly n 2 comparisons for merging at each level, with m ( log2n ) levels, thus O(n 2 log n ) <=> O ( n log n). Iterative Version. This merge sort algorithm can be turned into an iterative algorithm by iteratively merging each subsequent pair, then each group of four, et cetera. Due to a lack of function overhead, iterative algorithms tend to be faster in practice. However, because the recursive version's call tree is logarithmically deep, it does not require much run-time stack space: Even sorting 4 gigs of items would only require 32 call entries on the stack, a very modest amount considering if even each call required 256 bytes on the stack, it would only require 8 kilobytes. The iterative version of mergesort is a minor modification to the recursive version - in fact we can reuse the earlier merging function. The algorithm works by merging small, sorted subsections of the original array to create larger subsections of the array which are sorted. To accomplish this, we iterate through the array with successively larger "strides". // sort -- returns a sorted copy of array a function sort_iterative(array "a"[1."n"]): array let "result" := "a".copy() for "power" := 0 to log2("n"-1) let "unit" := 2^power for "i" := 1 to "n" by "unit"*2 if i+"unit"-1 < n: let "a1"[1.."unit"] := "result"[i..i+"unit"-1] let "a2"[1."unit"] := "result"[i+"unit"..min(i+"unit"*2-1, "n")] "result"[i..i+"unit"*2-1] := merge("a1","a2") fi repeat repeat return "result" end This works because each sublist of length 1 in the array is, by definition, sorted. Each iteration through the array (using counting variable "i") doubles the size of sorted sublists by merging adjacent sublists into sorted larger versions. The current size of sorted sublists in the algorithm is represented by the "unit" variable. space inefficiency. Straight forward merge sort requires a space of 2 n , n to store the 2 sorted smaller arrays , and n to store the final result of merging. But merge sort still lends itself for batching of merging. Binary Search. Once an array is sorted, we can quickly locate items in the array by doing a binary search. Binary search is different from other divide and conquer algorithms in that it is mostly divide based (nothing needs to be conquered). The concept behind binary search will be useful for understanding the partition and quicksort algorithms, presented in the randomization chapter. Finding an item in an already sorted array is similar to finding a name in a phonebook: you can start by flipping the book open toward the middle. If the name you're looking for is on that page, you stop. If you went too far, you can start the process again with the first half of the book. 
If the name you're searching for appears later than the page, you start from the second half of the book instead. You repeat this process, narrowing down your search space by half each time, until you find what you were looking for (or, alternatively, find where what you were looking for would have been if it were present). The following algorithm states this procedure precisely: "// binary-search -- returns the index of value in the given array, or" "// -1 if value cannot be found. Assumes array is sorted in ascending order" function binary-search("value", array "A"[1.."n"]): integer return search-inner("value", "A", 1, "n" + 1) end "// search-inner -- search subparts of the array; end is one past the" "// last element " function search-inner("value", array "A", "start", "end"): integer if "start" == "end": return -1 "// not found" fi let "length" := "end" - "start" if "length" == 1: if "value" == "A"["start"]: return "start" else: return -1 fi fi let "mid" := "start" + ("length" / 2) if "value" == "A"["mid"]: return "mid" else-if "value" > "A"["mid"]: return search-inner("value", "A", "mid" + 1, "end") else: return search-inner("value", "A", "start", "mid") fi end Note that all recursive calls made are tail-calls, and thus the algorithm is iterative. We can explicitly remove the tail-calls if our programming language does not do that for us already by turning the argument values passed to the recursive call into assignments, and then looping to the top of the function body again: "// binary-search -- returns the index of value in the given array, or" "// -1 if value cannot be found. Assumes array is sorted in ascending order" function binary-search("value", array "A"[1.."n"]): integer let "start" := 1 let "end" := "n" + 1 loop: if "start" == "end": return -1 fi "// not found" let "length" := "end" - "start" if "length" == 1: if "value" == "A"["start"]: return "start" else: return -1 fi fi let "mid" := "start" + ("length" / 2) if "value" == "A"["mid"]: return "mid" else-if "value" > "A"["mid"]: "start" := "mid" + 1 else: "end" := "mid" fi repeat end Even though we have an iterative algorithm, it's easier to reason about the recursive version. If the number of steps the algorithm takes is formula_1, then we have the following recurrence that defines formula_1: The size of each recursive call made is on half of the input size (formula_2), and there is a constant amount of time spent outside of the recursion (i.e., computing "length" and "mid" will take the same amount of time, regardless of how many elements are in the array). By the master theorem, this recurrence has values formula_9, which is a "steady state" tree, and thus we use the steady state case that tells us that Thus, this algorithm takes logarithmic time. Typically, even when "n" is large, it is safe to let the stack grow by formula_11 activation records through recursive calls. difficulty in initially correct binary search implementations. The article on wikipedia on Binary Search also mentions the difficulty in writing a correct binary search algorithm: for instance, the java Arrays.binarySearch(..) overloaded function implementation does an iterative binary search which didn't work when large integers overflowed a simple expression of mid calculation codice_1 i.e. codice_2. Hence the above algorithm is more correct in using a length = end - start, and adding half length to start. The java binary Search algorithm gave a return value useful for finding the position of the nearest key greater than the search key, i.e. 
the position where the search key could be inserted. i.e. it returns " - (keypos+1)" , if the search key wasn't found exactly, but an insertion point was needed for the search key ( insertion_point = "-return_value - 1"). Looking at boundary values, an insertion point could be at the front of the list ( ip = 0, return value = -1 ), to the position just after the last element, ( ip = length(A), return value = "- length(A) - 1") . As an exercise, trying to implement this functionality on the above iterative binary search can be useful for further comprehension. Integer Multiplication. If you want to perform arithmetic with small integers, you can simply use the built-in arithmetic hardware of your machine. However, if you wish to multiply integers larger than those that will fit into the standard "word" integer size of your computer, you will have to implement a multiplication algorithm in software or use a software implementation written by someone else. For example, RSA encryption needs to work with integers of very large size (that is, large relative to the 64-bit word size of many machines) and utilizes special multiplication algorithms. Grade School Multiplication. How do we represent a large, multi-word integer? We can have a binary representation by using an array (or an allocated block of memory) of words to represent the bits of the large integer. Suppose now that we have two integers, formula_12 and formula_13, and we want to multiply them together. For simplicity, let's assume that both formula_12 and formula_13 have formula_2 bits each (if one is shorter than the other, we can always pad on zeros at the beginning). The most basic way to multiply the integers is to use the grade school multiplication algorithm. This is even easier in binary, because we only multiply by 1 or 0: x6 x5 x4 x3 x2 x1 x0 × y6 y5 y4 y3 y2 y1 y0 x6 x5 x4 x3 x2 x1 x0 (when y0 is 1; 0 otherwise) x6 x5 x4 x3 x2 x1 x0 0 (when y1 is 1; 0 otherwise) x6 x5 x4 x3 x2 x1 x0 0 0 (when y2 is 1; 0 otherwise) x6 x5 x4 x3 x2 x1 x0 0 0 0 (when y3 is 1; 0 otherwise) ... et cetera As an algorithm, here's what multiplication would look like: "// multiply -- return the product of two binary integers, both of length n" function multiply(bitarray "x"[1.."n"], bitarray "y"[1.."n"]): bitarray bitarray "p" = 0 for "i":=1 to "n": if "y"["i"] == 1: "p" := add("p", "x") fi "x" := pad("x", 0) "// add another zero to the end of x" repeat return "p" end The subroutine add adds two binary integers and returns the result, and the subroutine pad adds an extra digit to the end of the number (padding on a zero is the same thing as shifting the number to the left; which is the same as multiplying it by two). Here, we loop "n" times, and in the worst-case, we make "n" calls to add. The numbers given to add will at most be of length formula_17. Further, we can expect that the add subroutine can be done in linear time. Thus, if "n" calls to a formula_18 subroutine are made, then the algorithm takes formula_19 time. Divide and Conquer Multiplication. As you may have figured, this isn't the end of the story. We've presented the "obvious" algorithm for multiplication; so let's see if a divide and conquer strategy can give us something better. One route we might want to try is breaking the integers up into two parts. For example, the integer "x" could be divided into two parts, formula_20 and formula_21, for the high-order and low-order halves of formula_22. 
For example, if formula_22 has "n" bits, we have We could do the same for formula_25: But from this division into smaller parts, it's not clear how we can multiply these parts such that we can combine the results for the solution to the main problem. First, let's write out formula_27 would be in such a system: This comes from simply multiplying the new hi/lo representations of and formula_25 together. The multiplication of the smaller pieces are marked by the "formula_30" symbol. Note that the multiplies by formula_31 and formula_32 does not require a real multiplication: we can just pad on the right number of zeros instead. This suggests the following divide and conquer algorithm: "// multiply -- return the product of two binary integers, both of length n" function multiply(bitarray "x"[1.."n"], bitarray "y"[1.."n"]): bitarray if "n" == 1: return "x"[1] * "y"[1] fi "// multiply single digits: O(1)" let "xh" := "x"["n"/2 + 1, .., "n"] "// array slicing, O(n)" let "xl" := "x"[0, .., "n" / 2] "// array slicing, O(n)" let "yh" := "y"["n"/2 + 1, .., "n"] "// array slicing, O(n)" let "yl" := "y"[0, .., "n" / 2] "// array slicing, O(n)" let "a" := multiply("xh", "yh") "// recursive call; T(n/2)" let "b" := multiply("xh", "yl") "// recursive call; T(n/2)" let "c" := multiply("xl", "yh") "// recursive call; T(n/2)" let "d" := multiply("xl", "yl") "// recursive call; T(n/2)" "b" := add("b", "c") "// regular addition; O(n)" "a" := shift("a", "n") "// pad on zeros; O(n)" "b" := shift("b", "n"/2) "// pad on zeros; O(n)" return add("a", "b", "d") "// regular addition; O(n)" end We can use the master theorem to analyze the running time of this algorithm. Assuming that the algorithm's running time is formula_1, the comments show how much time each step takes. Because there are four recursive calls, each with an input of size formula_34, we have: Here, formula_36, and given that formula_37 we are in the "bottom heavy" case and thus plugging in these values into the bottom heavy case of the master theorem gives us: Thus, after all of that hard work, we're still no better off than the grade school algorithm! Luckily, numbers and polynomials are a data set we know additional information about. In fact, we can reduce the running time by doing some mathematical tricks. First, let's replace the formula_31 with a variable, "z": This appears to be a quadratic formula, and we know that you only need three co-efficients or points on a graph in order to uniquely describe a quadratic formula. However, in our above algorithm we've been using four multiplications total. Let's try recasting and formula_25 as linear functions: Now, for formula_27 we just need to compute formula_45. We'll evaluate formula_46 and formula_47 at three points. Three convenient points to evaluate the function will be at formula_48: [TODO: show how to make the two-parts breaking more efficient; then mention that the best multiplication uses the FFT, but don't actually cover that topic (which is saved for the advanced book)] Base Conversion. [TODO: Convert numbers from decimal to binary quickly using DnC.] Along with the binary, the science of computers employs bases 8 and 16 for it's very easy to convert between the three while using bases 8 and 16 shortens considerably number representations. To represent 8 first digits in the binary system we need 3 bits. Thus we have, 0=000, 1=001, 2=010, 3=011, 4=100, 5=101, 6=110, 7=111. Assume M=(2065)8. 
In order to obtain its binary representation, replace each of the four digits with the corresponding triple of bits: 010 000 110 101. After removing the leading zeros, binary representation is immediate: M=(10000110101)2. (For the hexadecimal system conversion is quite similar, except that now one should use 4-bit representation of numbers below 16.) This fact follows from the general conversion algorithm and the observation that 8=formula_49 (and, of course, 16=formula_50). Thus it appears that the shortest way to convert numbers into the binary system is to first convert them into either octal or hexadecimal representation. Now let see how to implement the general algorithm programmatically. For the sake of reference, representation of a number in a system with base (radix) N may only consist of digits that are less than N. More accurately, if with formula_52 we have a representation of M in base N system and write If we rewrite (1) as the algorithm for obtaining coefficients ai becomes more obvious. For example, formula_55 and formula_56, and so on. Recursive Implementation. Let's represent the algorithm mnemonically: (result is a string or character variable where I shall accumulate the digits of the result one at a time) result = " if M < N, result = 'M' + result. Stop. S = M mod N, result = 'S' + result M = M/N goto 2 A few words of explanation. " is an empty string. You may remember it's a zero element for string concatenation. Here we check whether the conversion procedure is over. It's over if M is less than N in which case M is a digit (with some qualification for N>10) and no additional action is necessary. Just prepend it in front of all other digits obtained previously. The '+' plus sign stands for the string concatenation. If we got this far, M is not less than N. First we extract its remainder of division by N, prepend this digit to the result as described previously, and reassign M to be M/N. This says that the whole process should be repeated starting with step 2. I would like to have a function say called Conversion that takes two arguments M and N and returns representation of the number M in base N. The function might look like this 1 String Conversion(int M, int N) // return string, accept two integers 2 { 3 if (M < N) // see if it's time to return 4 return new String("+M); // "+M makes a string out of a digit 5 else // the time is not yet ripe 6 return Conversion(M/N, N) + new String("+(M mod N)); // continue 7 } This is virtually a working Java function and it would look very much the same in C++ and require only a slight modification for C. As you see, at some point the function calls itself with a different first argument. One may say that the function is defined in terms of itself. Such functions are called recursive. (The best known recursive function is factorial: n!=n*(n-1)!.) The function calls (applies) itself to its arguments, and then (naturally) applies itself to its new arguments, and then ... and so on. We can be sure that the process will eventually stop because the sequence of arguments (the first ones) is decreasing. Thus sooner or later the first argument will be less than the second and the process will start emerging from the recursion, still a step at a time. Iterative Implementation. Not all programming languages allow functions to call themselves recursively. Recursive functions may also be undesirable if process interruption might be expected for whatever reason. 
For example, in the Tower of Hanoi puzzle, the user may want to interrupt the demonstration being eager to test his or her understanding of the solution. There are complications due to the manner in which computers execute programs when one wishes to jump out of several levels of recursive calls. Note however that the string produced by the conversion algorithm is obtained in the wrong order: all digits are computed first and then written into the string the last digit first. Recursive implementation easily got around this difficulty. With each invocation of the Conversion function, computer creates a new environment in which passed values of M, N, and the newly computed S are stored. Completing the function call, i.e. returning from the function we find the environment as it was before the call. Recursive functions store a sequence of computations implicitly. Eliminating recursive calls implies that we must manage to store the computed digits explicitly and then retrieve them in the reversed order. In Computer Science such a mechanism is known as LIFO - Last In First Out. It's best implemented with a stack data structure. Stack admits only two operations: push and pop. Intuitively stack can be visualized as indeed a stack of objects. Objects are stacked on top of each other so that to retrieve an object one has to remove all the objects above the needed one. Obviously the only object available for immediate removal is the top one, i.e. the one that got on the stack last. Then iterative implementation of the Conversion function might look as the following. 1 String Conversion(int M, int N) // return string, accept two integers 2 { 3 Stack stack = new Stack(); // create a stack 4 while (M >= N) // now the repetitive loop is clearly seen 5 { 6 stack.push(M mod N); // store a digit 7 M = M/N; // find new M 8 } 9 // now it's time to collect the digits together 10 String str = new String("+M); // create a string with a single digit M 11 while (stack.NotEmpty()) 12 str = str+stack.pop() // get from the stack next digit 13 return str; 14 } The function is by far longer than its recursive counterpart; but, as I said, sometimes it's the one you want to use, and sometimes it's the only one you may actually use. Closest Pair of Points. For a set of points on a two-dimensional plane, if you want to find the closest two points, you could compare all of them to each other, at formula_19 time, or use a divide and conquer algorithm. [TODO: explain the algorithm, and show the n^2 algorithm] [TODO: write the algorithm, include intuition, proof of correctness, and runtime analysis] Use this link for the original document. http://www.cs.mcgill.ca/~cs251/ClosestPair/ClosestPairDQ.html Closest Pair: A Divide-and-Conquer Approach. Introduction. The brute force approach to the closest pair problem (i.e. checking every possible pair of points) takes quadratic time. We would now like to introduce a faster divide-and-conquer algorithm for solving the closest pair problem. Given a set of points in the plane S, our approach will be to split the set into two roughly equal halves (S1 and S2) for which we already have the solutions, and then to merge the halves in linear time to yield an O(nlogn) algorithm. However, the actual solution is far from obvious. It is possible that the desired pair might have one point in S1 and one in S2, does this not force us once again to check all possible pairs of points? 
The divide-and-conquer approach presented here generalizes directly from the one dimensional algorithm we presented in the previous section. Closest Pair in the Plane. Alright, we'll generalize our 1-D algorithm as directly as possible (see figure 3.2). Given a set of points S in the plane, we partition it into two subsets S1 and S2 by a vertical line l such that the points in S1 are to the left of l and those in S2 are to the right of l. We now recursively solve the problem on these two sets obtaining minimum distances of d1 (for S1), and d2 (for S2). We let d be the minimum of these. Now, identical to the 1-D case, if the closes pair of the whole set consists of one point from each subset, then these two points must be within d of l. This area is represented as the two strips P1 and P2 on either side of l Up to now, we are completely in step with the 1-D case. At this point, however, the extra dimension causes some problems. We wish to determine if some point in say P1 is less than d away from another point in P2. However, in the plane, we don't have the luxury that we had on the line when we observed that only one point in each set can be within d of the median. In fact, in two dimensions, all of the points could be in the strip! This is disastrous, because we would have to compare n2 pairs of points to merge the set, and hence our divide-and-conquer algorithm wouldn't save us anything in terms of efficiency. Thankfully, we can make another life saving observation at this point. For any particular point p in one strip, only points that meet the following constraints in the other strip need to be checked: Simply because points outside of this bounding box cannot be less than d units from p (see figure 3.3). It just so happens that because every point in this box is at least d apart, there can be at most six points within it. Now we don't need to check all n2 points. All we have to do is sort the points in the strip by their y-coordinates and scan the points in order, checking each point against a maximum of 6 of its neighbors. This means at most 6*n comparisons are required to check all candidate pairs. However, since we sorted the points in the strip by their y-coordinates the process of merging our two subsets is not linear, but in fact takes O(nlogn) time. Hence our full algorithm is not yet O(nlogn), but it is still an improvement on the quadratic performance of the brute force approach (as we shall see in the next section). In section 3.4, we will demonstrate how to make this algorithm even more efficient by strengthening our recursive sub-solution. Summary and Analysis of the 2-D Algorithm. We present here a step by step summary of the algorithm presented in the previous section, followed by a performance analysis. The algorithm is simply written in list form because I find pseudo-code to be burdensome and unnecessary when trying to understand an algorithm. Note that we pre-sort the points according to their x coordinates, and maintain another structure which holds the points sorted by their y values(for step 4), which in itself takes O(nlogn) time. ClosestPair of a set of points: Analysis: so, formula_58 which, according the Master Theorem, result formula_59 Hence the merging of the sub-solutions is dominated by the sorting at step 4, and hence takes O(nlogn) time. This must be repeated once for each level of recursion in the divide-and-conquer algorithm, hence the whole of algorithm ClosestPair takes O(logn*nlogn) = O(nlog2n) time. Improving the Algorithm. 
We can improve on this algorithm slightly by reducing the time it takes to achieve the y-coordinate sorting in Step 4. This is done by asking that the recursive solution computed in Step 1 returns the points in sorted order by their y coordinates. This will yield two sorted lists of points which need only be merged (a linear time operation) in Step 4 in order to yield a complete sorted list. Hence the revised algorithm involves making the following changes: Step 1: Divide the set into..., and recursively compute the distance in each part, returning the points in each set in sorted order by y-coordinate. Step 4: Merge the two sorted lists into one sorted list in O(n) time. Hence the merging process is now dominated by the linear time steps thereby yielding an O(nlogn) algorithm for finding the closest pair of a set of points in the plane. Towers Of Hanoi Problem. [TODO: Write about the towers of hanoi algorithm and a program for it] There are n distinct sized discs and three pegs such that discs are placed at the left peg in the order of their sizes. The smallest one is at the top while the largest one is at the bottom. This game is to move all the discs from the left peg Rules. 1) Only one disc can be moved in each step. 2) Only the disc at the top can be moved. 3) Any disc can only be placed on the top of a larger disc. Solution. Intuitive Idea. In order to move the largest disc from the left peg to the middle peg, the smallest discs must be moved to the right peg first. After the largest one is moved. The smaller discs are then moved from the right peg to the middle peg. Recurrence. Suppose n is the number of discs. To move n discs from peg a to peg b, 1) If n>1 then move n-1 discs from peg a to peg c 2) Move n-th disc from peg a to peg b 3) If n>1 then move n-1 discs from peg c to peg a Pseudocode. void hanoi(n,src,dst){ if (n>1) hanoi(n-1,src,pegs-{src,dst}); print "move n-th disc from src to dst"; if (n>1) hanoi(n-1,pegs-{src,dst},dst); Analysis. The analysis is trivial. formula_60
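For readers who want to run the solution, here is a Python version of the Hanoi pseudocode above. It is our own sketch; note that the final recursive step moves the n-1 smaller discs from the spare peg onto the destination peg, on top of the disc just moved.

# Runnable Python sketch of the Tower of Hanoi recursion.
def hanoi(n, src, dst, spare):
    if n > 1:
        hanoi(n - 1, src, spare, dst)     # clear the n-1 smaller discs out of the way
    print(f"move disc {n} from {src} to {dst}")
    if n > 1:
        hanoi(n - 1, spare, dst, src)     # put the smaller discs back on top

hanoi(3, "left", "middle", "right")       # prints 2**3 - 1 = 7 moves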
Algorithms/Randomization. As deterministic algorithms are driven to their limits when one tries to solve hard problems with them, a useful technique to speed up the computation is randomization. In randomized algorithms, the algorithm has access to a "random source", which can be imagined as tossing coins during the computation. Depending on the outcome of the toss, the algorithm may split up its computation path. There are two main types of randomized algorithms: Las Vegas algorithms and Monte-Carlo algorithms. In Las Vegas algorithms, the algorithm may use the randomness to speed up the computation, but the algorithm must always return the correct answer to the input. Monte-Carlo algorithms do not have the latter restriction, that is, they are allowed to give "wrong" return values. However, returning a wrong return value must have a "small probability", otherwise that Monte-Carlo algorithm would not be of any use. Many approximation algorithms use randomization. Ordered Statistics. Before covering randomized techniques, we'll start with a deterministic problem that leads to a problem that utilizes randomization. Suppose you have an unsorted array of values and you want to find In the immortal words of one of our former computer science professors, "How can you do?" find-max. First, it's relatively straightforward to find the largest element: "// find-max -- returns the maximum element" function find-max(array "vals"[1.."n"]): element let "result" := "vals[1]" for "i" from "2" to "n": "result" := max("result", "vals[i]") repeat return "result" end An initial assignment of formula_1 to "result" would work as well, but this is a useless call to the max function since the first element compared gets set to "result". By initializing result as such the function only requires "n-1" comparisons. (Moreover, in languages capable of metaprogramming, the data type may not be strictly numerical and there might be no good way of assigning formula_1; using vals[1] is type-safe.) A similar routine to find the minimum element can be done by calling the min function instead of the max function. find-min-max. But now suppose you want to find the min and the max at the same time; here's one solution: "// find-min-max -- returns the minimum and maximum element of the given array" function find-min-max(array "vals"): pair end Because find-max and find-min both make "n-1" calls to the max or min functions (when "vals" has "n" elements), the total number of comparisons made in find-min-max is formula_3. However, some redundant comparisons are being made. These redundancies can be removed by "weaving" together the min and max functions: "// find-min-max -- returns the minimum and maximum element of the given array" function find-min-max(array "vals"[1.."n"]): pair let "min" := formula_4 let "max" := formula_1 if "n" is odd: "min" := "max" := "vals"[1] "vals" := "vals"[2..,"n"] "// we can now assume n is even" "n" := "n" - 1 fi for "i":=1 to "n" by 2: "// consider pairs of values in vals" if "vals"["i"] < "vals"["i" + 1]: let "a" := "vals"["i"] let "b" := "vals"["i" + 1] else: let "a" := "vals"["i" + 1] let "b" := "vals"["i"] "// invariant: a <= b" fi if "a" < "min": "min" := "a" fi if "b" > "max": "max" := "b" fi repeat end Here, we only loop formula_6 times instead of "n" times, but for each iteration we make three comparisons. Thus, the number of comparisons made is formula_7, resulting in a formula_8 speed up over the original algorithm. 
Only three comparisons need to be made instead of four because, by construction, it's always the case that formula_9. (In the first part of the "if", we actually know more specifically that formula_10, but under the else part, we can only conclude that formula_9.) This property is utilized by noting that "a" doesn't need to be compared with the current maximum, because "b" is already greater than or equal to "a", and similarly, "b" doesn't need to be compared with the current minimum, because "a" is already less than or equal to "b". In software engineering, there is a struggle between using libraries versus writing customized algorithms. In this case, the min and max functions weren't used in order to get a faster find-min-max routine. Such an operation would probably not be the bottleneck in a real-life program: however, if testing reveals the routine should be faster, such an approach should be taken. Typically, the solution that reuses libraries is better overall than writing customized solutions. Techniques such as open implementation and aspect-oriented programming may help manage this contention to get the best of both worlds, but regardless it's a useful distinction to recognize. find-median. Finally, we need to consider how to find the median value. One approach is to sort the array then extract the median from the position "vals"["n"/2]: "// find-median -- returns the median element of vals" function find-median(array "vals"[1.."n"]): element assert ("n" > 0) sort("vals") return "vals"["n" / 2] end If our values are not numbers close enough in value (or otherwise cannot be sorted by a radix sort) the sort above is going to require formula_12 steps. However, it is possible to extract the "n"th-ordered statistic in formula_13 time. The key is eliminating the sort: we don't actually require the entire array to be sorted in order to find the median, so there is some waste in sorting the entire array first. One technique we'll use to accomplish this is randomness. Before presenting a non-sorting find-median function, we introduce a divide and conquer-style operation known as partitioning. What we want is a routine that finds a random element in the array and then partitions the array into three parts: These three sections are denoted by two integers: "j" and "i". The partitioning is performed "in place" in the array: "// partition -- break the array three partitions based on a randomly picked element" Note that when the random element picked is actually represented three or more times in the array it's possible for entries in all three partitions to have the same value as the random element. While this operation may not sound very useful, it has a powerful property that can be exploited: When the partition operation completes, the randomly picked element will be in the same position in the array as it would be if the array were fully sorted! This property might not sound so powerful, but recall the optimization for the find-min-max function: we noticed that by picking elements from the array in pairs and comparing them to each other first we could reduce the total number of comparisons needed (because the current min and max values need to be compared with only one value each, and not two). A similar concept is used here. 
While the code for partition is not magical, it has some tricky boundary cases: "// partition -- break the array into three ordered partitions from a random element" let "m" := 0 let "n" := "vals".length - 2 // for an array vals, vals[vals.length-1] is the last element, which holds the partition, // so the last sort element is vals[vals.length-2] let "irand" := random("m", "n") "// returns any value from m to n" let "x" := "vals"["irand"] swap( "irand","n"+ 1 ) // n+1 = vals.length-1 , which is the right most element, and acts as store for partition element and sentinel for m // values in "vals"["n"..] are greater than "x" // values in "vals"[0.."m"] are less than "x" while (m <= n ) // see explanation in quick sort why should be m <= n instead of m < n in the 2 element case, // vals.length -2 = 0 = n = m, but if the 2-element case is out-of-order vs. in-order, there must be a different action. // by implication, the different action occurs within this loop, so must process the m = n case before exiting. while "vals"["m"] <= "x" // in the 2-element case, second element is partition, first element at m. If in-order, m will increment "m"++ endwhile while "x" < "vals"["n"] && "n" > 0 // stops if vals[n] belongs in left partition or hits start of array "n"—endwhile if ( m >= n) break; swap("m","n") // exchange "vals"["n"] and "vals"["m"] "m"++ // don't rescan swapped elements "n"—endwhile // partition: [0.."m"-1] [] ["n"+1..] note that "m"="n"+1 // if you need non empty sub-arrays: swap("m","vals".length - 1) // put the partition element in the between left and right partitions // in 2-element out-of-order case, m=0 (not incremented in loop), and the first and last(second) element will swap. // partition: [0.."n"-1] ["n".."n"] ["n"+1..] end We can use partition as a subroutine for a general find operation: "// find -- moves elements in vals such that location k holds the value it would when sorted" function find(array "vals", integer "k") assert (0 <= "k" < "vals".length) "// k it must be a valid index" if "vals".length <= 1: return fi let pair ("j", "i") := partition("vals") if "k" <= "i": find("a"[0..,"i"], "k") else-if "j" <= "k": find("a"["j"..,"n"], "k" - "j") fi TODO: debug this! end Which leads us to the punch-line: "// find-median -- returns the median element of vals" function find-median(array "vals"): element assert ("vals".length > 0) let "median_index" := "vals".length / 2; find("vals", "median_index") return "vals"["median_index"] end One consideration that might cross your mind is "is the random call really necessary?" For example, instead of picking a random pivot, we could always pick the middle element instead. Given that our algorithm works with all possible arrays, we could conclude that the running time on average for "all of the possible inputs" is the same as our analysis that used the random function. The reasoning here is that under the set of all possible arrays, the middle element is going to be just as "random" as picking anything else. But there's a pitfall in this reasoning: Typically, the input to an algorithm in a program isn't random at all. For example, the input has a higher probability of being sorted than just by chance alone. Likewise, because it is real data from real programs, the data might have other patterns in it that could lead to suboptimal results. 
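Since the find routine above still carries its "TODO: debug this!", here is a compact runnable sketch of the same randomized selection idea in Python (an addition to the text). It uses a simpler partition-into-three-lists formulation instead of the in-place index juggling, so it shows the logic of random pivoting and recursing into one side rather than an optimized in-place implementation:

import random

def find_kth(vals, k):
    # Return the element that would sit at index k if vals were sorted.
    assert 0 <= k < len(vals)
    if len(vals) == 1:
        return vals[0]
    pivot = random.choice(vals)                  # the randomly picked element
    less    = [x for x in vals if x < pivot]
    equal   = [x for x in vals if x == pivot]
    greater = [x for x in vals if x > pivot]
    if k < len(less):
        return find_kth(less, k)                 # answer lies in the "less than pivot" part
    elif k < len(less) + len(equal):
        return pivot                             # pivot is already in its sorted position
    else:
        return find_kth(greater, k - len(less) - len(equal))

def find_median(vals):
    assert len(vals) > 0
    return find_kth(vals, len(vals) // 2)

print(find_median([7, 1, 5, 3, 9]))              # 5

The random.choice call is exactly the point at issue in the surrounding discussion: replacing it with a fixed rule such as "always take the middle element" keeps the code correct, but ties its running time to patterns in the input.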
To put this another way: for the randomized median finding algorithm, there is a very small probability it will run suboptimally, independent of what the input is; while for a deterministic algorithm that just picks the middle element, there is a greater chance it will run poorly on some of the most frequent input types it will receive. This leads us to the following guideline: Note that there are "derandomization" techniques that can take an average-case fast algorithm and turn it into a fully deterministic algorithm. Sometimes the overhead of derandomization is so much that it requires very large datasets to get any gains. Nevertheless, derandomization in itself has theoretical value. The randomized find algorithm was invented by C. A. R. "Tony" Hoare. While Hoare is an important figure in computer science, he may be best known in general circles for his quicksort algorithm, which we discuss in the next section. Quicksort. The median-finding partitioning algorithm in the previous section is actually very close to the implementation of a full blown sorting algorithm. Building a Quicksort Algorithm is left as an exercise for the reader, and is recommended first, before reading the next section ( Quick sort is diabolical compared to Merge sort, which is a sort not improved by a randomization step ) . A key part of quick sort is choosing the right median. But to get it up and running quickly, start with the assumption that the array is unsorted, and the rightmost element of each array is as likely to be the median as any other element, and that we are entirely optimistic that the rightmost doesn't happen to be the largest key , which would mean we would be removing one element only ( the partition element) at each step, and having no right array to sort, and a n-1 left array to sort. This is where randomization is important for quick sort, i.e. "choosing the more optimal partition key", which is pretty important for quick sort to work efficiently. Compare the number of comparisions that are required for quick sort vs. insertion sort. With insertion sort, the average number of comparisons for finding the lowest first element in an ascending sort of a randomized array is n /2 . The second element's average number of comparisons is (n-1)/2; the third element ( n- 2) / 2. The total number of comparisons is [ n + (n - 1) + (n - 2) + (n - 3) .. + (n - [n-1]) ] divided by 2, which is [ n x n - (n-1)! ] /2 or about O(n squared) . In Quicksort, the number of comparisons will halve at each partition step if the true median is chosen, since the left half partition doesn't need to be compared with the right half partition, but at each step , the number elements of all partitions created by the previously level of partitioning will still be n. The number of levels of comparing n elements is the number of steps of dividing n by two , until n = 1. Or in reverse, 2 ^ m ~ n, so m = log2 n. So the total number of comparisons is n (elements) x m (levels of scanning) or n x log2n , So the number of comparison is O(n x log 2(n) ) , which is smaller than insertion sort's O(n^2) or O( n x n ). (Comparing O(n x log 2(n) ) with O( n x n ) , the common factor n can be eliminated , and the comparison is log2(n) vs n , which is exponentially different as n becomes larger. e.g. compare n = 2^16 , or 16 vs 32768, or 32 vs 4 gig ). 
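To make the comparison-count argument concrete, the following sketch (an addition to the text; the counting is rough and the 2000-element test size is arbitrary) counts key comparisons for a simple insertion sort and for a value-partitioning quicksort with a random pivot on the same shuffled input. The first count grows roughly like n*n/4, the second stays near n*log2(n):

import random

def insertion_sort_comparisons(a):
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1                           # one key comparison
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return count

def quicksort_comparisons(a):
    a = list(a)
    count = 0
    def qsort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        pivot = a[random.randint(lo, hi)]        # random pivot value
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                count += 1
                i += 1
            count += 1                           # comparison that stopped the left scan
            while a[j] > pivot:
                count += 1
                j -= 1
            count += 1                           # comparison that stopped the right scan
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        qsort(lo, j)
        qsort(i, hi)
    qsort(0, len(a) - 1)
    return count

data = list(range(2000))
random.shuffle(data)
print("insertion sort comparisons:", insertion_sort_comparisons(data))
print("quicksort comparisons:     ", quicksort_comparisons(data))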
To implement the partitioning in-place on a part of the array determined by a previous recursive call, what is needed a scan from each end of the part , swapping whenever the value of the left scan's current location is greater than the partition value, and the value of the right scan's current location is less than the partition value. So the initial step is :- Assign the partition value to the right most element, swapping if necessary. So the partitioning step is :- increment the left scan pointer while the current value is less than the partition value. decrement the right scan pointer while the current value is more than the partition value , or the location is equal to or more than the left most location. exit if the pointers have crossed ( l >= r), OTHERWISE perform a swap where the left and right pointers have stopped , on values where the left pointer's value is greater than the partition, and the right pointer's value is less than the partition. Finally, after exiting the loop because the left and right pointers have crossed, "swap the "rightmost" partition value, with the last location of the left forward scan pointer ", and hence ends up between the left and right partitions. Make sure at this point , that after the final swap, the cases of a 2 element in-order array, and a 2 element out-of-order array , are handled correctly, which should mean all cases are handled correctly. This is a good debugging step for getting quick-sort to work. For the in-order two-element case, the left pointer stops on the partition or second element , as the partition value is found. The right pointer , scanning backwards, starts on the first element before the partition, and stops because it is in the leftmost position. The pointers cross, and the loop exits before doing a loop swap. Outside the loop, the contents of the left pointer at the rightmost position and the partition , also at the right most position , are swapped, achieving no change to the in-order two-element case. For the out-of-order two-element case, The left pointer scans and stops at the first element, because it is greater than the partition (left scan value stops to swap values greater than the partition value). The right pointer starts and stops at the first element because it has reached the leftmost element. The loop exits because left pointer and right pointer are equal at the first position, and the contents of the left pointer at the first position and the partition at the rightmost (other) position , are swapped , putting previously out-of-order elements , into order. Another implementation issue, is to how to move the pointers during scanning. Moving them at the end of the outer loop seems logical. partition(a,l,r) { v = a[r]; i = l; j = r -1; while ( i <= j ) { // need to also scan when i = j as well as i < j , // in the 2 in-order case, // so that i is incremented to the partition // and nothing happens in the final swap with the partition at r. 
while ( a[i] < v) ++i; while ( v <= a[j] && j > 0 ) --j; if ( i >= j) break; swap(a,i,j); ++i; --j; swap(a, i, r); return i; With the pre-increment/decrement unary operators, scanning can be done just before testing within the test condition of the while loops, but this means the pointers should be offset -1 and +1 respectively at the start : so the algorithm then looks like:- partition (a, l, r ) { v=a[r]; // v is partition value, at a[r] i=l-1; j=r; while(true) { while( a[++i] < v ); while( v <= a[--j] && j > l ); if (i >= j) break; swap ( a, i, j); swap (a,i,r); return i; And the qsort algorithm is qsort( a, l, r) { if (l >= r) return ; p = partition(a, l, r) qsort(a , l, p-1) qsort( a, p+1, r) Finally, randomization of the partition element. random_partition (a,l,r) { p = random_int( r-l) + l; // median of a[l], a[p] , a[r] if (a[p] < a[l]) p =l; if ( a[r]< a[p]) p = r; swap(a, p, r); this can be called just before calling partition in qsort(). Shuffling an Array. This keeps data in during shuffle This records if an item has been shuffled Number of item in array itemNum = 0 while ( itemNum != lengthOf( inputArray) ){ usedItemArray[ itemNum ] = false None of the items have been shuffled itemNum = itemNum + 1 itemNum = 0 we'll use this again itemPosition = randdomNumber( 0 --- (lengthOf(inputArray) - 1 )) while( itemNum != lengthOf( inputArray ) ){ while( usedItemArray[ itemPosition ] != false ){ itemPosition = randdomNumber( 0 --- (lengthOf(inputArray) - 1 )) temporaryArray[ itemPosition ] = inputArray[ itemNum ] itemNum = itemNum + 1 inputArray = temporaryArray Equal Multivariate Polynomials. [TODO: as of now, there is no known deterministic polynomial time solution, but there is a randomized polytime solution. The canonical example used to be IsPrime, but a deterministic, polytime solution has been found.] Hash tables. Hashing relies on a hashcode function to randomly distribute keys to available slots evenly. In java , this is done in a fairly straight forward method of adding a moderate sized prime number (31 * 17 ) to a integer key , and then modulus by the size of the hash table. For string keys, the initial hash number is obtained by adding the products of each character's ordinal value multiplied by 31. The wikibook Data Structures/Hash Tables chapter covers the topic well. Skip Lists. [TODO: Talk about skips lists. The point is to show how randomization can sometimes make a structure easier to understand, compared to the complexity of balanced trees.] Dictionary or Map , is a general concept where a value is inserted under some key, and retrieved by the key. For instance, in some languages , the dictionary concept is built-in (Python), in others , it is in core libraries ( C++ S.T.L. , and Java standard collections library ). The library providing languages usually lets the programmer choose between a hash algorithm, or a balanced binary tree implementation (red-black trees). Recently, skip lists have been offered, because they offer advantages of being implemented to be highly concurrent for multiple threaded applications. Hashing is a technique that depends on the randomness of keys when passed through a hash function, to find a hash value that corresponds to an index into a linear table. Hashing works as fast as the hash function, but works well only if the inserted keys spread out evenly in the array, as any keys that hash to the same index , have to be deal with as a hash collision problem e.g. 
by keeping a linked list for collisions for each slot in the table, and iterating through the list to compare the full key of each key-value pair vs the search key. The disadvantage of hashing is that in-order traversal is not possible with this data structure. Binary trees can be used to represent dictionaries, and in-order traversal of binary trees is possible by visiting of nodes ( visit left child, visit current node, visit right child, recursively ). Binary trees can suffer from poor search when they are "unbalanced" e.g. the keys of key-value pairs that are inserted were inserted in ascending or descending order, so they effectively look like "linked lists" with no left child, and all right children. "self-balancing" binary trees can be done probabilistically (using randomness) or deterministically ( using child link coloring as red or black ) , through local 3-node tree rotation operations. A rotation is simply swapping a parent with a child node, but preserving order e.g. for a left child rotation, the left child's right child becomes the parent's left child, and the parent becomes the left child's right child. Red-black trees can be understood more easily if corresponding 2-3-4 trees are examined. A 2-3-4 tree is a tree where nodes can have 2 children, 3 children, or 4 children, with 3 children nodes having 2 keys between the 3 children, and 4 children-nodes having 3 keys between the 4 children. 4-nodes are actively split into 3 single key 2 -nodes, and the middle 2-node passed up to be merged with the parent node , which , if a one-key 2-node, becomes a two key 3-node; or if a two key 3-node, becomes a 4-node, which will be later split (on the way up). The act of splitting a three key 4-node is actually a re-balancing operation, that prevents a string of 3 nodes of grandparent, parent , child occurring , without a balancing rotation happening. 2-3-4 trees are a limited example of B-trees, which usually have enough nodes as to fit a physical disk block, to facilitate caching of very large indexes that can't fit in physical RAM ( which is much less common nowadays). A red-black tree is a binary tree representation of a 2-3-4 tree, where 3-nodes are modeled by a parent with one red child, and 4 -nodes modeled by a parent with two red children. Splitting of a 4-node is represented by the parent with 2 red children, flipping the red children to black, and itself into red. There is never a case where the parent is already red, because there also occurs balancing operations where if there is a grandparent with a red parent with a red child , the grandparent is rotated to be a child of the parent, and parent is made black and the grandparent is made red; this unifies with the previous flipping scenario, of a 4-node represented by 2 red children. Actually, it may be this standardization of 4-nodes with mandatory rotation of skewed or zigzag 4-nodes that results in re-balancing of the binary tree. A newer optimization is to left rotate any single right red child to a single left red child, so that only right rotation of left-skewed inline 4-nodes (3 red nodes inline ) would ever occur, simplifying the re-balancing code. Skip lists are modeled after single linked lists, except nodes are multilevel. Tall nodes are rarer, but the insert operation ensures nodes are connected at each level. Implementation of skip lists requires creating randomly high multilevel nodes, and then inserting them. 
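The height generation that the next paragraph describes can be sketched in a few lines of Python (an addition to the text; the promotion probability p = 0.5 and the max_level cap are illustrative choices, echoing the randint(0,10) < 5 test used in the skip list code later in this section):

import random
from collections import Counter

def random_level(p=0.5, max_level=16):
    # Keep promoting the node while a coin flip succeeds, so a node of height h
    # appears with probability about p**(h-1); max_level simply caps the height.
    level = 1
    while random.random() < p and level < max_level:
        level += 1
    return level

print(Counter(random_level() for _ in range(10000)))
# roughly 5000 nodes of height 1, 2500 of height 2, 1250 of height 3, and so on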
Nodes are created using iteration of a random function where high level node occurs later in an iteration, and are rarer, because the iteration has survived a number of random thresholds (e.g. 0.5, if the random is between 0 and 1). Insertion requires a temporary previous node array with the height of the generated inserting node. It is used to store the last pointer for a given level , which has a key less than the insertion key. The scanning begins at the head of the skip list, at highest level of the head node, and proceeds across until a node is found with a key higher than the insertion key, and the previous pointer stored in the temporary previous node array. Then the next lower level is scanned from that node , and so on, walking zig-zag down, until the lowest level is reached. Then a list insertion is done at each level of the temporary previous node array, so that the previous node's next node at each level is made the next node for that level for the inserting node, and the inserting node is made the previous node's next node. Search involves iterating from the highest level of the head node to the lowest level, and scanning along the next pointer for each level until a node greater than the search key is found, moving down to the next level , and proceeding with the scan, until the higher keyed node at the lowest level has been found, or the search key found. The creation of less frequent-when-taller , randomized height nodes, and the process of linking in all nodes at every level, is what gives skip lists their advantageous overall structure. a method of skip list implementation : implement lookahead single-linked linked list, then test , then transform to skip list implementation , then same test, then performance comparison. What follows is a implementation of skip lists in python. A single linked list looking at next node as always the current node, is implemented first, then the skip list implementation follows, attempting minimal modification of the former, and comparison helps clarify implementation. class LN: "a list node, so don't have to use dict objects as nodes" def __init__(self): self.k=None self.v = None self.next = None class single_list2: def __init__(self): self.h = LN() def insert(self, k, v): prev = self.h while not prev.next is None and k < prev.next.k : prev = prev.next n = LN() n.k, n.v = k, v n.next = prev.next prev.next = n def show(self): prev = self.h while not prev.next is None: prev = prev.next print prev.k, prev.v, ' ' def find (self,k): prev = self.h while not prev.next is None and k < prev.next.k: prev = prev.next if prev.next is None: return None return prev.next.k import random class SkipList3: def __init__(self): self.h = LN() self.h.next = [None] def insert( self, k , v): ht = 1 while random.randint(0,10) < 5: ht +=1 if ht > len(self.h.next) : self.h.next.extend( [None] * (ht - len(self.h.next) ) ) prev = self.h prev_list = [self.h] * len(self.h.next) # instead of just prev.next in the single linked list, each level i has a prev.next for i in xrange( len(self.h.next)-1, -1, -1): while not prev.next[i] is None and prev.next[i].k > k: prev = prev.next[i] #record the previous pointer for each level prev_list[i] = prev n = LN() n.k,n.v = k,v # create the next pointers to the height of the node for the current node. n.next = [None] * ht #print "prev list is ", prev_list # instead of just linking in one node in the single-linked list , ie. 
n.next = prev.next, prev.next =n # do it for each level of n.next using n.next[i] and prev_list[i].next[i] # there may be a different prev node for each level, but the same level must be linked, # therefore the [i] index occurs twice in prev_list[i].next[i]. for i in xrange(0, ht): n.next[i] = prev_list[i].next[i] prev_list[i].next[i] = n #print "self.h ", self.h def show(self): #print self.h prev = self.h while not prev.next[0] is None: print prev.next[0].k, prev.next[0].v prev = prev.next[0] def find(self, k): prev = self.h h = len(self.h.next) #print "height ", h for i in xrange( h-1, -1, -1): while not prev.next[i] is None and prev.next[i].k > k: prev = prev.next[i] #if prev.next[i] <> None: #print "i, k, prev.next[i].k and .v", i, k, prev.next[i].k, prev.next[i].v if prev.next[i] <> None and prev.next[i].k == k: return prev.next[i].v if pref.next[i] is None: return None return prev.next[i].k def clear(self): self.h= LN() self.h.next = [None] if __name__ == "__main__": #l = single_list2() l = SkipList3() test_dat = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' pairs = enumerate(test_dat) m = [ (x,y) for x,y in pairs ] while len(m) > 0: i = random.randint(0,len(m)-1) print "inserting ", m[i] l.insert(m[i][0], m[i][1]) del m[i] # l.insert( 3, 'C') # l.insert(2, 'B') # l.insert(4, 'D') # l.insert(1, 'A') l.show() n = int(raw_input("How many elements to test?") ) if n <0: n = -n l.clear() import time l2 = [ x for x in xrange(0, n)] random.shuffle(l2) for x in l2: l.insert(x , x) l.show() print print "finding.." f = 0 t1 = time.time() nf = [] for x in l2: if l.find(x) == x: f += 1 else: nf.append(x) t2 = time.time() print "time", t2 - t1 td1 = t2 - t1 print "found ", f print "didn't find", nf dnf = [] for x in nf: tu = (x,l.find(x)) dnf.append(tu) print "find again ", dnf sl = single_list2() for x in l2: sl.insert(x,x) print "finding.." f = 0 t1 = time.time() for x in l2: if sl.find(x) == x: f += 1 t2 = time.time() print "time", t2 - t1 print "found ", f td2 = t2 - t1 print "factor difference time", td2/td1 Role of Randomness. The idea of making higher nodes geometrically randomly less common, means there are less keys to compare with the higher the level of comparison, and since these are randomly selected, this should get rid of problems of degenerate input that makes it necessary to do tree balancing in tree algorithms. Since the higher level list have more widely separated elements, but the search algorithm moves down a level after each search terminates at a level, the higher levels help "skip" over the need to search earlier elements on lower lists. Because there are multiple levels of skipping, it becomes less likely that a meagre skip at a higher level won't be compensated by better skips at lower levels, and Pugh claims O(logN) performance overall. Conceptually , is it easier to understand than balancing trees and hence easier to implement ? The development of ideas from binary trees, balanced binary trees, 2-3 trees, red-black trees, and B-trees make a stronger conceptual network but is progressive in development, so arguably, once red-black trees are understood, they have more conceptual context to aid memory , or refresh of memory. concurrent access application. Apart from using randomization to enhance a basic memory structure of linked lists, skip lists can also be extended as a global data structure used in a multiprocessor application. See supplementary topic at the end of the chapter. Idea for an exercise. 
Replace the Linux completely fair scheduler red-black tree implementation with a skip list , and see how your brand of Linux runs after recompiling. Treaps. A treap is a two keyed binary tree, that uses a second randomly generated key and the previously discussed tree operation of parent-child rotation to randomly rotate the tree so that overall, a balanced tree is produced. Recall that binary trees work by having all nodes in the left subtree small than a given node, and all nodes in a right subtree greater. Also recall that node rotation does not break this order ( some people call it an invariant), but changes the relationship of parent and child, so that if the parent was smaller than a right child, then the parent becomes the left child of the formerly right child. The idea of a tree-heap or treap, is that a binary heap relationship is maintained between parents and child, and that is a parent node has higher priority than its children, which is not the same as the left , right order of keys in a binary tree, and hence a recently inserted leaf node in a binary tree which happens to have a high random priority, can be rotated so it is relatively higher in the tree, having no parent with a lower priority. A treap is an alternative to both red-black trees, and skip lists, as a self-balancing sorted storage structure. java example of treap implementation. // Treap example: 2014 SJT, copyleft GNU . import java.util.Iterator; import java.util.LinkedList; import java.util.Random; public class Treap1<K extends Comparable<K>, V> { public Treap1(boolean test) { this.test = test; boolean test = false; static Random random = new Random(System.currentTimeMillis()); class TreapNode { int priority = 0; K k; V val; TreapNode left, right; public TreapNode() { if (!test) { priority = random.nextInt(); TreapNode root = null; void insert(K k, V val) { root = insert(k, val, root); TreapNode insert(K k, V val, TreapNode node) { TreapNode node2 = new TreapNode(); node2.k = k; node2.val = val; if (node == null) { node = node2; } else if (k.compareTo(node.k) < 0) { node.left = insert(k, val, node.left); } else { node.right = insert(k, val, node.right); if (node.left != null && node.left.priority > node.priority) { // right rotate (rotate left node up, current node becomes right child ) TreapNode tmp = node.left; node.left = node.left.right; tmp.right = node; node = tmp; } else if (node.right != null && node.right.priority > node.priority) { // left rotate (rotate right node up , current node becomes left child) TreapNode tmp = node.right; node.right = node.right.left; tmp.left = node; node = tmp; return node; V find(K k) { return findNode(k, root); private V findNode(K k, Treap1<K, V>.TreapNode node) { // TODO Auto-generated method stub if (node == null) return null; if (k.compareTo(node.k) < 0) { return findNode(k, node.left); } else if (k.compareTo(node.k) > 0) { return findNode(k, node.right); } else { return node.val; public static void main(String[] args) { LinkedList<Integer> dat = new LinkedList<Integer>(); for (int i = 0; i < 15000; ++i) { dat.add(i); testNumbers(dat, true); // no random priority balancing testNumbers(dat, false); private static void testNumbers(LinkedList<Integer> dat, boolean test) { Treap1<Integer, Integer> tree= new Treap1<>(test); for (Integer integer : dat) { tree.insert(integer, integer); long t1 = System.currentTimeMillis(); Iterator<Integer> iter = dat.iterator(); int found = 0; while (iter.hasNext()) { Integer j = desc.next(); Integer i = tree.find(j); if (j.equals(i)) { 
++found; long t2 = System.currentTimeMillis(); System.out.println("found = " + found + " in " + (t2 - t1)); Treaps compared and contrasted to Splay trees. "Splay trees" are similar to treaps in that rotation is used to bring a higher priority node to the top without changing the main key order, except instead of using a random key for priority, the last accessed node is rotated to the root of the tree, so that more frequently accessed nodes will be near the top. This means that in treaps, inserted nodes will only rotate upto the priority given by their random priority key, whereas in splay trees, the inserted node is rotated to the root, and every search in a splay tree will result in a re-balancing, but not so in a treap. Derandomization. [TODO: Deterministic algorithms for Quicksort exist that perform as well as quicksort in the average case and are guaranteed to perform at least that well in all cases. Best of all, no randomization is needed. Also in the discussion should be some perspective on using randomization: some randomized algorithms give you better confidence probabilities than the actual hardware itself! (e.g. sunspots can randomly flip bits in hardware, causing failure, which is a risk we take quite often)] [Main idea: Look at all blocks of 5 elements, and pick the median (O(1) to pick), put all medians into an array (O(n)), recursively pick the medians of that array, repeat until you have < 5 elements in the array. This recursive median constructing of every five elements takes time T(n)=T(n/5) + O(n), which by the master theorem is O(n). Thus, in O(n) we can find the right pivot. Need to show that this pivot is sufficiently good so that we're still O(n log n) no matter what the input is. This version of quicksort doesn't need rand, and it never performs poorly. Still need to show that element picked out is sufficiently good for a pivot.] Supplementary Topic: skip lists and multiprocessor algorithms. Multiprocessor hardware provides CAS ( compare-and-set) or CMPEXCHG( compare-and-exchange)(intel manual 253666.pdf, p 3-188) atomic operations, where an expected value is loaded into the accumulator register, which is compared to a target memory location's contents, and if the same, a source memory location's contents is loaded into the target memories contents, and the zero flag set, otherwise, if different, the target memory's contents is returned in the accumulator, and the zero flag is unset, signifying , for instance, a lock contention. In the intel architecture, a LOCK instruction is issued before CMPEXCHG , which either locks the cache from concurrent access if the memory location is being cached, or locks a shared memory location if not in the cache , for the next instruction. The CMPEXCHG can be used to implement locking, where spinlocks , e.g. retrying until the zero flag is set, are simplest in design. Lockless design increases efficiency by avoiding spinning waiting for a lock . The java standard library has an implementation of non-blocking concurrent skiplists, based on a paper titled "a pragmatic implementation of non-blocking single-linked lists". The skip list implementation is an extension of the lock-free single-linked list , of which a description follows :- The insert operation is : X -> Y insert N , N -> Y, X -> N ; expected result is X -> N -> Y . A race condition is if M is inserting between X and Y and M completes first , then N completes, so the situation is X -> N -> Y <- M M is not in the list. 
The CAS operation avoids this, because a copy of -> Y is checked before updating X -> , against the current value of X -> . If N gets to update X -> first, then when M tries to update X -> , its copy of X -> Y , which it got before doing M -> Y , does not match X -> N , so CAS returns non-zero flag set (recall that CAS requires the user to load the accumulator with the expected value, the target location's current value, and then atomically updates the target location with a source location if the target location still contains the accumulator's value). The process that tried to insert M then can retry the insertion after X, but now the CAS checks ->N is X's next pointer, so after retry, X->M->N->Y , and neither insertions are lost. If M updates X-> first, N 's copy of X->Y does not match X -> M , so the CAS will fail here too, and the above retry of the process inserting N, would have the serialized result of X ->N -> M -> Y . The delete operation depends on a separate 'logical' deletion step, before 'physical' deletion. 'Logical' deletion involves a CAS change of the next pointer into a 'marked' pointer. The java implementation substitutes with an atomic insertion of a proxy marker node to the next node. This prevents future insertions from inserting after a node which has a next pointer 'marked' , making the latter node 'logically' deleted. The insert operation relies on another function , "search" , returning 2 unmarked , at the time of the invocation, node pointers : the first pointing to a node , whose next pointer is equal to the second. The first node is the node before the insertion point. The "insert" CAS operation checks that the current next pointer of the first node, corresponds to the unmarked reference of the second, so will fail 'logically' if the first node's "next" pointer has become marked "after" the call to the "search" function above, because the first node has been concurrently logically deleted. "This meets the aim to prevent a insertion occurring concurrently after a node has been deleted." If the insert operation fails the CAS of the previous node's next pointer, the search for the insertion point starts from the start of the entire list again, since a new unmarked previous node needs to be found, and there are no previous node pointers as the list nodes are singly-linked. The delete operation outlined above, also relies on the "search" operation returning two "unmarked" nodes, and the two CAS operations in delete, one for logical deletion or marking of the second pointer's next pointer, and the other for physical deletion by making the first node's next pointer point to the second node's unmarked next pointer. The first CAS of delete happens only after a check that the copy of the original second nodes' next pointer is unmarked, and ensures that only one concurrent delete succeeds which reads the second node's current next pointer as being unmarked as well. The second CAS checks that the previous node hasn't been logically deleted because its next pointer is not the same as the unmarked pointer to the current second node returned by the search function, so only an active previous node's next pointer is 'physically' updated to a copy of the original unmarked next pointer of the node being deleted ( whose next pointer is already marked by the first CAS). If the second CAS fails, then the previous node is logically deleted and its next pointer is marked, and so is the current node's next pointer. 
A call to "search" function again, tidies things up, because in endeavouring to find the key of the current node and return adjacent unmarked previous and current pointers, and while doing so, it truncates strings of logically deleted nodes . Lock-free programming issues. Starvation could be possible , as failed inserts have to restart from the front of the list. Wait-freedom is a concept where the algorithm has all threads safe from starvation. The ABA problem exists, where a garbage collector recycles the pointer A , but the address is loaded differently, and the pointer is re-added at a point where a check is done for A by another thread that read A and is doing a CAS to check A has not changed ; the address is the same and is unmarked, but the contents of A has changed.
Algorithms/Backtracking. Backtracking is a general algorithmic technique that considers searching every possible combination in order to solve an optimization problem. Backtracking is also known as depth-first search or branch and bound. By inserting more knowledge of the problem, the search tree can be pruned to avoid considering cases that don't look promising. While backtracking is useful for hard problems to which we do not know more efficient solutions, it is a poor solution for the everyday problems that other techniques are much better at solving. However, dynamic programming and greedy algorithms can be thought of as optimizations to backtracking, so the general technique behind backtracking is useful for understanding these more advanced concepts. Learning and understanding backtracking techniques first provides a good stepping stone to these more advanced techniques because you won't have to learn several new concepts all at once. This methodology is generic enough that it can be applied to most problems. However, even when taking care to improve a backtracking algorithm, it will probably still take exponential time rather than polynomial time. Additionally, exact time analysis of backtracking algorithms can be extremely difficult: instead, simpler upperbounds that may not be tight are given. Longest Common Subsequence (exhaustive version). The LCS problem is similar to what the Unix "diff" program does. The diff command in Unix takes two text files, "A" and "B", as input and outputs the differences line-by-line from "A" and "B". For example, diff can show you that lines missing from "A" have been added to "B", and lines present in "A" have been removed from "B". The goal is to get a list of additions and removals that could be used to transform "A" to "B". An overly conservative solution to the problem would say that all lines from "A" were removed, and that all lines from "B" were added. While this would solve the problem in a crude sense, we are concerned with the minimal number of additions and removals to achieve a correct transformation. Consider how you may implement a solution to this problem yourself. The LCS problem, instead of dealing with lines in text files, is concerned with finding common items between two different arrays. For example, We want to find the longest subsequence possible of items that are found in both "a" and "b" in the same order. The LCS of "a" and "b" is Now consider two more sequences: Here, there are two longest common subsequences of "c" and "d": Note that is "not" a common subsequence, because it is only a valid subsequence of "d" and not "c" (because "c" has 8 before the 32). Thus, we can conclude that for some cases, solutions to the LCS problem are not unique. If we had more information about the sequences available we might prefer one subsequence to another: for example, if the sequences were lines of text in computer programs, we might choose the subsequences that would keep function definitions or paired comment delimiters intact (instead of choosing delimiters that were not paired in the syntax). On the top level, our problem is to implement the following function // "lcs -- returns the longest common subsequence of a and b" function lcs(array "a", array "b"): array which takes in two arrays as input and outputs the subsequence array. How do you solve this problem? You could start by noticing that if the two sequences start with the same word, then the longest common subsequence always contains that word. 
You can automatically put that word on your list, and you would have just reduced the problem to finding the longest common subset of the rest of the two lists. Thus, the problem was made smaller, which is good because it shows progress was made. But if the two lists do not begin with the same word, then one, or both, of the first element in "a" or the first element in "b" do not belong in the longest common subsequence. But yet, one of them might be. How do you determine which one, if any, to add? The solution can be thought in terms of the back tracking methodology: Try it both ways and see! Either way, the two sub-problems are manipulating smaller lists, so you know that the recursion will eventually terminate. Whichever trial results in the longer common subsequence is the winner. Instead of "throwing it away" by deleting the item from the array we use array slices. For example, the slice represents the elements of the array as an array itself. If your language doesn't support slices you'll have to pass beginning and/or ending indices along with the full array. Here, the slices are only of the form which, when using 0 as the index to the first element in the array, results in an array slice that doesn't have the 0th element. (Thus, a non-sliced version of this algorithm would only need to pass the beginning valid index around instead, and that value would have to be subtracted from the complete array's length to get the pseudo-slice's length.) // "lcs -- returns the longest common subsequence of a and b" function lcs(array "a", array "b"): array if "a".length == 0 OR "b".length == 0: "// if we're at the end of either list, then the lcs is empty" else-if "a"[0] == "b"[0]: "// if the start element is the same in both, then it is on the lcs," "// so we just recurse on the remainder of both lists." return append(new array {"a"[0]}, lcs("a"[1..], "b"[1..])) else "// we don't know which list we should discard from. Try both ways," "// pick whichever is better." let "discard_a" := lcs("a"[1..], "b") let "discard_b" := lcs("a", "b"[1..]) if "discard_a".length > "discard_b".length: let "result" := "discard_a" else let "result" := "discard_b" fi return "result" fi end Shortest Path Problem (exhaustive version). To be improved as Dijkstra's algorithm in a later section. Bounding Searches. If you've already found something "better" and you're on a branch that will never be as good as the one you already saw, you can terminate that branch early. (Example to use: sum of numbers beginning with 1 2, and then each number following is a sum of any of the numbers plus the last number. Show performance improvements.) Constrained 3-Coloring. This problem doesn't have immediate self-similarity, so the problem first needs to be generalized. Methodology: If there's no self-similarity, try to generalize the problem until it has it. Traveling Salesperson Problem. Here, backtracking is one of the best solutions known.
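As a concrete illustration of both the bounding idea above and the travelling salesperson remark, here is a small Python sketch (an addition to the text) of a backtracking tour search that abandons a branch as soon as its partial cost already reaches the best complete tour found so far; the 4-city distance matrix is made up for the example:

def tsp(dist):
    # Backtracking search over tours starting at city 0, with a simple cost bound.
    n = len(dist)
    best_cost = float("inf")
    best_tour = None

    def extend(tour, cost, remaining):
        nonlocal best_cost, best_tour
        if cost >= best_cost:                        # bound: this branch cannot beat the best tour
            return
        if not remaining:
            total = cost + dist[tour[-1]][tour[0]]   # close the tour back to the start
            if total < best_cost:
                best_cost, best_tour = total, tour[:]
            return
        for city in list(remaining):                 # try every remaining city next
            remaining.remove(city)
            tour.append(city)
            extend(tour, cost + dist[tour[-2]][city], remaining)
            tour.pop()
            remaining.add(city)

    extend([0], 0, set(range(1, n)))
    return best_cost, best_tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp(dist))   # best cost is 18, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0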
Algorithms/Dynamic Programming. Dynamic programming can be thought of as an optimization technique for particular classes of backtracking algorithms where subproblems are repeatedly solved. Note that the term "dynamic" in dynamic programming should not be confused with dynamic programming languages, like Scheme or Lisp. Nor should the term "programming" be confused with the act of writing computer programs. In the context of algorithms, dynamic programming always refers to the technique of filling in a table with values computed from other table values. (It's dynamic because the values in the table are filled in by the algorithm based on other values of the table, and it's programming in the sense of setting things in a table, like how television programming is concerned with when to broadcast what shows.) Fibonacci Numbers. Before presenting the dynamic programming technique, it will be useful to first show a related technique, called memoization, on a toy example: The Fibonacci numbers. What we want is a routine to compute the "n"th Fibonacci number: "// fib -- compute Fibonacci(n)" function fib(integer "n"): integer By definition, the "n"th Fibonacci number, denoted formula_1 is How would one create a good algorithm for finding the nth Fibonacci-number? Let's begin with the naive algorithm, which codes the mathematical definition: "// fib -- compute Fibonacci(n)" function fib(integer "n"): integer assert (n >= 0) if "n" == 0: return 0 fi if "n" == 1: return 1 fi return fib("n" - 1) + fib("n" - 2) end Note that this is a toy example because there is already a mathematically closed form for formula_1: where: This latter equation is known as the Golden Ratio. Thus, a program could efficiently calculate formula_1 for even very large "n". However, it's instructive to understand what's so inefficient about the current algorithm. To analyze the running time of codice_1 we should look at a call tree for something even as small as the sixth Fibonacci number: Every leaf of the call tree has the value 0 or 1, and the sum of these values is the final result. So, for any "n," the number of leaves in the call tree is actually formula_1 itself! The closed form thus tells us that the number of leaves in codice_2 is approximately equal to (Note the algebraic manipulation used above to make the base of the exponent the number 2.) This means that there are far too many leaves, particularly considering the repeated patterns found in the call tree above. One optimization we can make is to save a result in a table once it's already been computed, so that the same result needs to be computed only once. The optimization process is called memoization and conforms to the following methodology: Consider the solution presented in the backtracking chapter for the Longest Common Subsequence problem. In the execution of that algorithm, many common subproblems were computed repeatedly. As an optimization, we can compute these subproblems once and then store the result to read back later. A recursive memoization algorithm can be turned "bottom-up" into an iterative algorithm that fills in a table of solutions to subproblems. Some of the subproblems solved might not be needed by the end result (and that is where dynamic programming differs from memoization), but dynamic programming can be very efficient because the iterative version can better use the cache and have less call overhead. Asymptotically, dynamic programming and memoization have the same complexity. So how would a fibonacci program using memoization work? 
Consider the following program ("f"["n"] contains the "n"th Fibonacci-number if has been calculated, -1 otherwise): function fib(integer "n"): integer if "n" == 0 or n == 1: return "n" else-if "f"["n"] != -1: return "f"["n"] else "f"["n"] = fib("n" - 1) + fib("n" - 2) return "f"["n"] fi end The code should be pretty obvious. If the value of fib(n) already has been calculated it's stored in f[n] and then returned instead of calculating it again. That means all the copies of the sub-call trees are removed from the calculation. The values in the blue boxes are values that already have been calculated and the calls can thus be skipped. It is thus a lot faster than the straight-forward recursive algorithm. Since every value less than n is calculated once, and only once, the first time you execute it, the asymptotic running time is formula_11. Any other calls to it will take formula_12 since the values have been precalculated (assuming each subsequent call's argument is less than n). The algorithm does consume a lot of memory. When we calculate fib("n"), the values fib(0) to fib(n) are stored in main memory. Can this be improved? Yes it can, although the formula_13 running time of subsequent calls are obviously lost since the values aren't stored. Since the value of fib("n") only depends on fib("n-1") and fib("n-2") we can discard the other values by going bottom-up. If we want to calculate fib("n"), we first calculate fib(2) = fib(0) + fib(1). Then we can calculate fib(3) by adding fib(1) and fib(2). After that, fib(0) and fib(1) can be discarded, since we don't need them to calculate any more values. From fib(2) and fib(3) we calculate fib(4) and discard fib(2), then we calculate fib(5) and discard fib(3), etc. etc. The code goes something like this: function fib(integer "n"): integer if "n" == 0 or n == 1: return "n" fi let "u" := 0 let "v" := 1 for "i" := 2 to "n": let "t" := "u" + "v" "u" := "v" "v" := "t" repeat return "v" end We can modify the code to store the values in an array for subsequent calls, but the point is that we don't "have" to. This method is typical for dynamic programming. First we identify what subproblems need to be solved in order to solve the entire problem, and then we calculate the values bottom-up using an iterative process. Longest Common Subsequence (DP version). The problem of Longest Common Subsequence (LCS) involves comparing two given sequences of characters, to find the longest subsequence common to both the sequences. Note that 'subsequence' is not 'substring' - the characters appearing in the subsequence need not be consecutive in either of the sequences; however, the individual characters do need to be in same order as appearing in both sequences. Given two sequences, namely, we defineː as a subsequence of "X", if all the characters "z1, z2, z3, ..., zk", appear in "X", and they appear in a strictly increasing sequence; i.e. "z1" appears in "X" before "z2", which in turn appears before "z3", and so on. Once again, it is not necessary for all the characters "z1, z2, z3, ..., zk" to be consecutive; they must only appear in the same order in "X" as they are in "Z". And thus, we can define "Z = {z1, z2, z3, ..., zk}" as a common subseqeunce of "X" and "Y", if "Z" appears as a subsequence in both "X" and "Y". 
The backtracking solution of LCS involves enumerating all possible subsequences of "X", and check each subsequence to see whether it is also a subsequence of "Y", keeping track of the longest subsequence we find [see "Longest Common Subsequence (exhaustive version)"]. Since "X" has "m" characters in it, this leads to "2m" possible combinations. This approach, thus, takes exponential time and is impractical for long sequences. Matrix Chain Multiplication. Suppose that you need to multiply a series of formula_14 matrices formula_15 together to form a product matrix formula_16: This will require formula_18 multiplications, but what is the fastest way we can form this product? Matrix multiplication is associative, that is, for any formula_20, and so we have some choice in what multiplication we perform first. (Note that matrix multiplication is "not" commutative, that is, it does not hold in general that formula_21.) Because you can only multiply two matrices at a time the product formula_22 can be paranthesized in these ways: Two matrices formula_28 and formula_29 can be multiplied if the number of columns in formula_28 equals the number of rows in formula_29. The number of rows in their product will equal the number rows in formula_28 and the number of columns will equal the number of columns in formula_29. That is, if the dimensions of formula_28 is formula_35 and formula_29 has dimensions formula_37 their product will have dimensions formula_38. To multiply two matrices with each other we use a function called matrix-multiply that takes two matrices and returns their product. We will leave implementation of this function alone for the moment as it is not the focus of this chapter (how to multiply two matrices in the fastest way has been under intensive study for several years [TODO: propose this topic for the "Advanced" book]). The time this function takes to multiply two matrices of size formula_35 and formula_37 is proportional to the number of scalar multiplications, which is proportional to formula_41. Thus, paranthezation matters: Say that we have three matrices formula_28, formula_29 and formula_44. formula_28 has dimensions formula_46, formula_29 has dimensions formula_48 and formula_44 has dimensions formula_50. Let's paranthezise them in the two possible ways and see which way requires the least amount of multiplications. The two ways are To form the product in the first way requires 75000 scalar multiplications (5*100*100=50000 to form product formula_53 and another 5*100*50=25000 for the last multiplications.) This might seem like a lot, but in comparison to the 525000 scalar multiplications required by the second parenthesization (50*100*100=500000 plus 5*50*100=25000) it is miniscule! You can see why determining the parenthesization is important: imagine what would happen if we needed to multiply 50 matrices! Forming a Recursive Solution. Note that we concentrate on finding a how many scalar multiplications are needed instead of the actual order. This is because once we have found a working algorithm to find the amount it is trivial to create an algorithm for the actual parenthesization. It will, however, be discussed in the end. So how would an algorithm for the optimum parenthesization look? By the chapter title you might expect that a dynamic programming method is in order (not to give the answer away or anything). So how would a dynamic programming method work? 
Because dynamic programming algorithms are based on optimal substructure, what would the optimal substructure in this problem be? Suppose that the optimal way to parenthesize splits the product at formula_55: Then the optimal solution contains the optimal solutions to the two subproblems That is, just in accordance with the fundamental principle of dynamic programming, the solution to the problem depends on the solution of smaller sub-problems. Let's say that it takes formula_59 scalar multiplications to multiply matrices formula_60 and formula_61, and formula_62 is the number of scalar multiplications to be performed in an optimal parenthesization of the matrices formula_63. The definition of formula_62 is the first step toward a solution. When formula_65, the formulation is trivial; it is just formula_66. But what is it when the distance is larger? Using the observation above, we can derive a formulation. Suppose an optimal solution to the problem divides the matrices at matrices k and k+1 (i.e. formula_67) then the number of scalar multiplications are. That is, the amount of time to form the first product, the amount of time it takes to form the second product, and the amount of time it takes to multiply them together. But what is this optimal value k? The answer is, of course, the value that makes the above formula assume its minimum value. We can thus form the complete definition for the function: A straight-forward recursive solution to this would look something like this "(the language is Wikicode)": function f("m", "n") { if "m" == "n" return 0 let "minCost" := formula_70 for "k" := "m" to "n" - 1 { v := f("m", "k") + f("k" + 1, "n") + "c"("k") if "v" < "minCost" "minCost" := "v" return "minCost" This rather simple solution is, unfortunately, not a very good one. It spends mountains of time recomputing data and its running time is exponential. Using the same adaptation as above we get: function f("m", "n") { if "m" == "n" return 0 else-if "f"["m,n"] != -1: return "f"["m,n"] fi let "minCost" := formula_70 for "k" := "m" to "n" - 1 { v := f("m", "k") + f("k" + 1, "n") + "c"("k") if "v" < "minCost" "minCost" := "v" "f"["m,n"]=minCost return "minCost" Parsing Any Context-Free Grammar. Note that special types of context-free grammars can be parsed much more efficiently than this technique, but in terms of generality, the DP method is the only way to go.
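As with the Fibonacci example, the memoized recursion can be turned bottom-up. The following Python sketch (an addition to the text) fills a table m[i][j] with the minimum number of scalar multiplications for the chain A_i..A_j, given a dimension list p where A_i is a p[i-1] x p[i] matrix, and keeps a second table s of split points so the parenthesization itself can be recovered; the 5x100, 100x100, 100x50 example is the one worked through above:

def matrix_chain_order(p):
    # p[i-1] x p[i] is the dimension of matrix A_i, for i = 1..n.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]    # m[i][j]: fewest scalar multiplications for A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]    # s[i][j]: best split point k
    for length in range(2, n + 1):               # solve shorter chains before longer ones
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def parenthesize(s, i, j):
    # Rebuild the optimal parenthesization from the split-point table.
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + " " + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([5, 100, 100, 50])     # A1 is 5x100, A2 is 100x100, A3 is 100x50
print(m[1][3], parenthesize(s, 1, 3))            # 75000 ((A1 A2) A3)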
3,335
Algorithms/Greedy Algorithms. In the backtracking algorithms we looked at, we saw algorithms that found decision points and recursed over all options from that decision point. A greedy algorithm can be thought of as a backtracking algorithm where at each decision point "the best" option is already known and thus can be picked without having to recurse over any of the alternative options. The name "greedy" comes from the fact that the algorithms make decisions based on a single criterion, instead of a global analysis that would take into account the decision's effect on further steps. As we will see, such a backtracking analysis will be unnecessary in the case of greedy algorithms, so it is not greedy in the sense of causing harm for only short-term gain. Unlike backtracking algorithms, greedy algorithms can't be made for every problem. Not every problem is "solvable" using greedy algorithms. Viewing the finding solution to an optimization problem as a hill climbing problem greedy algorithms can be used for only those hills where at every point taking the steepest step would lead to the peak always. Greedy algorithms tend to be very efficient and can be implemented in a relatively straightforward fashion. Many a times in O(n) complexity as there would be a single choice at every point. However, most attempts at creating a correct greedy algorithm fail unless a precise proof of the algorithm's correctness is first demonstrated. When a greedy strategy fails to produce optimal results on all inputs, we instead refer to it as a heuristic instead of an algorithm. Heuristics can be useful when speed is more important than exact results (for example, when "good enough" results are sufficient). Event Scheduling Problem. The first problem we'll look at that can be solved with a greedy algorithm is the event scheduling problem. We are given a set of events that have a start time and finish time, and we need to produce a subset of these events such that no events intersect each other (that is, having overlapping times), and that we have the maximum number of events scheduled as possible. Here is a formal statement of the problem: We first begin with a backtracking solution to the problem: "// event-schedule -- schedule as many non-conflicting events as possible" function event-schedule("events" array of "s"[1.."n"], "j"[1.."n"]): set if "n" == 0: return formula_7 fi if "n" == 1: return {"events"[1]} fi let "event" := "events"[1] let "S1" := union(event-schedule("events" - set of conflicting events), "event") let "S2" := event-schedule("events" - {"event"}) if "S1".size() >= "S2".size(): return "S1" else return "S2" fi end The above algorithm will faithfully find the largest set of non-conflicting events. It brushes aside details of how the set is computed, but it would require formula_8 time. Because the algorithm makes two recursive calls on itself, each with an argument of size formula_9, and because removing conflicts takes linear time, a recurrence for the time this algorithm takes is: which is formula_11. But suppose instead of picking just the first element in the array we used some other criterion. The aim is to just pick the "right" one so that we wouldn't need two recursive calls. First, let's consider the greedy strategy of picking the shortest events first, until we can add no more events without conflicts. The idea here is that the shortest events would likely interfere less than other events. There are scenarios were picking the shortest event first produces the optimal result. 
However, here's a scenario where that strategy is sub-optimal: Above, the optimal solution is to pick event A and C, instead of just B alone. Perhaps instead of the shortest event we should pick the events that have the least number of conflicts. This strategy seems more direct, but it fails in this scenario: Above, we can maximize the number of events by picking A, B, C, D, and E. However, the events with the least conflicts are 6, 2 and 7, 3. But picking one of 6, 2 and one of 7, 3 means that we cannot pick B, C and D, which includes three events instead of just two. = Longest Path solution to critical path scheduling of jobs. Construction with dependency constraints but concurrency can use critical path determination to find minimum time feasible, which is equivalent to a longest path in a directed acyclic graph problem. By using relaxation and breath first search, the shortest path can be the longest path by negating weights(time constraint), finding solution, then restoring the positive weights. Dijkstra's Shortest Path Algorithm. With two (high-level, pseudocode) transformations, Dijsktra's algorithm can be derived from the much less efficient backtracking algorithm. The trick here is to prove the transformations maintain correctness, but that's the whole insight into Dijkstra's algorithm anyway. [TODO: important to note the paradox that to solve this problem it's easier to solve a more-general version. That is, shortest path from s to all nodes, not just to t. Worthy of its own colored box.] To see the workings of Dijkstra's Shortest Path Algorithm, take an example: There is a start and end node, with 2 paths between them ; one path has cost 30 on first hop, then 10 on last hop to the target node, with total cost 40. Another path cost 10 on first hop, 10 on second hop, and 40 on last hop, with total cost 60. The start node is given distance zero so it can be at the front of a shortest distance queue, all the other nodes are given infinity or a large number e.g. 32767 . This makes the start node the first current node in the queue. With each iteration, the current node is the first node of a shortest path queue. It looks at all nodes adjacent to the current node; For the case of the start node, in the first path it will find a node of distance 30, and in the second path, an adjacent node of distance 10. The current nodes distance, which is zero at the beginning, is added to distances of the adjacent nodes, and the distances from the start node of each node are updated, so the nodes will be 30+0 = 30 in the 1st path, and 10+0=10 in the 2nd path. Importantly, also updated is a previous pointer attribute for each node, so each node will point back to the current node, which is the start node for these two nodes. Each node's priority is updated in the priority queue using the new distance. That ends one iteration. The current node was removed from the queue before examining its adjacent nodes. In the next iteration, the front of the queue will be the node in the second path of distance 10, and it has only one adjacent node of distance 10, and that adjacent node will distance will be updated from 32767 to 10 (the current node distance) + 10 ( the distance from the current node) = 20. In the next iteration, the second path node of cost 20 will be examined, and it has one adjacent hop of 40 to the target node, and the target nodes distance is updated from 32767 to 20 + 40 = 60 . The target node has its priority updated. 
In the next iteration, the shortest path node will be the first path node of cost 30, and the target node has not been yet removed from the queue. It is also adjacent to the target node, with the total distance cost of 30 + 10 = 40. Since 40 is less than 60, the previous calculated distance of the target node, the target node distance is updated to 40, and the previous pointer of the target node is updated to the node on the first path. In the final iteration, the shortest path node is the target node, and the loop exits. Looking at the previous pointers starting with the target node, a shortest path can be reverse constructed as a list to the start node. Given the above example, what kind of data structures are needed for the nodes and the algorithm ? class Node : def __init__(self, label, distance = 32767 ): # a bug in constructor, uses a shared map initializer #, adjacency_distance_map = {} ): self.label = label self.adjacent = {} # this is an adjacency map, with keys nodes, and values the adjacent distance self.distance = distance # this is the updated distance from the start node, used as the node's priority # default distance is 32767 self.shortest_previous = None #this the last shortest distance adjacent node # the logic is that the last adjacent distance added is recorded, for any distances of the same node added def add_adjacent(self, local_distance, node): self.adjacent[node]=local_distance print "adjacency to ", self.label, " of ", self.adjacent[node], " to ", \ node.label def get_adjacent(self) : return self.adjacent.iteritems() def update_shortest( self, node): new_distance = node.adjacent[self] + node.distance #DEBUG print "for node ", node.label, " updating ", self.label, \ " with distance ", node.distance, \ " and adjacent distance ", node.adjacent[self] updated = False # node's adjacency map gives the adjacent distance for this node # the new distance for the path to this (self)node is the adjacent distance plus the other node's distance if new_distance < self.distance : # if it is the shortest distance then record the distance, and make the previous node that node self.distance = new_distance self.shortest_previous= node updated = True return updated MAX_IN_PQ = 100000 class PQ: def __init__(self, sign = -1 ): self.q = [None ] * MAX_IN_PQ # make the array preallocated self.sign = sign # a negative sign is a minimum priority queue self.end = 1 # this is the next slot of the array (self.q) to be used, def insert( self, priority, data): self.q[self.end] = (priority, data) # sift up after insert p = self.end self.end = self.end + 1 self.sift_up(p) def sift_up(self, p): # p is the current node's position # q[p][0] is the priority, q[p][1] is the item or node # while the parent exists ( p >= 1), and parent's priority is less than the current node's priority while p / 2 != 0 and self.q[p/2][0]*self.sign < self.q[p][0]*self.sign: # swap the parent and the current node, and make the current node's position the parent's position tmp = self.q[p] self.q[p] = self.q[p/2] self.q[p/2] = tmp self.map[self.q[p][1]] = p p = p/2 # this map's the node to the position in the priority queue self.map[self.q[p][1]] = p return p def remove_top(self): if self.end == 1 : return (-1, None) (priority, node) = self.q[1] # put the end of the heap at the top of the heap, and sift it down to adjust the heap # after the heap's top has been removed. this takes log2(N) time, where N iis the size of the heap. 
self.q[1] = self.q[self.end-1] self.end = self.end - 1 self.sift_down(1) return (priority, node) def sift_down(self, p): while 1: l = p * 2 # if the left child's position is more than the size of the heap, # then left and right children don't exist if ( l > self.end) : break r= l + 1 # the selected child node should have the greatest priority t = l if r < self.end and self.q[r][0]*self.sign > self.q[l][0]*self.sign : t = r print "checking for sift down of ", self.q[p][1].label, self.q[p][0], " vs child ", self.q[t][1].label, self.q[t][0] # if the selected child with the greatest priority has a higher priority than the current node if self.q[t] [0] * self. sign > self.q [p] [0] * self.sign : # swap the current node with that child, and update the mapping of the child node to its new position tmp = self. q [ t ] self. q [ t ] = self.q [ p ] self. q [ p ] = tmp self.map [ tmp [1 ] ] = p p = t else: break # end the swap if the greatest priority child has a lesser priority than the current node # after the sift down, update the new position of the current node. self.map [ self.q[p][1] ] = p return p def update_priority(self, priority, data ) : p = self. map[ data ] print "priority prior update", p, "for priority", priority, " previous priority", self.q[p][0] if p is None : return -1 self.q[p] = (priority, self.q[p][1]) p = self.sift_up(p) p = self.sift_down(p) print "updated ", self.q[p][1].label, p, "priority now ", self.q[p][0] return p class NoPathToTargetNode ( BaseException): pass def test_1() : st = Node('start', 0) p1a = Node('p1a') p1b = Node('p1b') p2a = Node('p2a') p2b = Node('p2b') p2c = Node('p2c') p2d = Node('p2d') targ = Node('target') st.add_adjacent ( 30, p1a) #st.add_adjacent ( 10, p2a) st.add_adjacent ( 20, p2a) #p1a.add_adjacent(10, targ) p1a.add_adjacent(40, targ) p1a.add_adjacent(10, p1b) p1b.add_adjacent(10, targ) # testing alternative #p1b.add_adjacent(20, targ) p2a.add_adjacent(10, p2b) p2b.add_adjacent(5,p2c) p2c.add_adjacent(5,p2d) #p2d.add_adjacent(5,targ) #chooses the alternate path p2d.add_adjacent(15,targ) pq = PQ() # st.distance is 0, but the other's have default starting distance 32767 pq.insert( st.distance, st) pq.insert( p1a.distance, p1a) pq.insert( p2a.distance, p2a) pq.insert( p2b.distance, p2b) pq.insert(targ.distance, targ) pq.insert( p2c.distance, p2c) pq.insert( p2d.distance, p2d) pq.insert(p1b.distance, p1b) node = None while node != targ : (pr, node ) = pq.remove_top() #debug print "node ", node.label, " removed from top " if node is None: print "target node not in queue" raise elif pr == 32767: print "max distance encountered so no further nodes updated. No path to target node." raise NoPathToTargetNode # update the distance to the start node using this node's distance to all of the nodes adjacent to it, and update its priority if # a shorter distance was found for an adjacent node ( .update_shortest(..) returns true ). # this is the greedy part of the dijsktra's algorithm, always greedy for the shortest distance using the priority queue. for adj_node, dist in node.get_adjacent(): #debug print "updating adjacency from ", node.label, " to ", adj_node.label if adj_node.update_shortest( node ): pq.update_priority( adj_node.distance, adj_node) print "node and targ ", node, targ, node <> targ print "length of path", targ.distance print " shortest path" #create a reverse list from the target node, through the shortes path nodes to the start node node = targ path = [] while node <> None : path.append(node) node = node. 
shortest_previous for node in reversed(path): # new iterator version of list.reverse() print node.label if __name__ == "__main__": test_1() Minimum spanning tree. Greedily looking for the minimum weight edges; this could be achieved with sorting edges into a list in ascending weight. Two well known algorithms are Prim's Algorithm and Kruskal's Algorithm. Kruskal selects the next minimum weight edge that has the condition that no cycle is formed in the resulting updated graph. Prim's algorithm selects a minimum edge that has the condition that only one edge is connected to the tree. For both the algorithms, it looks that most work will be done verifying an examined edge fits the primary condition. In Kruskal's, a search and mark technique would have to be done on the candidate edge. This will result in a search of any connected edges already selected, and if a marked edge is encountered, than a cycle has been formed. In Prim's algorithm, the candidate edge would be compared to the list of currently selected edges, which could be keyed on vertex number in a symbol table, and if both end vertexes are found, then the candidate edge is rejected. Maximum Flow in weighted graphs. In a flow graph, edges have a forward capacity, a direction, and a flow quantity in the direction and less than or equal to the forward capacity. Residual capacity is capacity minus flow in the direction of the edge, and flow in the other direction. Maxflow in Ford Fulkerson method requires a step to search for a viable path from a source to a sink vertex, with non-zero residual capacities at each step of the path. Then the minimum residual capacity determines the maximum flow for this path. Multiple iterations of searches using BFS can be done (the Edmond-Karp algorithm), until the sink vertex is not marked when the last node is off the queue or stack. All marked nodes in the last iteration are said to be in the minimum cut. Here are 2 java examples of implementation of Ford Fulkerson method, using BFS. The first uses maps to map vertices to input edges, whilst the second avoids the Collections types Map and List, by counting edges to a vertex and then allocating space for each edges array indexed by vertex number, and by using a primitive list node class to implement the queue for BFS. For both programs, the input are lines of "vertex_1, vertex_2, capacity", and the output are lines of "vertex_1, vertex_2, capacity, flow", which describe the initial and final flow graph. 
// copyright GFDL and CC-BY-SA package test.ff; import java.io.BufferedReader; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; import java.util.ArrayList; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import java.util.Map; public class Main { public static void main(String[] args) throws IOException { System.err.print("Hello World\n"); final String filename = args[0]; BufferedReader br = new BufferedReader( new FileReader(filename)); String line; ArrayList<String[]> lines = new ArrayList<>(); while ((line= br.readLine()) != null) { String[] toks = line.split("\\s+"); if (toks.length == 3) lines.add(toks); for (String tok : toks) { System.out.print(tok); System.out.print("-"); System.out.println(""); int [][]edges = new int[lines.size()][4]; // edges, 0 is from-vertex, 1 is to-vertex, 2 is capacity, 3 is flow for (int i = 0; i < edges.length; ++i) for (int j =0; j < 3; ++j) edges[i][j] = Integer.parseInt(lines.get(i)[j]); Map<Integer, List<int[]» edgeMap = new HashMap<>(); // add both ends into edge map for each edge int last = -1; for (int i = 0; i < edges.length; ++i) for (int j = 0; j < 2; ++j) { edgeMap.put(edges[i][j], edgeMap.getOrDefault(edges[i][j], new LinkedList<int[]>()) ); edgeMap.get(edges[i][j]).add(edges[i]); // find the highest numbered vertex, which will be the sink. if ( edges[i][j] > last ) last = edges[i][j]; while(true) { boolean[] visited = new boolean[edgeMap.size()]; int[] previous = new int[edgeMap.size()]; int[][] edgeTo = new int[edgeMap.size()][]; LinkedList<Integer> q = new LinkedList<>(); q.addLast(0); int v = 0; while (!q.isEmpty()) { v = q.removeFirst(); visited[v] = true; if (v == last) break; int prevQsize = q.size(); for ( int[] edge: edgeMap.get(v)) { if (v == edge[0] && !visited[edge[1]] && edge[2]-edge[3] > 0) q.addLast(edge[1]); else if( v == edge[1] && !visited[edge[0]] && edge[3] > 0 ) q.addLast(edge[0]); else continue; edgeTo[q.getLast()] = edge; for (int i = prevQsize; i < q.size(); ++i) { previous[q.get(i)] = v; if ( v == last) { int a = v; int b = v; int smallest = Integer.MAX_VALUE; while (a != 0) { // get the path by following previous, // also find the smallest forward capacity a = previous[b]; int[] edge = edgeTo[b]; if ( a == edge[0] && edge[2]-edge[3] < smallest) smallest = edge[2]-edge[3]; else if (a == edge[1] && edge[3] < smallest ) smallest = edge[3]; b = a; // fill the capacity along the path to the smallest b = last; a = last; while ( a != 0) { a = previous[b]; int[] edge = edgeTo[b]; if ( a == edge[0] ) edge[3] = edge[3] + smallest; else edge[3] = edge[3] - smallest; b = a; } else { // v != last, so no path found // max flow reached break; for ( int[] edge: edges) { for ( int j = 0; j < 4; ++j) System.out.printf("%d-",edge[j]); System.out.println(); // copyright GFDL and CC-BY-SA package test.ff2; import java.io.BufferedReader; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; import java.util.ArrayList; public class MainFFArray { static class Node { public Node(int i) { v = i; int v; Node next; public static void main(String[] args) throws IOException { System.err.print("Hello World\n"); final String filename = args[0]; BufferedReader br = new BufferedReader(new FileReader(filename)); String line; ArrayList<String[]> lines = new ArrayList<>(); while ((line = br.readLine()) != null) { String[] toks = line.split("\\s+"); if (toks.length == 3) lines.add(toks); for (String tok : toks) { System.out.print(tok); 
System.out.print("-"); System.out.println(""); int[][] edges = new int[lines.size()][4]; for (int i = 0; i < edges.length; ++i) for (int j = 0; j < 3; ++j) edges[i][j] = Integer.parseInt(lines.get(i)[j]); int last = 0; for (int[] edge : edges) { for (int j = 0; j < 2; ++j) if (edge[j] > last) last = edge[j]; int[] ne = new int[last + 1]; for (int[] edge : edges) for (int j = 0; j < 2; ++j) ++ne[edge[j]]; int[][][] edgeFrom = new int[last + 1][][]; for (int i = 0; i < last + 1; ++i) edgeFrom[i] = new int[ne[i]][]; int[] ie = new int[last + 1]; for (int[] edge : edges) for (int j = 0; j < 2; ++j) edgeFrom[edge[j]][ie[edge[j]]++] = edge; while (true) { Node head = new Node(0); Node tail = head; int[] previous = new int[last + 1]; for (int i = 0; i < last + 1; ++i) previous[i] = -1; int[][] pathEdge = new int[last + 1][]; while (head != null ) { int v = head.v; if (v==last)break; int[][] edgesFrom = edgeFrom[v]; for (int[] edge : edgesFrom) { int nv = -1; if (edge[0] == v && previous[edge[1]] == -1 && edge[2] - edge[3] > 0) nv = edge[1]; else if (edge[1] == v && previous[edge[0]] == -1 && edge[3] > 0) nv = edge[0]; else continue; Node node = new Node(nv); previous[nv]=v; pathEdge[nv]=edge; tail.next = node; tail = tail.next; head = head.next; if (head == null) break; int v = last; int minCapacity = Integer.MAX_VALUE; while (v != 0) { int fv = previous[v]; int[] edge = pathEdge[v]; if (edge[0] == fv && minCapacity > edge[2] - edge[3]) minCapacity = edge[2] - edge[3]; else if (edge[1] == fv && minCapacity > edge[3]) minCapacity = edge[3]; v = fv; v = last; while (v != 0) { int fv = previous[v]; int[] edge = pathEdge[v]; if (edge[0] == fv) edge[3] += minCapacity; else if (edge[1] == fv) edge[3] -= minCapacity; v = fv; for (int[] edge : edges) { for (int j = 0; j < 4; ++j) System.out.printf("%d-", edge[j]); System.out.println();
7,034
Algorithms/Hill Climbing. Hill climbing is a technique for certain classes of optimization problems. The idea is to start with a sub-optimal solution to a problem (i.e., "start at the base of a hill") and then repeatedly improve the solution ("walk up the hill") until some condition is maximized ("the top of the hill is reached"). One of the most popular hill-climbing problems is the network flow problem. Although network flow may sound somewhat specific it is important because it has high expressive power: for example, many algorithmic problems encountered in practice can actually be considered special cases of network flow. After covering a simple example of the hill-climbing approach for a numerical problem we cover network flow and then present examples of applications of network flow. Newton's Root Finding Method. Newton's Root Finding Method is a three-centuries-old algorithm for finding numerical approximations to roots of a function (that is a point formula_1 where the function formula_2 becomes zero), starting from an initial guess. You need to know the function formula_3 and its first derivative formula_4 for this algorithm. The idea is the following: In the vicinity of the initial guess formula_5 we can form the Taylor expansion of the function which gives a good approximation to the function near formula_5. Taking only the first two terms on the right hand side, setting them equal to zero, and solving for formula_10, we obtain which we can use to construct a better solution This new solution can be the starting point for applying the same procedure again. Thus, in general a better approximation can be constructed by repeatedly applying As shown in the illustration, this is nothing else but the construction of the zero from the tangent at the initial guessing point. In general, Newton's root finding method converges quadratically, except when the first derivative of the solution formula_14 vanishes at the root. Coming back to the "Hill climbing" analogy, we could apply Newton's root finding method not to the function formula_3, but to its first derivative formula_4, that is look for formula_1 such that formula_14. This would give the extremal positions of the function, its maxima and minima. Starting Newton's method close enough to a maximum this way, we climb the hill. Example application of Newton's method. The net present value function is a function of time, an interest rate, and a series of cash flows. A related function is Internal Rate of Return. The formula for each period is (CFi x (1+ i/100) t , and this will give a polynomial function which is the total cash flow, and equals zero when the interest rate equals the IRR. In using Newton's method, x is the interest rate, and y is the total cash flow, and the method will use the derivative function of the polynomial to find the slope of the graph at a given interest rate (x-value), which will give the xn+1 , or a better interest rate to try in the next iteration to find the target x where y ( the total returns) is zero. Instead of regarding continuous functions, the hill-climbing method can also be applied to discrete networks. Network Flow. Suppose you have a directed graph (possibly with cycles) with one vertex labeled as the source and another vertex labeled as the destination or the "sink". The source vertex only has edges coming out of it, with no edges going into it. Similarly, the destination vertex only has edges going into it, with no edges coming out of it. 
We can assume that the graph is fully connected with no dead-ends; i.e., for every vertex (except the source and the sink), there is at least one edge going into the vertex and one edge going out of it. We assign a "capacity" to each edge, and initially we'll consider only integral-valued capacities. The following graph meets our requirements, where "s" is the source and "t" is the destination: We'd like now to imagine that we have some series of inputs arriving at the source that we want to carry on the edges over to the sink. The number of units we can send on an edge at a time must be less than or equal to the edge's capacity. You can think of the vertices as cities and the edges as roads between the cities and we want to send as many cars from the source city to the destination city as possible. The constraint is that we cannot send more cars down a road than its capacity can handle. The goal of network flow is to send as much traffic from formula_19 to formula_20 as each street can bear. To organize the traffic routes, we can build a list of different paths from city formula_19 to city formula_20. Each path has a carrying capacity equal to the smallest capacity value for any edge on the path; for example, consider the following path formula_23: Even though the final edge of formula_23 has a capacity of 8, that edge only has one car traveling on it because the edge before it only has a capacity of 1 (thus, that edge is at full capacity). After using this path, we can compute the residual graph by subtracting 1 from the capacity of each edge: (We subtracted 1 from the capacity of each edge in formula_23 because 1 was the carrying capacity of formula_23.) We can say that path formula_23 has a flow of 1. Formally, a flow is an assignment formula_28 of values to the set of edges in the graph formula_29 such that: Where formula_19 is the source node and formula_20 is the sink node, and formula_36 is the capacity of edge formula_37. We define the value of a flow formula_38 to be: The goal of network flow is to find an formula_38 such that formula_41 is maximal. To be maximal means that there is no other flow assignment that obeys the constraints 1-4 that would have a higher value. The traffic example can describe what the four flow constraints mean: The Ford-Fulkerson Algorithm. The following algorithm computes the maximal flow for a given graph with non-negative capacities. What the algorithm does can be easy to understand, but it's non-trivial to show that it terminates and provides an optimal solution. function net-flow(graph ("V", "E"), node "s", node "t", cost "c"): flow initialize "f"("e") := 0 for all "e" in "E" loop while not "done" for all "e" in "E": "// compute residual capacities" let "cf"("e") := "c"("e") - "f"("e") repeat let "Gf" := ("V", {"e" : "e" in "E" and "cf"("e") > 0}) find a path "p" from "s" to "t" in "Gf" "// e.g., use depth first search" if no path "p" exists: signal "done" let "path-capacities" := map("p", "cf") "// a path is a set of edges" let "m" := min-val-of("path-capacities") "// smallest residual capacity of p" for all ("u", "v") in "p": "// maintain flow constraints" "f"(("u", "v")) := "f"(("u", "v")) + "m" "f"(("v", "u")) := "f"(("v", "u")) - "m" repeat repeat end The Ford-Fulkerson algorithm uses repeated calls to Breadth-First Search ( use a queue to schedule the children of a node to become the current node). 
Breadth-First Search increments the length of each path +1 so that the first path to get to the destination, the shortest path, will be the first off the queue. This is in contrast with using a Stack, which is Depth-First Search, and will come up with *any* path to the target, with the "descendants" of current node examined, but not necessarily the shortest. Example application of Ford-Fulkerson maximum flow/ minimum cut. An example of application of Ford-Fulkerson is in baseball season elimination. The question is whether the team can possibly win the whole season by exceeding some combination of wins of the other teams. The idea is that a flow graph is set up with teams not being able to exceed the number of total wins which a target team can maximally win for the entire season. There are game nodes whose edges represent the number of remaining matches between two teams, and each game node outflows to two team nodes, via edges that will not limit forward flow; team nodes receive edges from all games they participate. Then outflow edges with win limiting capacity flow to the virtual target node. In a maximal flow state where the target node's total wins will exceed some combination of wins of the other teams, the penultimate depth-first search will cutoff the start node from the rest of the graph, because no flow will be possible to any of the game nodes, as a result of the penultimate depth-first search (recall what happens to the flow , in the second part of the algorithm after finding the path). This is because in seeking the maximal flow of each path, the game edges' capacities will be maximally drained by the win-limit edges further along the path, and any residual game capacity means there are more games to be played that will make at least one team overtake the target teams' maximal wins. If a team node is in the minimum cut, then there is an edge with residual capacity leading to the team, which means what , given the previous statements? What do the set of teams found in a minimum cut represent ( hint: consider the game node edge) ? Example Maximum bipartite matching ( intern matching ). This matching problem doesn't include preference weightings. A set of companies offers jobs which are made into one big set , and interns apply to companies for specific jobs. The applications are edges with a weight of 1. To convert the bipartite matching problem to a maximum flow problem, virtual vertexes s and t are created , which have weighted 1 edges from s to all interns, and from all jobs to t. Then the Ford-Fulkerson algorithm is used to sequentially saturate 1 capacity edges from the graph, by augmenting found paths. It may happen that a intermediate state is reached where left over interns and jobs are unmatched, but backtracking along reverse edges which have residual capacity = forward flow = 1, along longer paths that occur later during breadth-first search, will negate previous suboptimal augmenting paths, and provide further search options for matching, and will terminate only when maximal flow or maximum matching is reached.
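As a small numerical companion to the Newton's root finding discussion earlier in this chapter, here is a Python sketch of the basic iteration (the function, starting guess and tolerance below are illustrative choices, not from the original text).

# Newton's root finding: repeatedly replace the guess x by x - f(x)/f'(x)
# until the correction step becomes negligibly small.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)       # assumes fprime(x) is not zero near the root
        x -= step
        if abs(step) < tol:           # converged: the update barely moves x
            break
    return x

# Example: the positive root of f(x) = x^2 - 2, i.e. an approximation of sqrt(2).
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))   # about 1.4142135623730951

Applying the same routine to the derivative of a function, as described above, walks toward a maximum or minimum rather than a root.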
2,413
Soil Science. Soil Science. Soil Science encompasses many aspects of soils study such as the disciplines of soil physics, soil chemistry, soil classification, soil microbiology, etc. One of these studies is "pedology". Pedology is the study of soils in a three dimensional context in their natural setting across the landscape in terms of origins, morphology (forms), classifications, characteristics or attributes, profile or cross-section, surface and internal water relations, geology, plant ecology, etc. and interpretations for use and management. Most soils develop from weathered mineral geologic materials and usually include some organic materials in their upper part. This is called the , and it is composed of organic material, generally not considered topsoil, but leaf litter and muck. The rock that the mineral part of the soil originates from is called the parent material. Through the further influence of the soil-forming processes of additions, removals, transfers and transformations the nature of the parent materials are altered to the extent that they become soils. The soil-forming processes are controlled by the soil forming factors of parent material, climate, living organisms, relief, and time. Thus the similarity or contrast of soils from place to place reflects the similarity or contrasts of the soil-forming processes and factors. The primary convention for naming soils is to select a named geographic feature in the vicinity where the soil was first recognized and identified as a new soil, e.g: the Dunkirk soil series for the village of Dunkirk near Lake Erie, NY. All soils found to have closely similar attributes are classified as Dunkirk wherever they are found. That distribution can be more localized or it can be somewhat more extensive. The description of the soil usually identifies attributes such as pH (degree of alkalinity or acidity), maturity (stage of pedogenic development), texture, consistence, color, gravel content and type, drainage class, and other soil features. The United States and other nations have soil maps that show the soils in relation to one another as they occur across the landscape. On these maps each soil is characteristically related to a unique landscape position. Thus, the pattern of distribution and the location and extent of the soil type can be comprehended. Soil maps are a prime requisite for many users who need site information to make informed decisions on many aspects of use and management. Comparison of soil resources and their respective suitabilities and/or limitations can also be used to evaluate alternate sites for intended uses. In the United States the Cooperative Soil Survey has more than a hundred year history. The various surveys were published with text and maps as soft covered books. Now they are published through the internet Web Soil Survey at: http://websoilsurvey.nrcs.usda.gov/app/ Soil Equations. The Bulk Density of a soil is its dry mass divided by the total original volume of the soil. The porosity of a soil is the volume of its pores divided by the total volume of the soil. The particle density of a soil is the mass of its solids divided by the volume of its solids. Soil Water. Field Capacity When a soil has drained freely for several days after rain or irrigation, the water content is the field capacity of the soil. The field capacity is a measure of the soils ability to store water. A sandy soil freely drains and the field capacity is low. A clayey soil holds water and has a high field capacity. 
A clayey soil can store more water than a sandy soil. Soil structure effects the field capacity. A structureless clay with no large pores has a very high field capacity. A well structured clay with large pores will drain freely and has a lower field capacity. Organic matter can absorb water and increases field capacity. The field capacity of a soil is mainly determined by the pore size distribution. Small pores hold water by capillary forces. Large pores freely drain and do not hold water at field capacity. Wilting Point When a soil is dry and plants suffer from permanent wilting because they are unable to absorb water, the moisture content is the wilting point of the soil. When the moisture content is above the wilting point, plants can absorb water from the soil. Below the wilting point, water is tightly adsorbed to clay particles and is unavailable to plants. Sandy soils have very low wilting points while clays have high wilting points. A clay contains a significant amount of water unavailable to plants. Available Water The amount of water stored by a soil that can be absorbed by a plant is the available water and is equal to the water held at the field capacity minus the wilting point. The available water is the water stored by a soil and is useful to plants. Texture, structure and organic matter have an effect on the available water. Clayey soils generally have a higher available water than sandy soils. A well developed structure can increase the available water. Organic matter absorbs water and increases the available water. The available water stored in a soil is often measured as mm of water. Farmers prefer to sow a crop when there is sufficient water stored in the soil to ensure good plant growth. If the amount of stored water is low, the farmer needs to pray for rain. The amount of water available to plants also depends on the depth of the plant roots. Soluble salts in soils reduces the availability of water to plants. Salt increases the osmotic pressure of water. High salt content in soils will kill plants. Soil Water Infiltration. It is well known that urban development increases flooding. In urban areas the coefficient of runoff increases, and the time of concentration decreases. The coefficient of runoff in undisturbed areas can be as low as 0.1, while in inner city suburbs, coefficient of runoff can be as high as 0.9. During rainfall a high value of coefficient of runoff occurs when most rainfall flows over the land and little water is absorbed by soils. When all the rain flows over the surface and no rain infiltrates into the soil the coefficient of runoff equals 1. If a high volume of water enters the soil the coefficient of runoff is small and approaches 0. The main source of fresh water is rainfall runoff, which is widely used to meet human needs. Runoff is a vital part of long term water supply and renews water resources be they rivers, lakes or reservoirs. Plants need water to grow and evapotranspiration by plants returns water from the soil into the atmosphere. Rate of water infiltration The rate at which water enters the soil during rain or irrigation is the infiltration rate. When rain first enters a dry soil, infiltration is rapid, then decreases as it continues to rain. During the early stages of rainfall, water fills up the dry pores. When the pores are filled with water and the soil is saturated, infiltration slows down until it reaches a constant value. When it has stopped raining, the surface soil will be saturated with water. 
Free water in a saturated soil will drain under gravity. After about 2-3 days, free water will have percolated downwards and the soil moisture will reach an equilibrium. When the free water has drained downwards, the moisture is at 'field capacity'. At field capacity, soil water is held in the soil by several forces. The major part of the water is held by capillary forces. Some water is adsorbed onto the surfaces of soil particles, especially clay particles and organic matter. Capillary water is the main supply of water to plants. Capillary water may be removed by transpiration by plants and by evaporation. If the topsoil is dry and the underlying soil is wet, water can move upwards by capillary action. Above the water table there is a capillary fringe. Water moves upwards from the water table by capillary forces. In some soils, especially clays which contain many fine pores, the capillary fringe can be up to one metre above the water table. During rain the infiltration rate slows down as the soil becomes wet. Infiltration rates vary greatly between different soils; typical values are 100 mm/hour during the first hour of rain, decreasing to 10 mm/hour after six hours of rain. Peds are small aggregates (1-2 mm) found in many soils. Pedal soils have a good structure and water flows easily through these soils. Apedal soils have a poor structure, and apedal clayey soils have poor water infiltration. Sandy soils generally have high water infiltration and clayey soils have low infiltration rates. If the intensity of rainfall is constant, and infiltration decreases at a typical rate, there is no runoff during the early stages of the rain event. When the infiltration rate becomes slower than the rainfall intensity, runoff is expected to occur. The infiltration rate is controlled by the soil layer which has the lowest hydraulic conductivity. Often the topsoil is a loam and has a high hydraulic conductivity. A clayey subsoil with a low hydraulic conductivity will act as a throttle, slowing down the final infiltration rate. In the field, infiltration can be measured directly using a ring infiltrometer. A circular tube is forced into the soil, which is ponded with water. At specific time intervals, the rate of water flow into the soil is recorded. To reduce variability, the rings need to be fairly large; generally rings should be bigger than 1/2 meter in diameter. Often a double-ring infiltrometer is used to reduce errors from lateral water movement. Variability in these experiments is often high, so it is advisable to carry out a number of replicates. It is common in many soils for the infiltration rate to decrease by a factor of ten after six hours of rain. A steady state equilibrium is usually reached before six hours. Infiltration rates vary greatly between different soils; typical values are 100 mm/hour during the first hour of rain, decreasing to 1 mm/hour after six hours of rain.
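The Soil Equations and Soil Water sections above define bulk density, porosity and available water only in words. The short Python sketch below simply restates those definitions as arithmetic; the sample values are invented for illustration and do not come from the text.

# Restating the definitions above as arithmetic; all sample values are made up.
def bulk_density(dry_mass_g, total_volume_cm3):
    # dry mass of the soil divided by its total original volume
    return dry_mass_g / total_volume_cm3

def porosity(pore_volume_cm3, total_volume_cm3):
    # volume of the pores divided by the total volume of the soil
    return pore_volume_cm3 / total_volume_cm3

def available_water(field_capacity_mm, wilting_point_mm):
    # water held at field capacity minus water still held at the wilting point
    return field_capacity_mm - wilting_point_mm

print(bulk_density(660.0, 500.0))      # 1.32 g/cm^3
print(porosity(250.0, 500.0))          # 0.5
print(available_water(180.0, 60.0))    # 120 mm of plant-available water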
2,357
Wi-Fi/Cantenna. About Cantennas. A cantenna is a directional antenna for long-range Wi-Fi which can be used to extend the range of (or snoop on) a wireless network. Originally employing a Pringles® potato chip can, a cantenna can be constructed quickly, easily, and inexpensively out of readily obtained materials. It requires four nuts, a short length of medium-gauge wire, a tin can roughly 9 cm (about 3.5 inches) in diameter, the longer the better, and an N-female chassis-mount connector, which can be purchased at any electronics supply store. The original design employed a Pringles can, but an optimal design will use a longer tin can. Instructions for constructing and connecting a cantenna can be found at Turnpoint.net. While cantennas are useful for extending a wireless local-area network, the compact design makes them ideal for mobile applications. The design is so simple and ubiquitous that a cantenna is often the first antenna that experimenters learn to build. Even the Secret Service has taken an interest in the can antenna. How to make a Cantenna. You'll need
272
Chinese (Mandarin)/Nations of the World. Here's a list of some nations and regions, with their names in Chinese. Note that the country's name can also be used as an adjective. For example, 日本货 (rìběn huò) means "Japanese goods," and is derived from 日本 (rìběn; Japan) and 货 (huò; goods). As an aside, China imports a good number of products from Japan. Between 2001 and 2007, it was the greatest exporter to China, beating the European Union, South Korea, and Taiwan. You could also say 日本椅子 (rìběn yǐzi; Japanese chair), 日本食品 (rìběn shípǐn; Japanese food products), and 日本动画片 (rìběn dònghuà piàn; Japanese cartoons). Terms like these can be shortened, for example, 日货 means the same thing. You can see 日 is an adjective which means "pertaining to Japan," i.e., "Japanese." Another way to describe its function is that it acts like a "root," much like in English. Headlines are often abbreviated this way. For example, 中俄合作 (zhōng é hézuò) can mean "China and Russia cooperate" or "Sino-Russian cooperation." In common conversation, however, excessive abbreviation is undesirable, because it often leads to ambiguity.
354
Statistics/Testing Data/Purpose. Purpose of Statistical test. In general, the purpose of statistical tests is to determine whether some hypothesis is extremely unlikely given observed data. There are two common philosophical approaches to such tests, "significance testing" (due to Fisher) and "hypothesis testing" (due to Neyman and Pearson). Significance testing aims to quantify evidence against a particular hypothesis being true. We can think of it as testing to guide research. We believe a certain statement may be true and want to work out whether it is worth investing time investigating it. Therefore, we look at the opposite of this statement. If it is quite likely then the further study would seem to not make sense. However, if it is extremely unlikely then further study would make sense. A concrete example of this might be in drug testing. We have a number of drugs that we want to test and only limited time, so we look at the hypothesis that an individual drug has no positive effect whatsoever and only look further if this is unlikely. Hypothesis testing rather looks at the evidence for a particular hypothesis being true. We can think of this as a guide to making a decision. We need to make a decision soon and suspect that a given statement is true. Thus we see how unlikely we are to be wrong, and if we are sufficiently unlikely to be wrong we can assume that this statement is true. Often this decision is final and cannot be changed. Statisticians often overlook these differences and incorrectly treat the terms "significance test" and "hypothesis test" as though they are interchangeable. A data analyst frequently wants to know whether there is a difference between two sets of data, and whether that difference is likely to occur due to random fluctuations, or is instead unusual enough that random fluctuations rarely cause such differences. In particular, frequently we wish to know something about the average (or mean), or about the variability (as measured by variance or standard deviation). Statistical tests are carried out by first making some assumption, called the Null Hypothesis, and then determining whether the data observed is unlikely to occur given that assumption. If the probability of seeing the observed data is small enough under the assumed the Null Hypothesis, then the Null Hypothesis is rejected. A simple example might help. We wish to determine if men and women are the same height on average. We select and measure 20 women and 20 men. We assume the Null Hypothesis that there is no difference between the average value of heights for men vs. women. We can then test using the t-test to determine whether our sample of 40 heights would be unlikely to occur given this assumption. The basic idea is to assume heights are normally distributed and to assume that the means and standard deviations are the same for women and for men. Then we calculate the average of our 20 men, and of our 20 women, we also calculate the sample standard deviation for each. Then using the t-test of two means with 40-2 = 38 degrees of freedom we can determine whether the difference in heights between the sample of men and the sample of women is sufficiently large to make it unlikely that they both came from the same normal population.
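To make the height comparison in the example above concrete, here is a minimal Python sketch using scipy.stats.ttest_ind; it assumes SciPy is available, and the two height samples are invented purely for illustration (they are not data from the text).

# Two-sample t-test for the men vs. women height example, assuming SciPy is installed.
from scipy import stats

women_cm = [162, 158, 171, 165, 160, 168, 155, 163, 166, 159,
            164, 161, 170, 157, 167, 162, 169, 156, 165, 160]
men_cm   = [175, 180, 172, 178, 182, 169, 177, 174, 181, 176,
            173, 179, 184, 171, 178, 175, 183, 170, 177, 180]

# Null Hypothesis: both samples come from normal populations with the same mean
# (the classic pooled test also assumes equal standard deviations), giving
# 20 + 20 - 2 = 38 degrees of freedom.
t_stat, p_value = stats.ttest_ind(men_cm, women_cm)
print(t_stat, p_value)

# A very small p-value means data this extreme would be unlikely if the Null
# Hypothesis were true, so the Null Hypothesis would be rejected.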
739
SNFO Flight Planning/Philosophy of this project. Purpose of this. Every year, a group of instructors gathers to incorporate errata, changes, and revisions to each and every Flight Training Instruction (FTI). This process typically involves many hours of painful meetings where every change—from major procedural modification to typographical correction—undergoes intense scrutiny. Without fail, as soon as the updated version arrives from the printer, people begin to notice mistakes. Why does this occur? Despite the best intentions and vast collective experience of those charged with revising the FTI, they are but a small subset of its users, and can't possibly keep track of and correct every error to everyone's satisfaction. Even if every instructor or student who thought of a correction or suggestion while reading the FTI sent that input in to the committee, there would still be items overlooked or, at the very best, phrased ineffectually. By locating this document in a venue where "anyone" has the power to edit, revise, clarify, and make corrections to it, we hope to capitalize on the collective knowledge and intelligence not only of every "instructor" involved in the program, but also that of the "students", as well as "military aviators serving in other training and operational commands", and even "civilians" who may bring fresh insight to what has been, until now, a largely closed instructional guide. What this document is not. First and foremost, it should be emphasized that this version of the "Flight Planning FTI" is not the official source for "anything!" Since it may be modified, in any way, by any person, and at any time, it should always be regarded as "gouge". It is, of course, the goal of this project to facilitate a continuously evolving improvement to every printed version, and in that sense, it should usually be a "better" reference. However, in any conflict between this Wikibook and the printed version, the "published" FTI and its official errata shall be considered authoritative. In addition, it's important for those not directly associated with this training curriculum to remember that this document is not necessarily, nor is it intended to be, applicable to flight operations in general. In particular, although military aviation is subject to many of the same regulations as civil aviation, there are also many instances where service or local directives take precedence and may diverge from FAR Part 91. Similarly, while we also train U.S. Air Force Student Navigators, as well as officers from several allied nations, "all" of the flight operations conducted in this program fall under U.S. Navy flight regulations. Please try to remember that the ultimate audience of this text is Primary Student NFOs/Navigators, and avoid cluttering it with material that, although it may apply to certain subsets of them in the future, is not applicable to the environment at hand. When in doubt, check to make sure any additional material directly supports the terminal and enabling objectives of the course. Finally, there may be instances when this publication is more restrictive than any governing regulation. Again, because we instruct students in a relatively early stage of flight training, it is often necessary to provide more direction than is actually required by the rules under which Fleet aviators operate. Changes modifying these procedures on the grounds that they are not specified by governing directives should be made cautiously. How to contribute. 
"Everyone" who reads these pages is now part of the group of people described above, charged with making corrections and improvements to this FTI. The difference is, you don't have to sit through hours of painful meetings or have a grand plan for overhauling the course. That's the advantage of the philosophy. It doesn't matter if you're a student in the Flight Planning class right now—you can still provide valuable input. We're eager to have your assistance, but before you get started, please browse through the to get a feel for what a Wikibook is all about and how to contribute. There are several ways in which you can help make this coursebook better: When editing, don't be shy—be bold instead. If you're pretty certain that something needs to be changed, it probably does. Don't worry too much about making a mistake. Every version of each topic is retained in that page's , and it's simple to revert back to a previous version if something is changed in error. That having been said, if you're at all unsure, do your research. Talk to your peers, call the course manager, or, better yet, try to verify your changes using official sources. (The ../References/ page is a good place to begin looking.) This will help make sure that your edits benefit the project instead of leading others astray, and it will also have the added advantage of increasing your professional knowledge. If you think that something needs to be modified but you don't know how to phrase it, can't figure out if it's factually accurate, or if your change is potentially controversial, consider beginning a discourse on that module's Discussion page instead. Others will chime in to help come to a consensus, at which point you, or someone else, can modify the module accordingly. Other concerns. At this point, I'm sure you can readily see the advantages provided by, and incredible potential of, collaborating as a group on this project. However, you may also be asking yourself a few nervous questions: The answer to all four questions, of course, is you! An active community of benevolent readers and contributors is much more powerful than a single bad apple. Any errors that are introduced—accidentally or intentionally—can be easily reverted with a couple of clicks from a page's History. For an incredible example of how the Wiki concept can succeed in creating an accurate reference while remaining completely open and free, take a look at "".
1,291
SNFO Flight Planning. This document is a version of the Flight Training Instruction (FTI) utilized for the Flight Planning course, during the Instrument Navigation (INAV) stage, in the Primary phase of Multi-Service / Air Force training, conducted by Training Air Wing SIX.
63
Sanskrit/Everyday Phrases. Phrases: Don't forget, please do come - अवश्यं आगन्तव्य, न विस्मर्तव्यम् Hello/Good Day - नमस्ते Namaste (lit. Greetings to You) Good morning - शुभ प्रभातम् Shubha Prabhatam Good afternoon/evening - नमस्ते Namaste Good night - शुभ रात्रि Shubha Raatri What is Your Name? - किं तव नाम - Kim tava naama? I am Richard - अहम् रिचर्ड: Aham Richardah I am Richard - रिचर्ड अस्मि Richard asmi I am Richard - रिचर्ड अस्मयाहम Richard asm'yaaham My name is Julia - जूलिया अहम्/ जूलिया अस्मि Julia aham (Juliaaham)/ Julia asmi My name is Julia - जूलिया इति नाम अदेइम नम Julia iti naama adeiam nama Words: Numbers: 1 - अदिम/प्रथम/एकम् Adim/Pratham/Ekam Ekah Eka Ekam Dvau Dve Dve Trayah Tisrah Treeni Chatvari Chatasrah Chatvaree The above is a table of usage. The first column is Male gender, the second of female gender, and the third of neuter. 2 - द्व Dva 3 - त्रि Tri 4 - चतुर Chatur 5 - पन्चः Panchaha 6 - षष्ठ Shashta 7 - सप्त Sapta 8 - अष्ट AshTa 9 - नव Nav 10 - दश Dash 100 - शत Shata 1000 - सहस्र Sahasra Greeting: नमस्ते Namaste - Used for every elder and equal in age आत्मा Aatma - Soul पंकज Pankaja - Lotus पद Pada - Foot कर Kara - Hands बाहु Bahu - Arm
617
Introduction to Paleoanthropology/Hominids Early. Overview of Human Evolutionary Origin. The fossil record provides little information about the evolution of the human lineage during the Late Miocene, from 10 million to 5 million years ago. Around 10 million years ago, several species of large-bodied hominids that bore some resemblance to modern orangutans lived in Africa and Asia. About this time, the world began to cool; grassland and savanna habitats spread; and forests began to shrink in much of the tropics. The creatures that occupied tropical forests declined in variety and abundance, while those that lived in the open grasslands thrived. We know that at least one ape species survived the environmental changes that occurred during the Late Miocene, because molecular genetics tells us that humans, gorillas, bonobos and chimpanzees are all descended from a common ancestor that lived sometime between 7 million and 5 million years ago. Unfortunately, the fossil record for the Late Miocene tells us little about the creature that linked the forest apes to modern hominids. Beginning about 5 million years ago, hominids began to appear in the fossil record. These early hominids were different from any of the Miocene apes in one important way: they walked upright (as we do). Otherwise, the earliest hominids were probably not much different from modern apes in their behavior or appearance. Between 4 million and 2 million years ago, the hominid lineage diversified, creating a community of several hominid species that ranged through eastern and southern Africa. Among the members of this community, two distinct patterns of adaptation emerged: the lightly built (gracile) forms and the heavily built (robust) forms described below. Hominid Species. "Australopithecus africanus". NOTE: "Australopithecus afarensis" and "A. africanus" are known as gracile australopithecines, because of their relatively lighter build, especially in the skull and teeth. (Gracile means "slender", and in paleoanthropology is used as an antonym to "robust".) Despite the use of the word "gracile", these creatures were still far more robust than modern humans. "Paranthropus robustus". "Australopithecus aethiopicus", "Paranthropus robustus" and "P. boisei" are known as robust australopithecines, because their skulls in particular are more heavily built.
558
Introduction to Paleoanthropology/Hominids Early Chronology. Phylogeny and Chronology. Between 8 million and 4 million years ago. Fossils of "Sahelanthropus tchadensis" (6-7 million years) and "Orrorin tugenensis" (6 million years), discovered in 2001 and 2000 respectively, are still a matter of debate. The discoverers of "Orrorin tugenensis" claim the fossils represent the real ancestor of modern humans and that the other early hominids (e.g., Australopithecus and Paranthropus) are side branches. They base their claim on their assessment that this hominid was bipedal (2 million years earlier than previously thought) and exhibited expressions of certain traits that were more modern than those of other early hominids. Other authorities disagree with this analysis and some question whether this form is even a hominid. At this point, there is too little information to do more than mention these two new finds of hominids. As new data come in, however, a major part of our story could change. Fossils of "Ardipithecus ramidus" (4.4 million years ago) were different enough from any found previously to warrant creating a new hominid genus. Although the evidence from the foramen magnum indicates that they were bipedal, conclusive evidence from legs, pelvis and feet remains somewhat enigmatic. There might be some consensus that "A. ramidus" represents a side branch of the hominid family. Between 4 million and 2 million years ago. "Australopithecus anamensis" (4.2-3.8 million years ago) exhibits a mixture of primitive (large canine teeth, parallel tooth rows) and derived (vertical root of canine, thicker tooth enamel) features, with evidence of bipedalism. There appears to be some consensus that this may represent the ancestor of all later hominids. The next species is well established and its nature is generally agreed upon: "Australopithecus afarensis" (4-3 million years ago). There is no doubt that "A. afarensis" were bipeds. This form seems to still remain our best candidate for the species that gave rise to subsequent hominids. At the same time, a second species of hominid lived in Chad: "Australopithecus bahrelghazali" (3.5-3 million years ago). It suggests that early hominids were more widely spread on the African continent than previously thought. Yet full acceptance of this classification and the implications of the fossil await further study. Another fossil species contemporaneous with A. afarensis existed in East Africa: "Kenyanthropus platyops" (3.5 million years ago). The fossils show a combination of features unlike that of any other forms: brain size, dentition, details of nasal region resemble genus Australopithecus; flat face, cheek area, brow ridges resemble later hominids. This set of traits led its discoverers to give it not only a new species name but a new genus name as well. Some authorities have suggested that this new form may be a better common ancestor for Homo than "A. afarensis". More evidence and more examples with the same set of features, however, are needed to even establish that these fossils do represent a whole new taxon. Little changed from A. afarensis to the next species: "A. africanus": same body size and shape, and same brain size. There are a few differences, however: canine teeth are smaller, no gap in tooth row, tooth row more rounded (more human-like). We may consider A. africanus as a continuation of "A. afarensis", more widely distributed in southern and possibly eastern Africa and showing some evolutionary changes.
It should be noted that this interpretation is not agreed upon by all investigators and remains hypothetical. Fossils found at Bouri in Ethiopia led investigators to designate a new species: "A. garhi" (2.5 million years ago). It shows an intriguing mixture of features: several features of the teeth resemble early "Homo", whereas the molars are unusually large, even larger than those of the southern African robust australopithecines. The evolutionary relationship of A. garhi to other hominids is still a matter of debate. Its discoverers feel it is descended from A. afarensis and is a direct ancestor to Homo. Others disagree. Clearly, more evidence is needed to interpret these specimens more precisely, but they do show the extent of variation among hominids during this period. Two distinctly different types of hominid appear between 2 and 3 million years ago: robust australopithecines ("Paranthropus") and early "Homo" ("Homo habilis"). The first type retains the chimpanzee-sized brains and small bodies of Australopithecus, but has evolved a notable robusticity in the areas of the skull involved with chewing: this is the group of robust australopithecines ("A. boisei", "A. robustus", "A. aethiopicus"). The second new hominid genus that appeared about 2.5 million years ago is the one to which modern humans belong, "Homo". What might have caused the branching that founded the new forms of robust australopithecines (Paranthropus) and Homo? What caused the extinction, around the same time (between 2-3 million years ago), of the genus Australopithecus? Finally, what might have caused the extinction of Paranthropus about 1 million years ago? There is no certainty in answering these questions, but the environmental conditions at the time might hold some clues. Increased environmental variability, starting about 6 million years ago and continuing through time and resulting in a series of newly emerging and diverse habitats, may have initially promoted different adaptations among hominid populations, as seen in the branching that gave rise to the robust hominids and to Homo. And if the degree of the environmental fluctuations continued to increase, this may have put such pressure on the hominid adaptive responses that those groups less able to cope eventually became extinct. Unable to survive well enough to perpetuate themselves in the face of decreasing resources (e.g., Paranthropus, who were specialized vegetarians), these now-extinct hominids were possibly out-competed for space and resources by the better adapted hominids, a phenomenon known as competitive exclusion. In this case, only the adaptive response that included an increase in brain size, with its concomitant increase in ability to understand and manipulate the environment, proved successful in the long run. Hominoid, Hominid, Human. The traditional view has been to recognize three families of hominoid: the "Hylobatidae" (Asian lesser apes: gibbons and siamangs), the "Pongidae", and the "Hominidae". The emergence of hominoids. Hominoids are Late Miocene (15-5 million years ago) primates that share a small number of postcranial features with living apes and humans: When is a hominoid also a hominid? When we say that "Sahelanthropus tchadensis" is the earliest hominid, we mean that it is the oldest fossil that is classified with humans in the family "Hominidae". The rationale for including "Sahelanthropus tchadensis" in the "Hominidae" is based on similarities in shared derived characters that distinguish humans from other living primates.
There are three categories of traits that separate hominids from contemporary apes: To be classified as a hominid, a Late Miocene primate (hominoid) must display at least some of these characteristics. "Sahelanthropus tchadensis" is bipedal, and shares many dental features with modern humans. However, the brain of "Sahelanthropus tchadensis" was no bigger than that of contemporary chimpanzees. As a consequence, this fossil is included in the same family (Hominidae) as modern humans, but not in the same genus. Traits defining early "Homo". Early Homo (e.g., "Homo habilis") is distinctly different from any of the earliest hominids, including the australopithecines, and similar to us in the following ways:
1,933
Introduction to Paleoanthropology/Hominids Early Behavior. One of the most important and intriguing questions in human evolution is about the diet of our earliest ancestors. The presence of primitive stone tools in the fossil record tells us that 2.5 million years ago, early hominids ("A. garhi") were using stone implements to cut the flesh off the bones of large animals that they had either hunted or whose carcasses they had scavenged. Earlier than 2.5 million years ago, however, we know very little about the foods that the early hominids ate, and the role that meat played in their diet. This is due to lack of direct evidence. Nevertheless, paleoanthropologists and archaeologists have tried to answer these questions indirectly using a number of techniques. What does chimpanzee food-consuming behavior suggest about early hominid behavior? Meat consuming strategy. Our earliest ancestors and chimpanzees share a common ancestor (around 5-7 million years ago). Therefore, understanding chimpanzee hunting behavior and ecology may tell us a great deal about the behavior and ecology of those earliest hominids. In the early 1960s, when Jane Goodall began her research on chimpanzees in Gombe National Park (Tanzania), it was thought that chimpanzees were herbivores. In fact, when Goodall first reported meat hunting by chimpanzees, many people were extremely sceptical. Today, hunting by chimpanzees at Gombe and other locations in Africa has been well documented. We now know that each year chimpanzees may kill and eat more than 150 small and medium-sized animals, such as monkeys (red colobus monkeys, their favorite prey), but also wild pigs and small antelopes. Did early hominids hunt and eat small and medium-sized animals? It is quite possible that they did. We know that colobus-like monkeys inhabited the woodlands and riverside gallery forest in which early hominids lived 3-5 Myrs ago. There were also small animals and the young of larger animals to catch opportunistically on the ground. Many researchers now believe that the carcasses of dead animals were an important source of meat for early hominids once they had stone tools to use (after 2.5 million years ago) for removing the flesh from the carcass. Wild chimpanzees show little interest in dead animals as a food source, so scavenging may have evolved as an important mode of getting food when hominids began to make and use tools for getting at meat. Before this time, it seems likely that earlier hominids were hunting small mammals as chimpanzees do today and that the role that hunting played in the early hominids' social lives was probably as complex and political as it is in the social lives of chimpanzees. When we ask when meat became an important part of the human diet, we therefore must look well before the evolutionary split between apes and humans in our own family tree. Nut cracking. Nut cracking behavior is another chimpanzee activity that can partially reflect early hominin behavior. From Jan. 2006 to May 2006, Susana Carvalho and her colleagues conducted a series of observations of several chimpanzee groups in Bossou and Diecké in Guinea, western Africa. Both direct and indirect observation approaches were applied in outdoor-laboratory and wild settings. The results of this research show three resource-exploitation strategies that chimpanzees apply when using lithic material as hammers and anvils to crack nuts: 1) Maximum optimization of time and energy (choosing the closest spot to process nuts).
2) Transport of nuts to a different site or transport of the raw materials for tools to the food. 3) Social strategies, such as transporting tools and food to a more distant spot. This behavior might relate to sharing of space and resources when various individuals occupy the same area. The results demonstrate that chimpanzees share similar food-exploitation strategies with early hominins, which indicates the potential of primate archaeology for understanding early hominin behavior. What do tooth wear patterns suggest about early hominid behavior? Bones and teeth in the living person are very plastic and respond to mechanical stimuli over the course of an individual's lifetime. We know, for example, that food consistency (hard vs. soft) has a strong impact on the masticatory (chewing) system (muscles and teeth). Bones and teeth in the living person are therefore tissues that are remarkably sensitive to the environment. As such, human remains from archaeological sites offer us a retrospective biological picture of the past that is rarely available from other lines of evidence. Also, new technological advances developed in the past ten years or so now make it possible to reconstruct and interpret in amazing detail the physical activities and adaptations of hominids in diverse environmental settings. Some types of foods are more difficult to process than others, and primates tend to specialize in different kinds of diets. Most living primates show three basic dietary adaptations: Many primates, such as humans, show a combination of these patterns and are called omnivores, which in a few primates includes eating meat. The ingestion both of leaves and of insects requires that the leaves and the insect skeletons be broken up and chopped into small pieces. The molars of folivores and insectivores are characterized by the development of shearing crests on the molars that function to cut food into small pieces. Insectivores' molars are further characterized by high, pointed cusps that are capable of puncturing the outside skeleton of insects. Frugivores, on the other hand, have molar teeth with low, rounded cusps; their molars have few crests and are characterized by broad, flat basins for crushing the food. In the 1950s, John Robinson developed what came to be known as the dietary hypothesis. According to this theory there were fundamentally two kinds of hominids in the Plio-Pleistocene. One was the "robust" australopithecine (called Paranthropus) that was specialized for herbivory, and the other was the "gracile" australopithecine that was an omnivore/carnivore. By this theory the former became extinct while the latter evolved into "Homo". Like most generalizations about human evolution, Robinson's dietary hypothesis was controversial, but it stood as a useful model for decades. Detailed analyses of the tooth surface under the microscope appeared to confirm that the diet of "A. robustus" consisted primarily of plants, particularly small and hard objects like seeds, nuts and tubers. The relative sizes and shapes of the teeth of both "A. afarensis" and "A. africanus" indicated as well a mostly mixed vegetable diet of fruits and leaves. By contrast, early "Homo" was more omnivorous. But as new fossil hominid species were discovered in East Africa and new analyses were done on the old fossils, the usefulness of the model diminished. For instance, there is a new understanding that the two South African species ("A. africanus" and
robustus") are very similar when compared to other early hominid species. They share a suite of traits that are absent in earlier species of Australopithecus, including expanded cheek teeth and faces remodeled to withstand forces generated from heavy chewing. What do isotopic studies suggest about early hominid behavior? Omnivory can be suggested by studies of the stable carbon isotopes and strontium(Sr)-calcium(Ca) ratios in early hominid teeth and bones. For instance, a recent study of carbon isotope (13C) in the tooth enamel of a sample of "A. africanus" indicated that members of this species ate either tropical grasses or the flesh of animals that ate tropical grasses or both. But because the dentition analyzed by these researchers lacked the tooth wear patterns indicative of grass-eating, the carbon may have come from grass-eating animals. This is therefore a possible evidence that the australopithecines either hunted small animals or scavenged the carcasses of larger ones. There is new evidence also that "A. robustus" might not be a herbivore. Isotopic studies reveal chemical signals associated with animals whose diet is omnivorous and not specialized herbivory. The results from 13C analysis indicate that "A. robustus" either ate grass and grass seeds or ate animals that ate grasses. Since the Sr/Ca ratios suggest that "A. robustus" did not eat grasses, these data indicate that "A. robustus" was at least partially carnivorous. Summary. Much of the evidence for the earliest hominids ("Sahelanthropus tchadensis", "Orrorin tugenensis", "Ardipithecus ramidus") is not yet available. "Australopithecus anamensis" shows the first indications of thicker molar enamel in a hominid. This suggests that "A. anamensis" might have been the first hominid to be able to effectively withstand the functional demands of hard and perhaps abrasive objects in its diet, whether or not such items were frequently eaten or were only an important occasional food source. Australopithecus afarensis was similar to "A. anamensis" in relative tooth sizes and probable enamel thickness, yet it did show a large increase in mandibular robusticity. Hard and perhaps abrasive foods may have become then even more important components of the diet of "A. afarensis". "Australopithecus africanus" shows yet another increase in postcanine tooth size, which in itself would suggest an increase in the sizes and abrasiveness of foods. However, its molar microwear does not show the degree of pitting one might expect from a classic hard-object feeder. Thus, even "A. africanus" has evidently not begun to specialize in hard objects, but rather has emphasized dietary breadth (omnivore), as evidenced by isotopic studies. Subsequent "robust" australopithecines do show hard-object microwear and craniodental specializations, suggesting a substantial departude in feeding adaptive strategies early in the Pleistocene. Yet, recent chemical and anatomical studies on "A. robustus" suggest that this species may have consumed some animal protein. In fact, they might have specialized on tough plant material during the dry season but had a more diverse diet during the rest of the year.
2,558
SNFO Flight Planning/Foreword. Course Objective. To provide the SNFO/SNAV with a level of instrument navigation flight planning knowledge prerequisite to his/her learning, understanding, and performance in flight. Specific Instructional Objective. Upon completion of this course, the student will demonstrate his/her knowledge of instrument navigation flight planning by completing the end-of-course examination with a minimum of 80% accuracy and successful completion of all 2B47 training events.
110
JLPT Guide/About JLPT. The Japanese Language Proficiency Test (JLPT) (日本語能力試験 "nihongo nōryoku shiken") was created in 1984 in response to the increasing demand of students of the Japanese language to certify their proficiency. There are five levels: N5 (easiest) to N1 (hardest). Application. The JLPT tests are held twice a year. In December all five levels can be taken, but in July only the most difficult levels, N1 and N2, can be taken. The test date for winter is in December and the application period is usually September-October. If you are interested in applying for the JLPT test, you can find your nearest test center online. Also ask about the application period and application fee as they differ in each country. The JLPT is offered in approximately 85 countries. In Japan, the test is administered by the Japan Educational Exchanges and Services (JEES) (財団法人 日本国際教育支援協会 "zaidan hōjin nihon kokusai kyōiku shien kyōkai"), while the Japan Foundation (独立行政法人 国際交流基金 "dokuritsu gyōsei hōjin kokusai kōryū kikin") administers overseas tests. Criteria. Important note to students wishing to study in Japanese universities: The JLPT certification was a requirement for entry into Japanese universities until 2003. The Examination for Japanese University Admission for International Students (EJU) replaced the JLPT as the requirement for university entry in Japan. More on this will be written on a separate page. Test sections. To pass the test, the test taker must be over the minimum overall score and also over the minimum score in each section. N1, N2 and N3 have three scoring sections, while N4 and N5 have two scoring sections. Above tables from Japanese Language Proficiency Test. Estimated Study Time. Study hour comparison data published by the Japanese Language Education Center: Old test levels. In 2010, the test changed from four levels to five. Additionally, the criteria for passing the JLPT were changed, requiring a passing mark in all sections of the test, not just an overall passing mark. Some websites have not been updated to reflect this change, but the material on them for N5, N4, and N1 can be considered fairly reliable, since those tests are much the same as before.
583
Introduction to Paleoanthropology/Oldowan. The Olduvai Gorge. Two million years ago, Olduvai Gorge (Tanzania) was a lake. Its shores were inhabited not only by numerous wild animals but also by groups of hominids, including Paranthropus boisei and Homo habilis, as well as the later Homo erectus. The gorge, therefore, is a great source of Palaeolithic remains as well as a key site providing evidence of human evolutionary development. This is one of the main reasons that drew Louis and Mary Leakey back to Olduvai Gorge year after year. Certain details of the lives of the creatures who lived at Olduvai have been reconstructed from the hundreds of thousands of bits of material that they left behind: various stones and bones. None of these things alone would mean much, but when all are analyzed and fitted together, patterns begin to emerge. Among the finds are assemblages of stone tools dated to between 2.2 Myrs and 620,000 years ago. These were found little disturbed from when they were left, together with the bones of now-extinct animals that provided food. Mary Leakey found that there were two stoneworking traditions at Olduvai. One, the Acheulean industry, appears first in Bed II and lasts until Bed IV. The other, the Oldowan, is older and more primitive, and occurs throughout Bed I, as well as at other African sites in Ethiopia, Kenya and Tanzania. Subsistence patterns. Meat-eating. Until about 2.5 million years ago, early hominids lived on foods that could be picked or gathered: plants, fruits, invertebrate animals such as ants and termites, and even occasional pieces of meat (perhaps hunted in the same manner as chimpanzees do today). After 2.5 million years ago, meat seems to become more important in early hominids' diet. Evolving hominids' new interest in meat is of major importance in paleoanthropology. Out on the savanna, it is hard for a primate with a digestive system like that of humans to satisfy its amino-acid requirements from available plant resources. Moreover, failure to do so has serious consequences: growth depression, malnutrition, and ultimately death. The most readily available plant proteins would have been those in leaves and legumes, but these are hard for primates like us to digest unless they are cooked. In contrast, animal foods (ants, termites, eggs) not only are easily digestible, but they provide high-quality proteins that contain all the essential amino acids. All things considered, we should not be surprised if our own ancestors solved their "protein problem" in somewhat the same way that chimps on the savanna do today. Increased meat consumption on the part of early hominids did more than merely ensure an adequate intake of essential amino acids. Animals that live on plant foods must eat large quantities of vegetation, and obtaining such foods consumes much of their time. Meat eaters, by contrast, have no need to eat so much or so often. Consequently, meat-eating hominids may have had more leisure time available to explore and manipulate their environment, and to lie around and play. Such activities probably were a stimulus to hominid brain development. The importance of meat eating for early hominid brain development is suggested by the size of their brains: Hunters or scavengers? The archaeological evidence indicates that Oldowan hominids ate meat. They processed the carcasses of large animals, and we assume that they ate the meat they cut from the bones.
Meat-eating animals can acquire meat in several different ways: There has been considerable dispute among anthropologists about how early hominids acquired meat. Some have argued that hunting, division of labor, use of home bases and food sharing emerged very early in hominid history. Others think the Oldowan hominids would have been unable to capture large mammals because they were too small and too poorly armed. Recent zooarchaeological evidence suggests that early hominids (after 2.5 million years ago) may have acquired meat mainly by scavenging, and perhaps occasionally by hunting. If hominids obtained most of their meat from scavenging, we would expect to find cut marks mainly on bones left at kill sites by predators (lions, hyenas). If hominids obtained most of their meat from their own kills, we would expect to find cut marks mainly on large bones, like limb bones. However, at Olduvai Gorge, cut marks appear on both kinds of bones: those usually left by scavengers and those normally monopolized by hunters. The evidence from tool marks on bones indicates that humans sometimes acquired meaty bones before, and sometimes after, other predators had gnawed on them. Settlement patterns. During decades of work at Olduvai Gorge, Mary and Louis Leakey and their team laid bare numerous ancient hominid sites. Sometimes the sites were simply spots where the bones of one or more hominid species were discovered. Often, however, hominid remains were found in association with concentrations of animal bones, stone tools, and debris. At one spot, in Bed I, the bones of an elephant lay in close association with more than 200 stone tools. Apparently, the animal was butchered here; there are no indications of any other activity. At another spot (DK-I Site), on an occupation surface 1.8 million years old, basalt stones were found grouped in small heaps forming a circle. The interior of the circle was practically empty, while numerous tools and food debris littered the ground outside, right up to the edge of the circle. Earliest stone industry. Principles. Use of specially made stone tools appears to have arisen as a result of the need for implements to butcher and prepare meat, because hominid teeth were inadequate for the task. Transformation of a lump of stone into a "chopper", "knife" or "scraper" is a far cry from what a chimpanzee does when it transforms a stick into a termite probe. The stone tool is quite unlike the lump of stone. Thus, the toolmaker must have in mind an abstract idea of the tool to be made, as well as a specific set of steps that will accomplish the transformation from raw material to finished product. Furthermore, only certain kinds of stone have the flaking properties that will allow the transformation to take place, and the toolmaker must know about these. Therefore, there are two main components to remember: Evidence. The oldest Lower Palaeolithic tools (2.0-1.5 million years ago) found at Olduvai Gorge ("Homo habilis") are in the Oldowan tool tradition. Nevertheless, older materials (2.6-2.5 million years ago) have recently been recorded from sites located in Ethiopia (Hadar, Omo, Gona, Bouri - "Australopithecus garhi") and Kenya (Lokalalei). Because of a lack of remarkable differences in the techniques and styles of artifact manufacture for over 1 million years (2.6-1.5 million years ago), a technological stasis was suggested for the Oldowan Industry.
The makers of the earliest stone artifacts travelled some distances to acquire their raw materials, implying greater mobility, long-term planning and foresight not recognized earlier. Oldowan stone tools consist of all-purpose generalized chopping tools and flakes. Although these artifacts are very crude, it is clear that they have been deliberately modified. The technique of manufacture used was percussion. The main intent of Oldowan tool makers was the production of cores and flakes with sharp edges. These simple but effective Oldowan choppers and flakes made possible the addition of meat to the diet on a regular basis, because people could now butcher meat, skin any animal, and split bones for marrow. Overall, the hominids responsible for making these stone tools understood the flaking properties of the raw materials available; they selected appropriate cobbles for making artifacts; and they were as competent as later hominids in their knapping abilities. Finally, the manufacture of stone tools must have played a major role in the evolution of the human brain, first by putting a premium on manual dexterity and fine manipulation over mere power in the use of the hands. This in turn put a premium on the use of the hands. Early hominid behavior. During the 1970s and 1980s many workers, including Mary Leakey and Glynn Isaac, used an analogy from modern hunter-gatherer cultures to interpret early hominid behavior of the Oldowan period (e.g., the Bed I sites at Olduvai Gorge). They concluded that many of the sites were probably camps, often called "home bases", where group members gathered at the end of the day to prepare and share food, to socialize, to make tools, and to sleep. The circular concentration of stones at the DK-I site was interpreted as the remains of a shelter or windbreak similar to those still made by some African foraging cultures. Other concentrations of bones and stones were thought to be the remains of living sites originally ringed by thorn hedges for defense against predators. Later, other humanlike elements were added to the mix, and early Homo was described as showing a sexual division of labor [females gathering plant foods and males hunting for meat] and some of the Olduvai occupation levels were interpreted as butchering sites. Views on the lifestyle of early "Homo" began to change in the late 1980s, as many scholars became convinced that these hominids had been overly humanized. Researchers began to show that early "Homo" shared the Olduvai sites with a variety of large carnivores, thus weakening the idea that these were the safe, social home bases originally envisioned. Studies of bone accumulations suggested that "H. habilis" was mainly a scavenger and not a full-fledged hunter. The Bed I sites were interpreted as no more than "scavenging stations" where early "Homo" brought portions of large animal carcasses for consumption. Another recent suggestion is that the Olduvai Bed I sites mainly represent places where rocks were cached for the handy processing of animal foods obtained nearby. Oldowan toolmakers brought stones from sources several kilometers away and cached them at a number of locations within the group's territory. Stone tools could have been made at the cache sites for use elsewhere, but more frequently portions of carcasses were transported to the toolmaking site for processing. Summary.
Current interpretations of the subsistence, settlement, and tool-use patterns of early hominids of the Oldowan period are more conservative than they have been in the past. Based upon these revised interpretations, the Oldowan toolmakers have recently been dehumanized. Although much more advanced than apes, they were still probably quite different from modern people with regard to their living arrangements, their methods of food procurement and its division between the sexes, and the sharing of food. The label human has to await the appearance of the next representative of the hominid family: "Homo erectus".
2,563
Introduction to Paleoanthropology/Acheulean. In 1866, the German biologist Ernst Haeckel proposed the generic name "Pithecanthropus" for a hypothetical missing link between apes and humans. In the late 19th century, the Dutch anatomist Eugene Dubois was on the Indonesian island of Java, searching for human fossils. In the fall of 1891, he encountered the now famous Trinil skull cap. The following year his crew uncovered a femur, a left thigh bone, very similar to that of modern humans. He was convinced he had discovered an erect, apelike transitional form between apes and humans. In 1894, he decided to call his fossil species "Pithecanthropus erectus". Dubois found no additional human fossils and he returned to the Netherlands in 1895. Others explored the same deposits on the island of Java, but new human remains appeared only between 1931 and 1933. Dubois's claim for a primitive human species was further reinforced by nearly simultaneous discoveries from near Beijing, China (at the site of Zhoukoudian). Between 1921 and 1937, various scholars who undertook fieldwork in a collapsed cave (Locality 1) recovered many fragments of mandibles and skulls. One of them, Davidson Black, a Canadian anatomist, created a new genus and species for these fossils: "Sinanthropus pekinensis" ("Peking Chinese man"). In 1939, after comparison of the fossils in China and Java, some scholars concluded that they were extremely similar. They even proposed that Pithecanthropus and Sinanthropus were only subspecies of a single species, "Homo erectus", though they continued to use the original generic names as labels. From 1950 to 1964, various influential authorities in paleoanthropology agreed that Pithecanthropus and Sinanthropus were too similar to be placed in two different genera; and, by the late 1960s, the concept of Homo erectus was widely accepted. To the East Asian inventory of "H. erectus", many authorities would add European and especially African specimens that resembled the Asian fossil forms. In 1976, a team led by Richard Leakey discovered around Lake Turkana (Kenya) an amazingly well-preserved and complete skeleton of a "H. erectus" boy, called the Turkana Boy (WT-15000). In the 1980s and 1990s: Site distribution. Africa. Unlike the australopithecines and even "Homo habilis", "Homo ergaster/erectus" was distributed throughout Africa: Traditionally, Homo erectus has been credited as being the prehistoric pioneer, a species that left Africa about 1 million years ago and began to disperse throughout Eurasia. But several important discoveries in the 1990s have reopened the question of when our ancestors first journeyed from Africa to other parts of the globe. Recent evidence now indicates that emigrant erectus made a much earlier departure from Africa. Israel. Ubeidiyeh. Gesher Benot Yaaqov. Republic of Georgia. In 1991, archaeologists excavating a grain-storage pit in the medieval town of Dmanisi uncovered the lower jaw of an adult erectus, along with animal bones and Oldowan stone tools. Different dating techniques (paleomagnetism, potassium-argon) gave a date of 1.8 million years ago, which clearly antedates that of Ubeidiyeh. The evidence from Dmanisi now also suggests a true migration from Africa. China. Longgupo Cave. Zhoukoudian. Java. In 1994, new dates were reported from the sites of Modjokerto and Sangiran, where "H. erectus" had been found in 1891. The geological age of these hominid remains had been estimated at about 1 million years.
Recent redating of these materials gave dates of 1.8 million years ago for the Modjokerto site and 1.6 million years ago for the Sangiran site. These dates remain striking due to the absence of any other firm evidence for early humans in East Asia prior to 1 million years ago. Yet the individuals from Modjokerto and Sangiran would certainly have traveled through this part of Asia to reach Java. Europe. Did Homo ergaster/erectus only head east into Asia, altogether bypassing Europe? Many paleoanthropologists believed until recently that no early humans entered Europe until 500,000 years ago. But the discovery of new fossils from Spain (Atapuerca, Orce) and Italy (Ceprano) established a more ancient arrival for early humans in Europe. At Atapuerca, hundreds of flaked stones and roughly eighty human bone fragments were collected from sediments that antedate 780,000 years ago, and an age of about 800,000 years ago is the current best estimate. The artifacts comprise crudely flaked pebbles and simple flakes. The hominid fossils - teeth, jaws, skull fragments - come from several individuals of a new species named Homo antecessor. These craniofacial fragments are striking for derived features that differentiate them from Homo ergaster/erectus, but do not ally them specifically with either "H. neanderthalensis" or "H. sapiens".
1,315
Financial Derivatives. This wikibook is devoted to detailing the methods for trading and evaluating financial derivatives, such as futures and options. This wikibook assumes a strong grasp of differential equations and some understanding of statistics. The ideas introduced will "not" make you into a good trader. They will merely provide you with some of the tools that derivatives traders use. Detailed contents list. References. Fischer Black, "The pricing of commodity contracts", The Journal of Financial Economics, 3 (1976), pp. 167–179. Stephen A. Ross, "Neoclassical Finance (Princeton Lectures in Finance)", Princeton University Press, 2004.
166
Linguistics/Phonetics. Introduction. If you have ever heard a person learning English as a second language say, "I want to go to the bitch" (meaning "I want to go to the beach"), you might understand the importance of mastering phonetics when learning new languages. As such an example illustrates, few people in our society give conscious thought to the sounds they produce and the subtle differences they possess. It is unfortunate, but hardly surprising, that few language-learning books use technical terminology to describe foreign sounds. Language learners often hear unhelpful advice such as "It is pronounced more "crisply"". As scientists, we cannot be satisfied with this state of affairs. If we can classify the sounds of language, we are one step closer to understanding the gestalt of human communication. The study of the production and perception of speech sounds is a branch of linguistics called "phonetics", studied by "phoneticians". The study of how languages treat these sounds is called "phonology", covered in the next chapter. While these two fields have considerable overlap, it should soon become clear that they differ in important ways. Phonetics is the systematic study of the human ability to make and hear sounds which use the vocal organs of speech, especially for producing oral language. It is usually divided into the three branches of (1) articulatory, (2) acoustic and (3) auditory phonetics. It is also traditionally differentiated from (though overlaps with) the field of phonology, which is the formal study of the sound systems (phonologies) of languages, especially the universal properties displayed in ALL languages, such as the psycholinguistic aspects of phonological processing and acquisition. One of the most important tools of phonetics and phonology is a special alphabet called the "International Phonetic Alphabet" or "IPA", a standardized representation of the sounds used in human language. In this chapter, you will learn what sounds humans use in their languages, and how linguists represent those sounds in IPA. Reading and writing IPA will help you understand what's really happening when people speak. Phonetic transcription and the IPA. It is often convenient to split up speech in a language into "segments", which are defined as identifiable units in the flow of speech. In many ways this discretization of speech is somewhat fictional, in that both articulation and the acoustic signal of speech are almost entirely continuous. Additionally, attempts to classify segments by nature must ignore some level of detail, as no two segments produced at separate times are ever identical. Even so, segmentation remains a crucial tool in almost all aspects of linguistics. In phonetics the most basic segments are called "phones", which may be defined as units in speech which can be distinguished acoustically or articulatorily. This definition allows for different degrees of "wideness". In many contexts phones may be thought of as acoustic or articulatory "targets" which may or may not be fully reached in actual speech. Another, more commonly used segment is the "phoneme", which will be defined more precisely in the next chapter. It is important to keep in mind that while the segment may (or may not) be a reality of phonology, it is in no way an actual physical part of realized speech in the vocal tract. Realized speech is highly co-articulated, displays movement and spreads aspects of sounds over entire syllables and words. 
It is convenient to think of speech as a succession of segments (which may or may not coincide closely with ideal segments) in order to capture it for discussion in written discourse, but actual phonetic analysis of speech confounds such a model. It should be pointed out, however, that if we wish to set down a representation of dynamic, complex speech into static writing, segmental constructs are very convenient fictions to indicate what we are trying to set down. Similarly, syllables and words are convenient structures which capture the prosodic structure of a language, and are often notated in written form, but are not physical realities. The International Phonetic Alphabet (IPA) is a system of phonetic notation which provides a standardized system of transcribing phonetic segments up to a certain degree of detail. It may be represented visually using charts, which may be found in full in Appendix A. We will leave a more detailed description of the IPA to the end of this chapter, but for now just be aware that text in square brackets [] is phonetic transcription in IPA. We will reproduce simplified charts of different subsets of the IPA here as they are explained. Variations of IPA such as the well established Americanist phonetic notation and a new, simplified international version called SaypU are available, but IPA is more comprehensive and so preferred for educational use, despite its complexity. To understand the IPA's taxonomy of phones, it is important to consider articulatory, acoustic, and auditory phonetics. Articulatory phonetics. Articulatory phonetics is concerned with how the sounds of language are physically produced by the vocal apparatus. The units articulatory phonetics deals with are known as "gestures", which are abstract characterizations of articulatory events. Speaking in terms of articulation, the sounds that we utter to make language can be split into two different types: consonants and vowels. For the purposes of articulatory phonetics, consonant sounds are typically characterized as sounds that have constricted or closed configurations of the vocal tract. Vowels, on the other hand, are characterized in articulatory terms as having relatively little constriction; that is, an open configuration of the vocal tract. Vowels carry much of the pitch of speech and can be held different durations, such as a half a beat, one beat, two beats, three beats, etc. of speech rhythm. Consonants, on the other hand, do not carry the prosodic pitch (especially if devoiced and not nasalized) and do not display the potential for the durations that vowels can have. Linguists may also speak of 'semi-vowels' or 'semi-consonants' (often used as synonymous terms). For example, a sound such as [w] phonetically seems more like a vowel (with relative lack of constriction or closure of the vocal tract) but, phonologically speaking, behaves as a consonant in that it always appears before a vowel sound at the beginning (onset) of a syllable. Consonants. Phoneticians generally characterize consonants as being distinguished by settings of the independent variables "place of articulation" (POA) and "manner of articulation" (MOA). In layman's terminology, POA is "where" the consonant is produced, while MOA is "how" the consonant is produced. The following are descriptions of the different POAs: Other POAs are also possible, but will be described in more detail later on. 
MOA involves a number of different variables which may vary independently: Knowing this information is enough to construct a simplified IPA chart of the consonants of English. As is conventional, MOA is organized in rows, and POA columns. Voicing pairs occur in the same cells; the ones in bold are voiced while the rest are voiceless. Vowels. Vowels are very different from consonants, but our method of decomposing sounds into sets of features works equally well. Vowels can essentially be viewed as being combinations of three variables: Back vowels tend to be rounded, and front vowels unrounded, for reasons which will be covered later in this chapter. However, this tendency is not universal. For instance, the vowel in the French word "bœuf" is what would result from the vowel of the English word "bet" being pronounced with rounding. Some East and Southeast Asian languages possess unrounded back vowels, which are difficult to describe without a sound sample. The "cardinal vowels" are a set of idealized vowels used by phoneticians as a base of reference. The IPA orders the vowels in a similar way to the consonants, separating the three main distinguishing variables into different dimensions. The "vowel trapezoid" may be thought of as a rough diagram of the mouth, with the left being the front, the right the back, and the vertical direction representing height in the mouth. Each vowel is positioned thusly based on height and backness. Rounding isn't indicated by location, but when pairs of vowels sharing the same height and backness occur next to each other, the left member is always unrounded, and the right a rounded vowel. Otherwise, just use the general heuristic that rounded vowels are usually back. The following is a simplified version of the IPA vowel chart: Many of these vowels will be familiar from (General American) English. The following are rough examples: Note, however, that in phonetics we can describe any segment in arbitrarily fine detail. As such, when we say that, say, the vowel in "cat" "is" [æ], we are sacrificing precision. Some of these vowels have no English equivalent, but may be familiar from foreign languages. [a] represents the sound in Spanish "h"a"blo", the front rounded [œ] vowel is that in French "b"œu"f", [y] is that of German "h"ü"ten", and [ɯ] (perhaps the most exotic to most English speakers) is found as the first vowel of European Portuguese "p"e"gar". Other types of phonetics. Acoustic phonetics. Acoustic phonetics deals with the physical medium of speech -- that is, how speech manipulates sound waves. Sound is composed of waves of high- and low-pressure areas which propagate through air. The most basic way to view sound is as a "wave function". This plots the pressure measured by the sound-recording device against time, corresponding closely to the physical nature of sound. Loudness may be found by looking at the "amplitude" of the sound at a given time. However, this approach is fairly limited. Humans, in fact, don't process sound using this raw data. The ear analyzes sound by decomposing it into its constituent frequencies, a mathematical algorithm known as the "Fourier transform". As a sound is produced in the oral tract, the column of air in the tract serves as a "harmonic oscillator", oscillating at numerous frequencies simultaneously. Some of the frequencies of oscillation are at higher amplitudes than others, a property called "resonance". 
The "resonant frequencies" (frequencies with relatively high resonance) of the vocal tract are known in phonetics as "formants". The formants in a speech sound are numbered by their frequency: f1 (pronounced "eff-one") has the lowest frequency, followed by f2, f3, etc. The analysis of formants turns out to be key to acoustic phonetics, as any change in the shape of the vocal cavity changes which resonances are dominant. There are two basic ways to analyze the formants of a speech signal. Firstly, at any given time the sound contains a mixture of different frequencies of sound. The relative amplitudes (strengths) of different frequencies at a particular time may be shown as a "frequency spectrum". As you can see on the right, frequency is plotted against amplitude, and formants show up as peaks. Another way to view formants is by using a "spectrogram". This plots time against frequency, with amplitude represented by darkness. Formants show up as dark bands, and their movement may be tracked through time. Given the development of modern technology, acoustic analysis is now accessible to anyone with a computer and a microphone. Auditory phonetics. Auditory phonetics is a branch of phonetics concerned with the hearing of speech sounds and with speech perception. As a learner. Children learn the sounds made in their native language within their first year, and after this it becomes difficult to produce sounds foreign to their native languages. Even familiar segments are produced without reflection on the manner of their production. As such, it is highly recommended that you practice pronouncing isolated segments, and listen repeatedly to examples of those which seem exotic to you. Soon, you will be noticing yourself becoming more observant as to the phones you hear in your everyday life, and will be less puzzled by unfamiliar segments in other languages. Workbook section. Exercise 1: English Places of Articulation. The following English words were tagged with the place of articulation of their first segment, but the tags have been scrambled. Match each word with the correct POA: Exercise 2: Brezhoneg. The following text in the Breton language has been translated into English: Ar brezhoneg a zo ur yezh predenek eus skourr ar yezhoù keltiek komzet en darn vrasañ eus Enez Vreizh abaoe an Henamzer betek an aloubadegoù Saoz. Tost eo tre d'ar c'herneveureg (Cornwall:Kernow) ha d'ar c'hembraeg (Wales:Cymru) ha nebeutoc'h d'ar yezhoù keltiek all a zo c'hoazh bev : Skoseg, Gouezeleg ha Manaveg (Scotland, Ireland, Isle of Man) (ar skourr gouezelek). Breton is a Brythonic language belonging to the branch of Celtic spoken over most of Britain from prehistoric times until the Saxon invasions. It is closely related to Cornish (Cornwall:Kernow) and Welsh (Wales:Cymru) and more distantly to the other surviving Celtic languages, Scots, Irish and Manx Gaelic (Scotland, Ireland, Isle of Man) (The Goidelic group). You have been commissioned to create a phonetic transcription of this text using the following recording of a native speaker: Write the transcription, being as accurate as you can. (Hint: The Breton rhotic is similar to that of French.)
3,207
Cryptography/Public Key Overview. We briefly mentioned ../Asymmetric Ciphers/ earlier in this book. In this and following chapters we will describe how they work in much more detail. The discovery of public key cryptography revolutionized the practice of cryptography in the 1970s. In public key cryptography, the key used to encrypt a message is not the same as the key used to decrypt it. This requires an asymmetric key algorithm. (All previous cryptographic algorithms and cryptosystems, now retroactively categorized as "symmetric key cryptography" or "shared key cryptography", always use the same key to encrypt a message and later to decrypt that message). Public key cryptography is cryptography in which the key exchange process between person A and person B does not need to be kept secret. Private keys are never actually exchanged. In fact, Person A sends information (possibly about a session key) to Person B so that it is interpretable only by Person B. An intruder cannot discover the meaning of the exchange because Person B has a piece of information that the intruder does not. Person A did not access Person B's secret information (the private key) either; Person A only made indirect use of it via a "public" key. The public key is formed from the private key by using a One-Way Function. The concepts behind public key cryptography are best expressed by a simple puzzle. "Alice wants to send a trinket to Bob without an intruder stealing it. Each person has a lock and a key."
A Non-Public Key Solution
This solution, although the most intuitive, suffers from a major problem. The intruder could monitor the boxes and copy the key as it is sent. If an intruder has Alice's key, the trinket or anything else will be stolen in transit. To some the puzzle seems impossible, but those who understand public key cryptography solve it easily.
Public Key Solution
The puzzle's trick is double locking the box. This back-and-forth "double lock" process is used in many asymmetric key algorithms, such as ElGamal encryption and Diffie–Hellman key exchange, but not all of them. This is the double lock principle, but it is not public key cryptography, as both keys are secret. In public key cryptography one key is public, the other is secret. Nobody who knows only the public key is able to decipher a message encrypted with that public key. Only the secret key can decipher a message encrypted with the public key. A real-world analogy to public keys would be the padlock. The padlock can be easily closed, but it is much harder to do the reverse, namely opening. It is not impossible, but it requires much more effort to open it than to close it, assuming you don't have the (private) key. Alice could send Bob an open padlock by mail (the equivalent of the public key). Bob then puts a message for Alice into a box and locks the box with the padlock. Now, Bob sends the locked box back to Alice and Alice opens it with her private key. Note that this approach is susceptible to man-in-the-middle attacks. If Charles intercepts the mail with Alice's padlock and replaces it with his own padlock, Bob will lock the box with the wrong padlock and Charles will be able to intercept the answer. Charles could then even lock the box again with Alice's padlock and forward the box to Alice. That way, she will never notice that the message was intercepted. This illustrates that it is very important to obtain public keys (the padlocks) from a trusted source. That's what certificates are for.
They come along with the public keys and basically say something like 'I, Microsoft, hereby confirm that this padlock belongs to Alice', and are signed using secure digital signatures. So someone (Bob) is able to securely send encrypted data to Alice, provided Alice has made her key public. Bob is able to prove that he owns a secret key only by providing:
Something similar to the double lock principle is Merkle's puzzle, which is the ancestor of the Diffie–Hellman key exchange, which is itself a close cousin to the RSA public key system.
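To make the key-agreement idea concrete, here is a toy Diffie–Hellman exchange sketched in Python (not part of the original text). The modulus and generator are textbook-sized values chosen only for readability; real systems use very large primes or elliptic curves through a vetted cryptographic library, never hand-rolled arithmetic.
# Toy Diffie-Hellman key exchange, illustrating how two parties can agree on
# a shared secret over a public channel. The modulus p and generator g are
# deliberately tiny so the arithmetic is easy to follow; they provide no
# real security.
import secrets
p = 23   # a small prime modulus (illustration only)
g = 5    # a generator modulo p
# Each party keeps a random private exponent secret...
alice_private = secrets.randbelow(p - 2) + 1
bob_private = secrets.randbelow(p - 2) + 1
# ...and publishes only g raised to that exponent, modulo p.
alice_public = pow(g, alice_private, p)   # Alice's "open padlock"
bob_public = pow(g, bob_private, p)       # Bob's "open padlock"
# Each side combines the other's public value with its own private exponent.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)
assert alice_shared == bob_shared         # both now hold the same shared secret
print("Shared secret established:", alice_shared == bob_shared)
An eavesdropper who sees p, g and both public values would have to solve a discrete logarithm problem to recover the shared secret. Note, as the text above points out, that the exchange by itself does not authenticate either party, so without certificates it remains open to a man-in-the-middle attack.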
937
Introduction to Paleoanthropology/Hominids Acheulean. The hominids. African "Homo erectus": "Homo ergaster". "H. ergaster" existed between 1.8 million and 1.3 million years ago. Like "H. habilis", the face shows: Early "H. ergaster" specimens average about 900 cc, while late ones have an average of about 1100 cc. The skeleton is more robust than those of modern humans, implying greater strength. Body proportions vary: Study of the Turkana Boy skeleton indicates that H. ergaster may have been more efficient at walking than modern humans, whose skeletons have had to adapt to allow for the birth of larger-brained infants. "Homo habilis" and all the australopithecines are found only in Africa, but H. erectus/ergaster was wide-ranging, and has been found in Africa, Asia, and Europe. Asian "Homo erectus". Specimens of "H. erectus" from Eastern Asia differ morphologically from African specimens: As a consequence of these features, they are less like humans than the African forms of "H. erectus". Paleoanthropologists who study extinct populations are forced to decide whether there was one species or two based on morphological traits alone. They must ask whether eastern and western forms are as different from each other as typical species. If systematists finally agree that eastern and western populations of "H. erectus" are distinct species, then the eastern Asian form will keep the name "H. erectus". The western forms have been given a new name, "Homo ergaster" (meaning "work man"), which was first applied to a very old specimen from East Turkana in East Africa. "Homo georgicus". Specimens recovered recently exhibit characteristic "H. erectus" features: sagittal crest, marked constriction of the skull behind the eyes. But they are also extremely different in several ways, resembling "H. habilis": Some researchers propose that these fossils might represent a new species of Homo: "H. georgicus". "Homo antecessor". Named in 1997 from fossils (a juvenile specimen) found in Atapuerca (Spain). Dated to at least 780,000 years ago, which makes these fossils the oldest confirmed European hominids. The mid-facial area of antecessor seems very modern, but other parts of the skull (e.g., teeth, forehead and browridges) are much more primitive. The fossils were assigned to a new species on the grounds that they exhibit a previously unknown combination of traits: they are less derived in the Neanderthal direction than later mid-Quaternary European specimens assigned to Homo heidelbergensis. "Homo heidelbergensis". Archaic forms of Homo sapiens first appeared in Europe about 500,000 years ago (lasting until about 200,000 years ago) and are called Homo heidelbergensis. Found in various places in Europe, Africa and maybe Asia. This species covers a diverse group of skulls which have features of both Homo erectus and modern humans. Fossil features: The fossils could represent a population near the common ancestry of Neanderthals and modern humans. Footprints of H. heidelbergensis (the earliest known human footprints) were found in Italy in 2003. Phylogenic relationships. For almost three decades, paleoanthropologists have often divided the genus Homo among three successive species: In this view, each species was distinguished from its predecessor primarily by larger brain size and by details of cranio-facial morphology: The accumulating evidence of fossils has increasingly undermined a scenario based on three successive species or evolutionary stages. It now strongly favors a scheme that more explicitly recognizes the importance of branching in the evolution of Homo.
This new scheme continues to accept "H. habilis" as the ancestor for all later Homo. Its descendants at 1.8-1.7 million years ago may still be called H. erectus, but H. ergaster is now more widely accepted. By 600,000-500,000 years ago, "H. ergaster" had produced several lines leading to H. neanderthalensis in Europe and "H. sapiens" in Africa. About 600,000 years ago, both of these species shared a common ancestor to which the name H. heidelbergensis could be applied. "Out-of-Africa 1" model. Homo erectus in Asia would be as old as Homo ergaster in Africa. Do the new dates from Dmanisi and Java falsify the hypothesis of an African origin for "Homo erectus"? Not necessarily. If the species evolved just slightly earlier than the oldest African fossils (2.0-1.9 million years ago) and then immediately began its geographic spread, it could have reached Europe and Asia fairly quickly. But the "Out-of-Africa 1" migration is more complex. Conventional paleoanthropological wisdom holds that the first humans to leave Africa were tall, large-brained hominids ("Homo ergaster/erectus"). New fossils discovered in Georgia (Dmanisi) are forcing scholars to rethink that scenario completely. These Georgian hominids are far smaller and more primitive in both anatomy and technology than expected, leaving experts wondering not only why early humans first ventured out of Africa, but also how. Summary. "Homo ergaster" was the first hominid species whose anatomy fully justifies the label human:
1,386
Introduction to Paleoanthropology/Acheulean Technology. "Homo ergaster/erectus", the author of the Acheulean industry, enjoyed impressive longevity as a species and great geographic spread. We will review several cultural innovations and behavioral changes that might have contributed to the success of "H. ergaster/erectus": The Acheulean industrial complex. Stone tools. By the time "Homo ergaster/erectus" appeared, Oldowan choppers and flake tools had been in use for 800,000 years. For another 100,000 to 400,000 years, Oldowan tools continued to be the top-of-the-line implements for early "Homo ergaster/erectus". Between 1.7 and 1.4 million years ago, Africa witnessed a significant advance in stone tool technology: the development of the Acheulean industry. The Acheulean tool kit included: A biface reveals a cutting edge that has been flaked carefully on both sides to make it straighter and sharper than the primitive Oldowan chopper. The purpose of the two-sided, or bifacial, method was to change the shape of the core from essentially round to flattish, for only with a flat stone can one get a decent cutting edge. One technological improvement that permitted the more controlled working required to shape an Acheulean handax was the gradual implementation, during the Acheulean period, of different kinds of hammers. In earlier times, the toolmaker knocked flakes from the core with another piece of stone. The hard shock of rock on rock tended to leave deep, irregular scars and wavy cutting edges. But a wood or bone hammer, being softer, gave its user much greater control over flaking. Such implements left shallower, cleaner scars and produced sharper and straighter cutting edges. Acheulean handaxes and cleavers are generally interpreted as being implements for processing animal carcasses. Even though cleavers could have been used to chop and shape wood, their wear patterns are more suggestive of use on soft material, such as hides and meat. Acheulean tools represent an adaptation for habitual and systematic butchery, and especially the dismembering of large animal carcasses, as "Homo ergaster/erectus" experienced a strong dietary shift toward more meat consumption. Acheulean tools originated in Africa between 1.7 and 1.4 million years ago. They were then produced continuously throughout "Homo ergaster/erectus' " long African residency and beyond, finally disappearing about 200,000 years ago. Generally, Acheulean tools from sites clearly older than 400,000 to 500,000 years ago are attributed to Homo ergaster/erectus, even in the absence of confirming fossils. At several important Late Acheulean sites, however, the toolmakers' species identity remains ambiguous because the sites lack hominid fossils and they date to a period when "Homo erectus" and archaic "Homo sapiens" (e.g., "Homo heidelbergensis") overlapped in time. Other raw materials. Stone artifacts dominate the Paleolithic record because of their durability, but early people surely used other raw materials, including bone and more perishable substances like wood, reeds, and skin. A few sites, mainly European, have produced wooden artifacts, which date usually between roughly 600,000 and 300,000 years ago: Diffusion of Technology. Wide variability in stone tools present with "H. erectus". In Eastern Asia, H. erectus specimens are associated not with Acheulean tools, but instead with Oldowan tools, which were retained until 200,000 to 300,000 years ago. This pattern was first pointed out by Hallam Movius in 1948. 
The line dividing the Old World into Acheulean and non-Acheulean regions became known as the Movius line. Handax cultures flourished to the west and south of the line, but in the east, only choppers and flake tools were found. Why were there no Acheulean handax cultures in the Eastern provinces of Asia? In sum, while the Acheulean tradition, with its handaxes and cleavers, was an important lithic advance by Homo ergaster over older technologies, it constituted only one of several adaptive patterns used by the species. Clever and behaviorally flexible, H. ergaster was capable of adjusting its material culture to local resources and functional requirements. Subsistence patterns and diet. Early discoveries of Homo ergaster/erectus fossils in association with stone tools and animal bones lent themselves to the interpretation of hunting and gathering way of life. Nevertheless this interpretation is not accepted by all scholars and various models have been offered to make sense of the evidence. First Scenario: Scavenging. Recently, several of the original studies describing "Homo ergaster/erectus" as a hunter-gatherer have come under intense criticism. Re-examination of the material at some of the sites convinced some scholars (L. Binford) that faunal assemblages were primarily the result of animal activity rather than hunting and gathering. Animal bones showed cut marks from stone tools that overlay gnaw marks by carnivores, suggesting that "Homo ergaster/erectus" was not above scavenging parts of a carnivore kill. According to these scholars, at most sites, the evidence for scavenging by hominids is much more convincing than is that for actual hunting. Which scenario to choose? The key point here is not that "Homo ergaster/erectus" were the first hominid hunters, but that they depended on meat for a much larger portion of their diet than had any previous hominid species. Occasional hunting is seen among nonhuman primates and cannot be denied to australopithecines (see "A. garhi"). But apparently for "Homo ergaster/erectus" hunting took an unprecedented importance, and in doing so it must have played a major role in shaping both material culture and society. Shelter and fire. For years, scientists have searched for evidence that "Homo ergaster/erectus" had gained additional control over its environment through the construction of shelters, and the control and use of fire. The evidence is sparse and difficult to interpret. Shelter. Seemingly patterned arrangements or concentrations of large rocks at sites in Europe and Africa may mark the foundations of huts or windbreaks, but in each case the responsible agent could equally well be stream flow, or any other natural process. Therefore there appears to be no convincing evidence that "Homo ergaster/erectus" regularly constructed huts, windbreaks, or any other sort of shelter during the bulk of its long period of existence. Shelter construction apparently developed late in the species' life span, if at all, and therefore cannot be used as an explanation of "H. ergaster's" capacity for geographic expansion. Fire. Proving the evidence of fire by "Homo ergaster/erectus" is almost equally problematic. Some researchers have suggested that the oldest evidence for fire use comes from some Kenyan sites dated about 1.4 to 1.6 million years ago. Other scholars are not sure. The problem is that the baked earth found at these sites could have been produced as easily by natural fires as by fires started - or at least controlled - by "H. ergaster/erectus". 
Better evidence of fire use comes from sites that date near the end of "Homo erectus' " existence as a species. Unfortunately, the identity of the responsible hominids (either Homo erectus or archaic Homo sapiens) is unclear. The evidence at present suggests that fire was not a key to either the geographic spread or the longevity of these early humans. Out-of-Africa 1: Behavioral aspects. Researchers proposed originally that it was not until the advent of handaxes and other symmetrically shaped, standardized stone tools that "H. erectus" could penetrate the northern latitudes. Exactly what, if anything, these implements could accomplish that the simple Oldowan flakes, choppers and scrapers that preceded them could not is unknown, although perhaps they conferred a better means of butchering. But the Dmanisi finds of primitive hominids and Oldowan-like industries raise once again the question of what prompted our ancestors to leave their natal land. Yet, there is one major problem with scenarios involving departure dates earlier than about 1.7-1.4 million years ago, and that is simply that they involve geographic spread before the cultural developments (Acheulean industry, meat eating, fire, shelter) that are supposed to have made it possible. A shift toward meat eating might explain how humans managed to survive outside of Africa, but what prompted them to push into new territories remains unknown at this time. Perhaps they were following herds of animal north. Or maybe it was as simple and familiar as a need to know what lay beyond that hill or river or tall savanna grass. Also an early migration could explain technological differences between western and eastern Homo erectus populations. The link between butchering tools and moves into northern latitudes is the skinning and preparation of hides and furs, for reworking 1. into portable shelters, and 2. into clothing. more skilful skinning meant that skins would be better preserved, while fire would lead to meat preservation (smoking) by the simple need to hang cuts of butchered meat high up out of reach of any scavenging animals within smoke filled caves or other dwellings. Having smoked meat then allowed deeper incursions into otherwise hostile terrain, or a long-term food supply available in harsh winters. With readily available and storable energy resources, and protective clothing, they could push out into harsh northern latitudes with comparative ease. Summary. Overall, the evidence suggests that "Homo ergaster" was the first hominid species to resemble historic hunter-gatherers not only in a fully terrestrial lifestyle, but also in a social organization that featured economic cooperation between males and females and perhaps between semi-permanent male-female units.
2,401
Introduction to Paleoanthropology/Modern Humans/Population Variation. One of the notable characteristics of the human species today is its great variability. Human diversity has long fascinated people, but unfortunately it also has led to discrimination. In this chapter we will attempt to address the following questions: Variation and evolution. Human genetic variation generally is distributed in such a continuous range, with varying clusters of frequency. The significance we give our variations, the way we perceive them (in fact, whether or not we perceive them at all) is determined by our culture. Many behavioral traits are learned or acquired by living in a society; other characteristics, such as blue eyes, are passed on physically by heredity. Environment affects both. The physical characteristics of both populations and individuals are a product of the interaction between genes and environments. For most characteristics, there are within the gene pool of Homo sapiens variant forms of genes, known as alleles. This kind of variability, found in many animal species, signifies a rich potential for new combinations of characteristics in future generations. A species faced with changing environmental conditions has within its gene pool the possibility of producing individuals with traits appropriate to its altered life. Many may not achieve reproductive success, but those whose physical characteristics enable them to do well in the new environment will usually reproduce, so that their genes will become more common in subsequent generations. Thus, humankind has been able to occupy a variety of environments. A major expansion into new environments was under way by the time "Homo erectus" appeared on the scene. Populations of this species were living in Africa, Southeast Asia, Europe and China. The differentiation of animal life is the result of selective pressures that, through the Pleistocene, differed from one region to another. Coupled with differing selective pressures were geographical features that restricted or prevented gene flow between populations of different faunal regions. Genetic variants will be expressed in different frequencies in these geographically dispersed populations. In blood type, H. sapiens shows four distinct groups (A, B, O or AB): The Meaning of Race. Early anthropologists tried to explore the nature of human species by systematically classifying H. sapiens into subspecies or races, based on geographic location and physical features such as skin color, body size, head shape and hair texture. Such classifications were continually challenged by the presence of individuals who did not fit the categories. The fact is, generalized references to human types such as "Asiatic" or "Mongoloid", "European" or "Caucasoid", and "African" or "Negroid" were at best mere statistical abstractions about populations in which certain physical features appeared in higher frequencies than in other populations. These categories turned out to be neither definitive nor particularly helpful. The visible traits were found to occur not in abrupt shifts from population to population, but in a continuum that changed gradually. Also one trait might change gradually over a north-south gradient, while another might show a similar change from east to west. Race as a biological concept. To understand why the racial approach to human variation has been so unproductive, we must first understand the race concept in strictly biological terms. 
In biology, a race is defined as a population of a species that differs in the frequency of different variants of some gene or genes from other populations of the same species. Three important things to note about this definition: The concept of human races. As a device for understanding physical variation in humans, the biological race concept has serious drawbacks: There has been a lot of debate not just about how many human races there may be, but about what "race" is and is not. Often forgotten is the fact that a race, even if it can be defined biologically, is the result of the operation of evolutionary process. Because it is these processes rather than racial categories themselves in which we are really interested, most anthropologists have abandoned the race concept as being of no particular utility. Instead, they prefer to study the distribution and significance of specific, genetically based characteristics, or else the characteristics of small breeding populations that are, after all, the smallest units in which evolutionary change occurs. Physical variables. Not only have attempts to classify people into races proven counterproductive, it has also become apparent that the amount of genetic variation in humans is relatively low, compared to that of other primate species. Nonetheless, human biological variation is a fact of life, and physical anthropologists have learned a great deal about it. Much of it is related to climatic adaptation. A correlation has been noted between body build and climate: Certain body builds are better suited to particular living conditions than others. Anthropologists have also studied such body features as nose, eye shape, hair textures and skin color in relation to climate. Continuing human biological evolution. In the course of their evolution, humans in all parts of the world came to rely on cultural rather than biological adaptation for their survival. Nevertheless, as they spread beyond their tropical homeland into other parts of the world, they did develop considerable physical variation from one population to another. The forces responsible for this include: Although much of this physical variation can still be seen in human populations today, the increasing effectiveness of cultural adaptation has often reduced its importance. Cultural practices today are affecting the human organism in important, often surprising, ways. The probability of alterations in human biological makeup induced by culture raises a number of important questions. By trying to eliminate genetic variants, are we weakening the gene pool by allowing people with hereditary diseases and defects to reproduce? Are we reducing chances for genetic variation by trying to control population size? We are not sure of the answers to these questions.
1,254
Regents Earth Science (High School). This text was written to prepare students for the New York State Regents Earth Science exam. As such, it closely follows the New York State Standards for Mathematics, Science, and Technology. Introductory Concepts. Observation and Inference. Observation means watching something and taking note of what it does. For instance, you might observe a bird flying by watching it closely. To infer is to draw a conclusion based on what one already knows and on that alone. Suppose you see rain on your window: you can infer from that, quite trivially, that the sky is grey. Density. The concept of density is fundamental to understanding many aspects of Earth Science. Density is a derived unit. That is, the density of a substance must be calculated (or derived) from other measurements. Density is calculated by using the mass (grams) and volume (mL or cm3) of a given sample. Mass is determined by using a balance, and the volume of a fluid or solid can be determined by using a graduated cylinder. In this course, the equation for density is shown on p. 1 of the Earth Science Reference Tables as: D = m/v, in which D represents density; m represents mass; and v represents volume. Some of the other sciences and engineering disciplines make use of a slightly different form of the equation for density. Solutions to problems that require the use of the density equation should include the corresponding metric system (or SI) units. Density is considered an intrinsic property. That is, the density of a material at a specific temperature and pressure remains the same regardless of the size of the sample being considered. Density may be useful in helping to identify specific materials such as minerals, or in helping interpret or predict the behaviors of materials as they interact with other materials, or are subjected to changes in temperature or pressure. Percent Error. The concept of "percent error" is also known as "percent deviation from an accepted value", and the second term may be more helpful in understanding what is actually being determined in the equation used to calculate it. All equations that are given in the Earth Science Reference Tables are on page 1. The equation is (|Accepted Value - Measured Value| / Accepted Value) * 100, sometimes written as: deviation (%) = (difference from accepted value / accepted value) x 100. Because one has already multiplied by 100, the equation will yield a percentage value, so the appropriate unit of % should be shown in your calculated value. One could report the value as positive or negative, and this would give information about whether your measurement is "high" or "low", but usually the absolute value of the percent deviation is reported (the difference is converted to a positive number). The concept of percent error or percent deviation relates to a measure used in statistics in which you determine how close a value that is measured (or that is calculated from measurements) is to a value that is given as "known" or "accepted". Thus the term "error" sometimes leads to a sense of panic that something was done wrong if the percent deviation from the accepted value is high. It is possible to have very high percent deviations if the accepted value is a small number, even if your measured value is close to the accepted value. It is also possible that the particular thing one is measuring (say, a mineral sample) may have a density that is slightly different from what is "accepted" because it contains some "impurities" (variations in the elemental composition).
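Both equations above involve only simple arithmetic, so a short worked calculation can make the procedure concrete. The following Python sketch is only an illustration: the sample mass, volume, and the accepted density used for comparison (roughly that of quartz) are invented for the example and are not taken from the Reference Tables.

def density(mass_g, volume_cm3):
    # D = m / v, in grams per cubic centimeter
    return mass_g / volume_cm3

def percent_deviation(accepted, measured):
    # |accepted - measured| / accepted * 100, reported as a positive percentage
    return abs(accepted - measured) / accepted * 100

# Hypothetical measurements for a mineral sample (illustrative values only)
mass_g = 24.0            # read from a balance
volume_cm3 = 9.2         # found by water displacement in a graduated cylinder
accepted_density = 2.65  # assumed "accepted" value, roughly that of quartz

measured_density = density(mass_g, volume_cm3)                     # about 2.61 g/cm3
deviation = percent_deviation(accepted_density, measured_density)  # about 1.6 %

print("Density:", round(measured_density, 2), "g/cm3")
print("Percent deviation:", round(deviation, 1), "%")

Applying the same density function to the multiple-choice question below (200 g and 150 cubic centimeters) gives about 1.3 grams per cubic centimeter, which matches choice (D).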
If one calculates a high value for a percent deviation from an accepted value, just make sure that the calculation has been done correctly, but never change the values measured to lower this percentage, as these are based on actual observations! Sample Questions. Multiple Choice. Which of these represents the density of a metal block whose mass is 200 g and volume 150 cubic centimeters? (A) 3 000 grams per cubic centimeter. (B) 50 grams per cubic centimeter. (C) 350 grams per cubic centimeter. (D) 1.3 grams per cubic centimeter. Answer: (D) 1.3 grams per cubic centimeter. Constructed Response. Earth's Dimensions. Shape of the Earth. The Earth is an oblate spheroid. It bulges at the equator and is flattened at the poles. Evidence of its spherical shape comes from photographs of the Earth from space, seeing the shadow of the Earth during a lunar eclipse, and the fact that ships seem to sink as they move farther out to sea. Also, the altitude of the star Polaris increases with latitude. Evidence for the fact that it is oblate comes from photographs of the Earth from space and variations in gravity along the Earth's surface (stronger at the poles where flattened, weaker at the equator where bulging; also, gravity does not pull directly down). Also, the altitude of Polaris does not vary uniformly with increasing latitude. The best model for the Earth is a globe or a ball: though the Earth is oblate, it is only slightly so. On Regents exams, there is very often a question about the shape of the Earth, and the answer is a ping pong ball because it is smooth and round. Rocks and Minerals. Minerals. Minerals are naturally occurring, inorganic, crystalline solids. Inorganic substances are those not formed by living things, and in the majority of cases, this is true of all minerals, though there are some notable exceptions. Inorganic molecules usually do not contain carbon as a component in their atomic structures, though again, there are some inorganic minerals that are exceptions to this general statement. A substance is said to be crystalline if it has a regular, repeating atomic structure. Each mineral has a specific atomic structure and formula, such as is given on page 16 of the Earth Science Reference Tables. The individual molecule made by these atoms forms the most basic structure of that type of mineral, and then typically combines chemically with others of the same kind so that the mineral becomes larger. Mineral samples may be microscopic or extremely large, but all the crystals of a single type of mineral share common chemical and physical properties because they share the same chemical composition, types of chemical bonds, and positions of the individual atoms within each of the mineral's molecules. This fact, that a single mineral, whether large or small, reacts similarly to chemical and physical tests, is very useful in identification. Minerals are solids. There is actually a group of substances known as crystalline liquids, but those do not fit into the definition needed for this course. Difference Between Rocks and Minerals. Rocks - Any naturally formed aggregate or mass of mineral matter constituting an essential and appreciable part of the Earth's crust (AGI). Mineral - A naturally formed chemical element or compound having a definite range in chemical composition, and usually a characteristic crystal form (AGI). Properties of Minerals. Hardness.
The hardness of a mineral is measured on relative scales that are determined by how readily one mineral scratches the surface of another. The Mohs scale of hardness has ten levels: #1 talc, #2 gypsum, #3 calcite... up to #10 diamond, the hardest natural substance because of the internal arrangement of its carbon atoms and the atomic bond forces in its tetrahedral structure. Streak. The streak of a mineral refers to the color of the powder left behind when the mineral is scratched across a very hard surface, usually an unglazed porcelain plate. Most streak plates are white, though a few may be dark colored. Since a streak plate has a hardness of about 7 on the Mohs scale of hardness, minerals that scratch a streak plate will leave no powder behind (other than that of the streak plate itself). Typically, we group minerals by the colors of the streaks they produce. Oddly, many metallic minerals that appear shiny when viewed in a larger sample leave a dark or colored line on a streak plate. Since very small quantities of an element may cause a mineral to take on different colors, while streak tends to stay more or less consistent, we sometimes say that the color of the powdered mineral, or streak, is a more accurate test for a mineral's identity than using the color alone. Color. The color of a mineral is easily influenced by a number of factors, such as very small quantities of impurities in the mineral's atomic structure. Weathering of a mineral's surface may also influence how the color appears, so color is generally not as reliable a way to identify a specific mineral as is using tests such as hardness, the way a mineral breaks, or even the streak (the color of the powdered mineral). There are also many other optical properties besides color that minerals may possess, some of which may be unique to particular minerals or groups of minerals. Cleavage and Fracture. The way a mineral breaks is controlled mostly by the internal arrangement of its molecules. Though all minerals are considered crystalline and therefore have a repeating pattern of atoms in their structures, sometimes the chemical bonds in these patterns allow a mineral to break along smooth planes. Other times an uneven surface is produced. Since the way a mineral breaks is an expression of the molecular pattern, we can use this visible feature to infer properties about the internal structure of the mineral. Further, how a mineral breaks is a characteristic identifying feature of specific minerals. Breaking minerals may produce surprising results, since some minerals may produce smooth crystal shapes as they form, yet break unevenly. When a mineral breaks along smooth planes it possesses the property called cleavage. Minerals may break along one direction of cleavage or several. When there is more than one direction of cleavage, we are also interested in knowing the angles created between different cleavage directions. A single cleavage direction is inferred to extend throughout the entire sample, so it is important not to confuse parallel sides of the same cleavage plane that occur on different sides of a mineral sample. An example is a mineral such as halite, which breaks with three directions of cleavage, each at a 90 degree angle to the others. This pattern of breaking will produce a mineral sample shape that tends to be cubic, but any two opposite sides of the cubic shape are actually in the same plane, so the six sides of the broken sample result from three directions of cleavage.
It is fun to slowly break a mineral using a tool such as a bench vise and watch the cleavage planes develop as the stress is applied to the sample, though you should wear appropriate safety equipment because little chips of the sample tend to fly off unpredictably! When a mineral breaks unevenly, and does not break along smooth planes, it is said to break with fracture. Sometimes the type of fracture is itself characteristic, such as with the mineral quartz. Quartz has what is known as conchoidal fracture. Its fractured surfaces look somewhat like broken glass (most glass is made from quartz), and the name conchoidal comes from the word "conch" (like the sea shell) because this type of fracture makes a shell-like depression in the mineral as it breaks. Knowing how specific minerals break, and whether they have a certain type of fracture or one or several directions of cleavage, is one of the ways to help identify minerals or predict the behaviors a certain mineral may exhibit if it were to break. Useful information on minerals related to cleavage and fracture is found on page 16 of the Earth Science Reference Tables. Luster. Luster refers to the way that light reflects off the surface of a mineral sample, particularly a freshly broken or unweathered surface of the sample. The two major divisions in the classification we make are whether minerals have metallic or non-metallic luster. On page 16 of the Earth Science Reference Tables, the only common mineral listed that may have either metallic or non-metallic luster is hematite. Like all physical properties of minerals, the luster of a mineral is due to particular aspects of its chemical composition and its atomic bonds. Rocks. Sedimentary. The breakdown (erosion) of rock yields smaller-sized particles (sediments). As sediments are moved from one place to another by a transporting medium (either wind or water) they are deposited in basins (locations where deposition occurs). If a cementing material is introduced to these sediments, then lithification (formation of rock) will result and the outcome will be a sedimentary rock. Common cementing materials include calcium carbonate and silica. Sedimentary rock = erosion of rock + transport of sediments + lithification. The particle size of a sedimentary rock indicates the environment of deposition. Coarse-grained particles indicate a terrestrial (land) to shallow marine environment; the smaller the particle size, the deeper the marine environment of deposition. Common sedimentary rocks are sandstone, limestone, shale and siltstone. Igneous. Igneous rocks are formed when molten rock (magma) cools and solidifies, with or without crystallization, either below the surface as intrusive (plutonic) rocks or on the surface as extrusive (volcanic) rocks. This magma can be derived from either the Earth's mantle or pre-existing rocks made molten by extreme temperature and pressure changes. Over 700 types of igneous rocks have been described, most of them formed beneath the surface of the Earth's crust. The word "igneous" is derived from the Latin ignis, meaning "fire". Magma origination. The Earth's crust is about 35 kilometers thick under the continents, but averages only some 7-10 kilometers beneath the oceans. The continental crust is composed primarily of crystalline basement: stable igneous and metamorphic rocks such as granulite, granite and various other intrusive rocks. Oceanic crust is composed primarily of basalt, gabbro and peridotite.
The crust floats on the asthenospheric mantle, which is slowly convecting; this convection is linked to the motion of the tectonic plates. The mantle, which extends to a depth of nearly 3,000 kilometers, is the source of all magma. Most of the magma which forms igneous rocks is generated within the upper parts of the mantle at temperatures estimated at between 600 and 1600 °C. Melting of rocks depends on temperature, water content and pressure. The mantle is generally at over 1000 to 1200 °C beneath the crust, at depths of between 7 and 70 km. However, most magma is generated at depths of between 20 and 50 km. Melting begins because of upwelling of hot mantle from deeper portions of the earth, nearer the planetary core; because of water driven off subducted oceanic crust at subduction zones (providing water to lower the melting point of the rocks); and because of decompression caused by rifting. Melting of the continental crust occurs rarely because it is usually dry, and composed of minerals and rocks which are resistant to melting, such as pyroxene granulite. However, addition of heat from the mantle or from mantle plumes, subduction-related compression and burial, as well as some rifting, can prompt the continental crust to melt. As magma cools, minerals crystallize from the melt at different temperatures (fractional crystallization). There are relatively few minerals which are important in the formation of igneous rocks. This is because the magma from which the minerals crystallize is rich in only certain elements: silicon, oxygen, aluminum, sodium, potassium, calcium, iron, and magnesium. These are the elements which combine to form the silicate minerals, which account for over ninety percent of all igneous rocks. Bowen's reaction series is important for understanding the idealized sequence of fractional crystallization of a magma. Igneous rocks make up approximately ninety-five percent of the upper part of the Earth's crust, but their great abundance is hidden on the Earth's surface by a relatively thin but widespread layer of sedimentary and metamorphic rocks. Igneous rocks are geologically important because: their minerals and global chemistry give information about the composition of the mantle, from where some igneous rocks are extracted, and the temperature and pressure conditions that allowed this extraction, and/or of other pre-existing rock that melted; their absolute ages can be obtained from various forms of radiometric dating and thus can be compared to adjacent geological strata, allowing a time sequence of events; their features are usually characteristic of a specific tectonic environment, allowing tectonic reconstitution (see plate tectonics); and in some special circumstances they host important mineral deposits (ores): for example, tungsten, tin, and uranium are commonly associated with granites. Morphology and setting. In terms of modes of occurrence, igneous rocks can be either intrusive (plutonic) or extrusive (volcanic). Intrusive igneous rocks. Intrusive igneous rocks are formed from magma that cools and solidifies within the earth. Surrounded by pre-existing rock (called country rock), the magma cools slowly, and as a result these rocks are coarse grained. The mineral grains in such rocks can generally be identified with the naked eye. Intrusive rocks can also be classified according to the shape and size of the intrusive body and its relation to the other formations into which it intrudes. Typical intrusive formations are batholiths, stocks, laccoliths, sills and dikes. The extrusive types usually are called lavas.
The central cores of major mountain ranges consist of intrusive igneous rocks, usually granite. When exposed by erosion, these cores (called batholiths) may occupy huge areas of the surface. Coarse grained intrusive igneous rocks which form at depth within the earth are termed abyssal; intrusive igneous rocks which form near the surface are termed hypabyssal. Extrusive igneous rocks. Extrusive igneous rocks are formed at the Earth's surface as a result of the melting of rocks within the mantle. The melted rock, called magma, rises because its density contrasts with that of the surrounding mantle. Magma extruded onto the surface, either beneath water or air, is called lava. Volcanic eruptions in air are termed sub-aerial, whereas those occurring underneath the ocean are termed submarine. Black smokers and mid ocean ridge basalt are examples of submarine volcanic activity. Magma which erupts from a volcano behaves according to its temperature and composition, which give it a very wide range of viscosity. High temperature magma, which is usually basaltic in composition, behaves in a manner similar to thick oil and, as it cools, treacle. This forms pahoehoe type lava. Intermediate composition magma such as andesite tends to form cinder cones of intermingled ash, tuff and lava, and may have viscosity similar to thick, cold molasses or even rubber when erupted. Felsic magma such as rhyolite is usually erupted at low temperature and is up to 10,000 times as viscous as basalt. These volcanoes rarely form lava flows, and usually erupt explosively. Felsic and intermediate rocks which erupt at the surface often do so violently, with explosions driven by the release of gases such as carbon dioxide trapped in the magma. Such volcanic deposits are called pyroclastic deposits, and include tuff, agglomerate and ignimbrite. Fine volcanic ash is also erupted and forms ash tuff deposits which can often cover vast areas. Because lava cools and crystallizes rapidly, it is fine grained. If the cooling has been so rapid as to prevent the formation of even small crystals, the resulting rock may be a glass (such as the rock obsidian). Because of this fine grained texture it is much more difficult to distinguish between the different types of extrusive igneous rocks than between different types of intrusive igneous rocks. Generally, the mineral constituents of fine grained extrusive igneous rocks can only be determined by examination of thin sections of the rock under a microscope, so only an approximate classification can usually be made in the field. Classification. Igneous rocks are classified according to mode of occurrence, texture, chemical composition, and the geometry of the igneous body. The classification of the many types of different igneous rocks can provide us with important information about the conditions under which they formed. Two important variables used for the classification of igneous rocks are particle size, which largely depends upon the cooling history, and the mineral composition of the rock. Feldspars, quartz, olivines, pyroxenes, amphiboles, and micas are all important minerals in the formation of igneous rocks, and they are basic to the classification of these rocks. All other minerals present are regarded as nonessential (called accessory minerals).
In a simplified classification, igneous rock types are separated on the basis of the type of feldspar present, the presence or absence of quartz, and in rocks with no feldspar or quartz, the type of iron or magnesium minerals present. Igneous rocks which have crystals large enough to be seen by the naked eye are called phaneritic; those with crystals too small to be seen are called aphanitic. Generally speaking, phaneritic implies an intrusive origin; aphanitic an extrusive one. The crystals embedded in fine grained igneous rocks are termed porphyritic. The porphyritic texture develops when some of the crystals grow to considerable size before the main mass of the magma consolidates into the finer grained uniform material. Texture Texture is an important criterion for the naming of volcanic rocks. The texture of volcanic rocks, including the size, shape, orientation, and distribution of grains and the intergrain relationships, will determine whether the rock is termed a tuff, a pyroclastic lava or a simple lava. However, the texture is only a subordinate part of classifying volcanic rocks, as most often there needs to be chemical information gleaned from rocks with extremely fine-grained groundmass or which are airfall tuffs which may be formed from volcanic ash. Textural criteria are less critical in classifying intrusive rocks where the majority of minerals will be visible to the naked eye or at least using a hand lens, magnifying glass or microscope. Plutonic rocks tend also to be less texturally varied and less prone to gaining structural fabrics. Textural terms can be used to differentiate different intrusive phases of large plutons, for instance porphyritic margins to large intrusive bodies, porphyry stocks and subvolcanic apophyses. Mineralogical classification is used most often to classify plutonic rocks and chemical classifications are preferred to classify volcanic rocks, with phenocryst species used as a prefix, eg; "olivine-bearing picrite" or "orthoclase-phyric rhyolite". Chemical classification Igneous rocks can be classified according to chemical or mineralogical parameters: Chemical - Total alkali - silica content (TAS diagram) for volcanic rock classification used when modal or mineralogic data is unavailable: acid igneous rocks containing a high silica content, greater than 63% SiO2 (examples rhyolite and dacite) intermediate igneous rocks containing between 52 - 63% SiO2 (example andesite) basic igneous rocks have low silica 45 - 52% and typically high iron - magnesium content (example basalt) ultrabasic igneous rocks with less than 45% silica. (examples picrite and komatiite) alkalic igneous rocks with 5 - 15% alkali (K2O + Na2O) content (examples phonolite and trachyte) Note: the acid-basic terminology is used more broadly in older geological literature. Chemical classification also extends to differentiating rocks which are chemically similar according to the TAS diagram, for instance; Ultrapotassic; rocks containing molar K2O/Na2O >3 Peralkaline; rocks containing molar K2O + Na2O/ Al2O3 >1 Peraluminous; rocks containing molar K2O + Na2O/ Al2O3 <1 Mineralogical classification For volcanic rocks, mineralogy is important in classifying and naming lavas. The most important criteria is the phenocryst species, followed by the ground mass mineralogy. Often, where the groundmass is aphanitic, chemical classification must be used to properly identify a volcanic rock. 
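As a rough illustration of how such a chemical classification works in practice, the short Python sketch below turns the silica (SiO2) percentage ranges quoted earlier in this section into a simple decision rule. The function name and the sample analyses are invented for the example; a real TAS classification also takes the total alkali content (K2O + Na2O) into account, which is why the TAS diagram is two-dimensional rather than a single silica axis.

def classify_by_silica(sio2_percent):
    # Rough grouping by weight-percent SiO2, following the ranges quoted above
    if sio2_percent > 63:
        return "acid (e.g. rhyolite, dacite)"
    elif sio2_percent >= 52:
        return "intermediate (e.g. andesite)"
    elif sio2_percent >= 45:
        return "basic (e.g. basalt)"
    else:
        return "ultrabasic (e.g. picrite, komatiite)"

# Illustrative analyses only (percent SiO2 by weight)
for sio2 in (72, 58, 49, 41):
    print(sio2, "% SiO2 ->", classify_by_silica(sio2))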
Mineralogical contents - felsic versus mafic. Felsic rock, with a predominance of quartz, alkali feldspar and/or feldspathoids (the felsic minerals); these rocks (e.g., granite) are usually light colored and have low density. Mafic rock, with a predominance of the mafic minerals pyroxenes, olivines and calcic plagioclase; these rocks (e.g., basalt) are usually dark colored and have higher density than felsic rocks. Ultramafic rock, with more than 90% mafic minerals (e.g., dunite). For intrusive, plutonic and usually phaneritic igneous rocks, where all minerals are visible at least under a microscope, the mineralogy is used to classify the rock. This usually occurs on ternary diagrams, where the relative proportions of three minerals are used to classify the rock. The following table is a simple subdivision of igneous rocks according both to their composition and mode of occurrence.
Mode of occurrence: Acid / Intermediate / Basic / Ultrabasic
Intrusive: Granite / Diorite / Gabbro / Peridotite
Extrusive: Rhyolite / Andesite / Basalt / Komatiite
Example of classification. Granite is an igneous, intrusive rock (crystallized at depth), with felsic composition (rich in silica and with more than 10% of felsic minerals) and phaneritic, subeuhedral texture (minerals are visible to the unaided eye and some of them retain their original crystallographic shapes). Granite is the most abundant intrusive rock found in the continents. Etymology. Volcanic rocks are named after Vulcan, the Roman name for the god of fire. Intrusive rocks are also called plutonic rocks, named after Pluto, the Roman god of the underworld. Metamorphic. Metamorphic rock is new rock that forms when existing rocks are changed by heat, pressure, or chemicals. Heat and pressure deep underground bake and squeeze sedimentary and igneous rocks. The minerals within the rocks change, often becoming harder. In this way they form new rocks called metamorphic rocks. After millions of years, the top layers of rock closest to the earth's surface are worn away by weather, shifts in the earth's crust, oceans, and rivers, and metamorphic rock can appear on the surface. Dynamic Earth. Plate Tectonics. Plate tectonics (from the Greek τεκτων, tekton, meaning "builder") is a theory of geology developed to explain the phenomenon of continental drift and is currently the theory accepted by the vast majority of scientists working in this area. In the theory of plate tectonics the outermost part of the Earth's interior is made up of two layers: the lithosphere, comprising (1) the crust, which has an elemental composition of oxygen, 46.6%; silicon, 27.7%; aluminum, 8.1%; and iron, 5.0%; and (2) the solidified uppermost part of the mantle. Below the lithosphere lies the asthenosphere, which comprises the inner viscous part of the mantle. The mantle makes up 84% of the earth's volume and can sometimes get as hot as 3700 °C. Over millions of years the mantle behaves like a superheated and extremely viscous liquid, but in response to sudden forces, such as earthquakes, it behaves like a rigid solid and can 'ring like a bell'. The lithosphere essentially "floats" on the asthenosphere. The lithosphere is broken up into what are called tectonic plates. The ten major plates are: African, Antarctic, Australian, Eurasian, North American, South American, Pacific, Cocos, Nazca, and the Indian plates. These plates (and the more numerous minor plates) move in relation to one another at one of three types of plate boundaries: convergent, divergent, and transform.
Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along plate boundaries (most notably around the Pacific Ring of Fire). Plate tectonic theory arose out of two separate geological observations: continental drift, noticed in the early 20th century, and seafloor spreading, noticed in the 1960s. The theory itself was developed during the late 1960s and has since almost universally been accepted by scientists and has revolutionized the earth sciences (akin in its unifying and explanatory power for diverse geological phenomena as the development of the periodic table was for chemistry, the discovery of the genetic code for biology, and quantum mechanics in physics). The division of the Earth's interior into lithospheric and asthenospheric components is based on their mechanical differences. The lithosphere is cooler and more rigid, whilst the asthenosphere is hotter and mechanically weaker. This division should not be confused with the chemical subdivision of the Earth into (from innermost to outermost) core, mantle, and crust. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which float on the fluid-like (visco-elastic liquid) asthenosphere. The relative fluidity of the asthenosphere allows the tectonic plates to undergo motion in different directions. One plate meets another along a plate boundary, and plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features like mountains, volcanoes and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being most active and famous. These boundaries are discussed in further detail below. Tectonic plates can include continental crust or oceanic crust, and typically, a single plate carries both. For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The part of the tectonic plate which is common in all cases is the uppermost solid layer of the upper mantle which lays beneath both continental and oceanic crust and is considered, together with the crust, lithosphere. The distinction between continental crust and oceanic crust is based on the density of constituent materials; oceanic crust is denser than continental crust owing to their different proportions of various elements, particularly, silicon. Oceanic crust has less silicon and more heavier elements ("mafic") than continental crust ("felsic"). As a result, oceanic crust generally lies below sea level (for example most of the Pacific Plate), while the continental crust projects above sea level. There are three types of plate boundaries, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are: 1. Transform boundaries occur where plates slide, or perhaps more accurately grind, past each other along transform faults. The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). 2. Divergent boundaries occur where two plates slide apart from each other. 3. Convergent boundaries (or active margins) occur where two plates slide towards each other commonly forming either a subduction zone (if one plate moves underneath the other) or an orogenic belt (if the two simply collide and compress). 
Plate boundary zones occur in more complex situations where three or more plates meet and exhibit a mixture of the above three boundary types. Transform (conservative) boundaries The left- or right-lateral motion of one plate against another along transform faults can cause highly visible surface effects. Because of friction, the plates cannot simply glide past each other. Rather, stress builds up in both plates and when it reaches a level that exceeds the slipping-point of rocks on either side of the transform-faults the accumulated potential energy is released as strain, or motion along the fault. The massive amounts of energy that are released are the cause of earthquakes, a common phenomenon along transform boundaries. A good example of this type of plate boundary is the San Andreas Fault complex, which is found in the western coast of North America and is one part of a highly complex system of faults in this area. At this location, the Pacific and North American plates move relative to each other such that the Pacific plate is moving northwest with respect to North America. In 50 million years or so, the part of California that is west of the San Andreas Fault will be a separate island near the Alaska area. It should be noted that the actual direction of movement of the plates which abut at a transform like the San Andreas Fault is often not the same as their relative motion at the fault. For instance, the North American Plate as measured by GPS is actually moving southwestward, nearly perpendicular to the Pacific Plate while the Pacific Plate is actually moving slightly more westward than its relative northwest motion along the San Andreas Fault. [1] The resultant compressive forces are taken up by thrust faulting in the larger fault zone, producing California's Coast Range. The conspicuous bend in these ranges (the Transverse Ranges) and the San Andreas Fault itself in Southern California is a possible result of crustal spreading in the Great Basin region superimposed on the overall movement of the North American Plate. There is speculation by some geologists that rifting may be developing in the Great Basin as the crust here is measurably thinning. Divergent (constructive) boundaries At divergent boundaries, two plates move apart from each other and the space that this creates is filled with new crustal material sourced from molten magma that forms below. The origin of new divergent boundaries at triple junctions is sometimes thought to be associated with the phenomenon known as hotspots. Here, exceedingly large convective cells bring very large quantities of hot asthenospheric material near the surface and the kinetic energy is thought to be sufficient to break apart the lithosphere. The hot spot which may have initiated the Mid-Atlantic Ridge system currently underlies Iceland which is widening at a rate of a few centimeters per century. Divergent boundaries are typified in the oceanic lithosphere by the rifts of the oceanic ridge system, including the Mid-Atlantic Ridge and the East Pacific Rise, and in the continental lithosphere by rift valleys such as the famous East African Great Rift Valley. Divergent boundaries can create massive fault zones in the oceanic ridge system. Spreading is generally not uniform, so where spreading rates of adjacent ridge blocks are different massive transform faults occur. These are the fracture zones, many bearing names, that are a major source of submarine earthquakes. 
A sea floor map will show a rather strange pattern of blocky structures that are separated by linear features perpendicular to the ridge axis. If one views the sea floor between the fracture zones as conveyor belts carrying the ridge on each side of the rift away from the spreading center the action becomes clear. Crest depths of the old ridges, parallel to the current spreading center, will be older and deeper (due to thermal contraction and subsidence). It is at mid-ocean ridges that one of the key pieces of evidence forcing acceptance of the sea-floor spreading hypothesis was found. Airborne geomagnetic surveys showed a strange pattern of symmetrical magnetic reversals on opposite sides of ridge centers. The pattern was far too regular to be coincidental as the widths of the opposing bands were too closely matched. Scientists had been studying polar reversals and the link was made. The magnetic banding directly corresponds with the Earth's polar reversals. This was confirmed by measuring the ages of the rocks within each band. The banding furnishes a map in time and space of both spreading rate and polar reversals. There is at least one plate that has no creative ridge associated with it: the Caribbean Plate. The Caribbean Plate is generally believed to have originated at a now extinct ridge in the Pacific Ocean, yet it remains in motion according to GPS measurements. The complex tectonics of this region are the subject of ongoing research. Convergent (destructive) boundaries The nature of a convergent boundary depends on the type of lithosphere in the plates that are colliding. Where a dense oceanic plate collides with a less-dense continental plate, the oceanic plate is typically thrust underneath, forming a subduction zone. At the surface, the topographic expression is commonly an oceanic trench on the ocean side and a mountain range on the continental side. An example of a continental-oceanic subduction zone is the area along the western coast of South America where the oceanic Nazca Plate is being subducted beneath the continental South American Plate. As the subducting plate descends, its temperature rises driving off volatiles (most importantly water). As this water rises into the mantle of the overriding plate, it lowers its melting temperature, resulting in the formation of magma with large amounts of dissolved gases. This can erupt to the surface, forming long chains of volcanoes inland from the continental shelf and parallel to it. The continental spine of South America is dense with this type of volcano. In North America the Cascade mountain range, extending north from California's Sierra Nevada, is also of this type. Such volcanoes are characterized by alternating periods of quiet and episodic eruptions that start with explosive gas expulsion with fine particles of glassy volcanic ash and spongy cinders, followed by a rebuilding phase with hot magma. The entire Pacific ocean boundary is surrounded by long stretches of volcanoes and is known collectively as The Ring of Fire. Where two continental plates collide the plates either crumple and compress or one plate burrows under or (potentially) overrides the other. Either action will create extensive mountain ranges. The most dramatic effect seen is where the northern margin of the Indian Plate is being thrust under a portion of the Eurasian plate, lifting it and creating the Himalayas and the Tibetan Plateau beyond. 
It has also caused parts of the Asian continent to deform westward and eastward on either side of the collision. When two plates with oceanic crust converge they typically create an island arc as one plate is subducted below the other. The arc is formed from volcanics which erupt through the overriding plate as the descending plate melts below it. The arc shape occurs because of the spherical surface of the earth (nick the peel of an orange with a knife and note the arc formed by the straight edge of the knife). A deep undersea trench is located in front of such arcs, where the descending slab dips downward. Good examples of this type of plate convergence are Japan and the Aleutian Islands in Alaska. As noted above, the plates are able to move because of the relative weakness of the asthenosphere. Dissipation of heat from the mantle is acknowledged to be the source of energy driving plate tectonics. Three-dimensional imaging of the Earth's interior (seismic tomography) indicates that convection of some sort is occurring throughout the mantle (Tanimoto 2000). How this convection relates to the motion of the plates is a matter of ongoing study and discussion. Somehow, this energy must be translated to the lithosphere in order for tectonic plates to move. There are essentially two forces that could be accomplishing this: friction and gravity. Friction. Mantle drag: convection currents in the mantle are transmitted through the asthenosphere; motion is driven by friction between the asthenosphere and the lithosphere. Trench suction: local convection currents exert a downward frictional pull on plates in subduction zones at ocean trenches. Gravity. Ridge-push: plate motion is driven by the higher elevation of plates at mid-ocean ridges. Essentially, material slides downhill. The higher elevation is caused by the relatively low density of hot material upwelling in the mantle. The real motion-producing force is the upwelling and the energy source that runs it. "Ridge-push" is something of a misnomer, as nothing is actually pushing and tensional features are dominant along ridges; it is also difficult to explain continental break-up with this mechanism alone. Slab-pull: plate motion is driven by the weight of cold, dense plates sinking into the mantle at trenches. There is considerable evidence that convection is occurring in the mantle at some scale. The upwelling of material at mid-ocean ridges is almost certainly part of this convection. Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere is not strong enough to directly cause motion by friction. Slab pull is widely believed to be the strongest force directly operating on plates. Recent models indicate that trench suction plays an important role as well. However, it should be noted that the North American Plate, for instance, is nowhere being subducted, yet it is in motion; likewise the African, Eurasian and Antarctic Plates. The overall driving force for plate motion and its energy source remain subjects of ongoing research. Lunar drag. In a study published in the January-February 2006 issue of the Geological Society of America's journal Bulletin, a team of Italian and U.S. scientists argues that a component of the westward motion of the world's tectonic plates is due to the tidal attraction of the moon. As the Earth spins eastward beneath the moon, they say, the moon's gravity ever so slightly pulls the Earth's surface layer back westward.
It may also explain why Venus and Mars have no plate tectonics since Venus has no moon, and Mars' moons are too small to have significant tidal effects on Mars. [2] This is not a new argument, however. It was originally raised by the "father" of the plate tectonic hypothesis, Alfred Wegener. It was challenged by the physicist Harold Jeffreys who calculated that the magnitude of tidal friction required would have quickly brought the earth's rotation to a halt long ago. One might also note that many plates are actually moving north and eastward, not west. Plate motion is measured directly with the Global positioning satellite system (GPS). Continental drift For more details on this topic, see Continental drift. Continental drift was one of many ideas about tectonics proposed in the late 19th and early 20th centuries. The theory has been superseded by and the concepts and data have been incorporated within plate tectonics. By 1915 Alfred Wegener was making serious arguments for the idea with the first edition of The Origin of Continents and Oceans. In that book he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. Wegener wasn't the first to note this (Francis Bacon, Benjamin Franklin and Snider-Pellegrini preceded him), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation. However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. Specifically they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. In 1947, a team of scientists led by Maurice Ewing utilizing the Woods Hole Oceanographic Institution’s research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not granite which was common on the continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions. [3] Beginning in the 1950s, scientists, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising because it was known that basalt -- the iron-rich, volcanic rock making up the ocean floor-- contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. When newly formed rock cools, such magnetic materials recorded the Earth's magnetic field at the time. As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern. Alternating stripes of magnetically different rock were laid out in rows on either side of the mid-ocean ridge: one stripe with normal polarity and the adjoining stripe with reversed polarity. 
The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping. When the rock strata of the tips of separate continents are very similar it suggests that these rocks were formed in the same way implying that they were joined initially. For instance, some parts of Scotland contain rocks very similar to those found in eastern North America. Furthermore, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology. Floating continents The prevailing concept was that there were static shells of strata under the continents. It was early observed that although granite existed on continents, seafloor seemed to be composed of denser basalt. It was apparent that a layer of basalt underlies continental rocks. However, based upon abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer deduced that less-dense mountains must have a downward projection into the denser layer underneath. The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later during study of Himalayan gravitation, and seismic studies detected corresponding density variations. By the mid-1950s the question remained unresolved of whether mountain roots were clenched in surrounding basalt or were floating like an iceberg. Plate tectonic theory Significant progress was made in the 1960s, and was prompted by a number of discoveries, most notably the Mid-Atlantic ridge. The most notable was the 1962 publication of a paper by American geologist Harry Hess (Robert S. Dietz published the same idea one year earlier in Nature. However, priority belongs to Hess, since he distributed an unpublished manuscript of his 1962 article already in 1960). Hess suggested that instead of continents moving through oceanic crust (as was suggested by continental drift) that an ocean basin and its adjoining continent moved together on the same crustal unit, or plate. In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. His paper, though little-noted (and even ridiculed) at the time, has since been called "seminal" and "prescient". In 1967, Jason Morgan proposed that the Earth's surface consists of 12 rigid plates that move relative to each other. Two months later, in 1968, Xavier Le Pichon published a complete model based on 6 major plates with their relative motions. Explanation of magnetic striping Seafloor magnetic striping.The discovery of magnetic striping and the stripes being symmetrical around the crests of the mid-ocean ridges suggested a relationship. In 1961, scientists began to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest. New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. This process, later called seafloor spreading, operating over many millions of years has built the 50,000 km-long system of mid-ocean ridges. This hypothesis was supported by several lines of evidence: 1. at or near the crest of the ridge, the rocks are very young, and they become progressively older away from the ridge crest; 2. the youngest rocks at the ridge crest always have present-day (normal) polarity; 3. 
stripes of rock parallel to the ridge crest alternated in magnetic polarity (normal-reversed-normal, etc.), suggesting that the Earth's magnetic field has flip-flopped many times. By explaining both the zebra like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis quickly gained converts and represented another major advance in the development of the plate-tectonics theory. Furthermore, the oceanic crust now came to be appreciated as a natural "tape recording" of the history of the reversals in the Earth's magnetic field. Subduction discovered A profound consequence of sea floor spreading is that new crust was, and is now, being continually created along the oceanic ridges. This idea found great favor with some scientists who claimed that the shifting of the continents can be simply explained by a large increase in size of the Earth since its formation. However, this so-called "Expanded earth theory" hypothesis was unsatisfactory because its supporters could offer no convincing geologic mechanism to produce such a huge, sudden expansion. Most geologists believe that the Earth has changed little, if at all, in size since its formation 4.6 billion years ago, raising a key question: how can new crust be continuously added along the oceanic ridges without increasing the size of the Earth? This question particularly intrigued Harry Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the U.S. Coast and Geodetic Survey who first coined the term seafloor spreading. Dietz and Hess were among the small handful who really understood the broad implications of sea floor spreading. If the Earth's crust was expanding along the oceanic ridges, Hess reasoned, it must be shrinking elsewhere. He suggested that new oceanic crust continuously spread away from the ridges in a conveyor belt-like motion. Many millions of years later, the oceanic crust eventually descends into the oceanic trenches -- very deep, narrow canyons along the rim of the Pacific Ocean basin. According to Hess, the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust was consumed in the trenches, new magma rose and erupted along the spreading ridges to form new crust. In effect, the ocean basins were perpetually being "recycled," with the creation of new crust and the destruction of old oceanic lithosphere occurring simultaneously. Thus, Hess' ideas neatly explained why the Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks. Mapping with earthquakes During the 20th century, improvements in seismic instrumentation and greater use of earthquake-recording instruments (seismographs) worldwide enabled scientists to learn that earthquakes tend to be concentrated in certain areas, most notably along the oceanic trenches and spreading ridges. By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40-60° from the horizontal and extended several hundred kilometers into the Earth. These zones later became known as Wadati-Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. 
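The dip angles quoted above translate directly into earthquake depth: for an idealized planar slab, the depth of the Wadati-Benioff zone at a given horizontal distance from the trench is roughly that distance times the tangent of the dip. A minimal sketch of this geometry, assuming a planar slab and using an illustrative 45° dip and made-up distances (not measurements from any real subduction zone):

import math

# Depth of an idealized planar Wadati-Benioff zone at a horizontal distance from the trench.
# The 45-degree dip and the distances below are illustrative assumptions, not measurements.
def slab_depth_km(horizontal_distance_km, dip_degrees):
    return horizontal_distance_km * math.tan(math.radians(dip_degrees))

for x_km in (100, 200, 300):
    depth = slab_depth_km(x_km, 45)
    print(f"{x_km} km from the trench: earthquakes near {depth:.0f} km depth")

With dips of 40-60°, earthquakes several hundred kilometers deep are expected only a few hundred kilometers landward of the trench, which matches the inclined zones the early seismologists mapped.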
The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN) to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much-improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide. Geological paradigm shift The acceptance of the theories of continental drift and sea floor spreading (the two key elements of plate tectonics) can be compared to the Copernican revolution in astronomy (see Nicolaus Copernicus). Within a matter of only several years geophysics and geology in particular were revolutionized. The parallel is striking: just as pre-Copernican astronomy was highly descriptive but still unable to provide explanations for the motions of celestial objects, pre-tectonic plate geological theories described what was observed but struggled to provide any fundamental mechanisms. The problem lay in the question "How?". Before acceptance of plate tectonics, geology in particular was trapped in a "pre-Copernican" box. However, by comparison to astronomy the geological revolution was much more sudden. What had been rejected for decades by any respectable scientific journal was eagerly accepted within a few short years in the 1960s and 1970s. Any geological description before this had been highly descriptive. All the rocks were described and assorted reasons, sometimes in excruciating detail, were given for why they were where they are. The descriptions are still valid. The reasons, however, today sound much like pre-Copernican astronomy. One simply has to read the pre-plate descriptions of why the Alps or Himalaya exist to see the difference. In an attempt to answer "how" questions like "How can rocks that are clearly marine in origin exist thousands of meters above sea-level in the Dolomites?", or "How did the convex and concave margins of the Alpine chain form?", any true insight was hidden by complexity that boiled down to technical jargon without much fundamental insight as to the underlying mechanics. With plate tectonics answers quickly fell into place or a path to the answer became clear. Collisions of converging plates had the force to lift sea floor into thin atmospheres. The cause of marine trenches oddly placed just off island arcs or continents and their associated volcanoes became clear when the processes of subduction at converging plates were understood. Mysteries were no longer mysteries. Forests of complex and obtuse answers were swept away. Why were there striking parallels in the geology of parts of Africa and South America? Why did Africa and South America look strangely like two pieces that should fit to anyone having done a jigsaw puzzle? Look at some pre-tectonics explanations for complexity. For simplicity and one that explained a great deal more look at plate tectonics. A great rift, similar to the Great Rift Valley in north eastern Africa, had split apart a single continent, eventually forming the Atlantic Ocean, and the forces were still at work in the Mid-Atlantic Ridge. We have inherited some of the old terminology, but the underlying concept is as radical and simple as "The Earth moves" was in astronomy. Meteorology. Measuring Weather - Weather Variables. Temperature. A measurement of energy in a substance's molecules. More active molecules bump into each other and friction occurs and is therefore warm or hot. 
A cooler substance is less active and its molecules don't rub together as much, so less heat is given off. Temperature is measured in many units, most commonly degrees Celsius, degrees Fahrenheit, or kelvins, and is measured using a thermometer. Dewpoint. The temperature at which the water vapor (H2O molecules) in the air will condense. Relative Humidity. The amount of water vapor actually in the air compared to the maximum amount the air could hold at that temperature, expressed as a percentage. A hygrometer or a sling psychrometer can be used to measure relative humidity. Air Pressure. Air is made of molecules. Most of the air is made of nitrogen: N2 (78%). There is also some oxygen: O2 (21%) as well as small amounts of carbon dioxide: CO2 and other gases. Air molecules are moving around very quickly. When air molecules hit a surface, they exert a force known as pressure. Imagine that you are at the bottom of an ocean of air: the air pressure that you measure is caused by the weight of all of this air. Pressure is measured using a barometer and labeled in millibars (mb). Wind Speed. The speed of the moving air, including gusts. Commonly measured in mph (miles per hour) or km/h (kilometers per hour). The tool used to measure wind speed is called an anemometer. Wind Direction. The direction on a wind rose from which the wind is blowing. The tool used to measure wind direction is a weather vane. Extreme Weather. Thunderstorms. Probably the most common of storms, these form when large-scale convection occurs, drawing warm, moist air up into colder parts of the troposphere. This can happen along fronts, because of local topography such as mountains, and because of the rising of a warm air parcel over a piece of warm land. The moisture is converted into deep cumulus and cumulonimbus clouds, the warm air of which becomes the updraft. As the air rises high enough, perhaps even to the tropopause, it cools and begins to sink, forming a downdraft. This huge mixing of air throws both water droplets, which nucleate onto small particles such as dust or pollen, and electrical charges all about the cloud. Remember that water molecules are polar, which means they have separated charges; this means they carry electrical charge easily. The winds separate ice droplets and water droplets, the latter of which tend to carry a positive charge. Negative charges in the air and along the ground align with the cloud and travel with it. When they meet, as at a high point on a tree or cliff, an electrical discharge occurs, and lightning is born. The super-hot zap of energy quickly heats the immediate air surrounding it, creating a loud shock wave known as thunder. The water droplets often coalesce, forming bigger and bigger drops until they are heavy enough to fall as rain. If droplets are carried up above the freezing level repeatedly, adding a new layer of ice each time until they too are heavy enough to fall, hail results. Sample Questions. Multiple Choice. A student using a sling psychrometer measured a wet-bulb temperature of 10°C and a dry-bulb temperature of 16°C. What was the dewpoint? (1) 10°C (2) 45°C (3) 6°C (4) 4°C Climate and Water Cycle. Hydrologic Cycle: precipitation, evaporation, infiltration, runoff. Earth's History. Sample Questions. Question # 1: What is the Earth's shape "most" likely related to?
The answer would be c) a ping pong ball. Question # 2: In a sequence of rock layers that has not been folded, which layer is the oldest? The answer would be c) the bottom layer. Constructed Response. 3. How can we find the number of half-lives that have passed? Astronomy. Astronomy is the study of all matter and energy systems that exist outside, or "above", the region of the Earth's atmosphere. It also includes the histories and possible future behaviors of this matter and these energy systems. The Earth is derived from the matter and energy that surrounds it, and continues to be influenced by events in space. Introduction to the Solar System and Planets. The solar system consists of the sun and anything that orbits it. It has planets, which are spherical objects that orbit the sun and rotate on an axis. The planets are divided into terrestrial planets, which include Mercury, Venus, Earth, and Mars and are made of solids, and Jovian planets, which include Jupiter, Saturn, Uranus, and Neptune and are made mostly of gases. Many planets have moons revolving around them. Asteroids are mostly non-spherical objects that orbit the sun. They are found mostly in a belt between Mars and Jupiter. Comets are objects made of ice, dust, and gas. They orbit the sun in very elliptical paths. When they are close to the sun, they partially vaporize, which causes them to have a 'tail'. Meteoroids are small, rock-like objects orbiting the sun. When they enter Earth's atmosphere, they burn and are called meteors or shooting stars. If they reach the ground, they are called meteorites and can leave impact craters. Patterns in Celestial Movements. The Sun's Daily Path. The sun appears to move from east to west. During the winter, it rises south of east, sets south of west, and its apparent path is lower in the sky. During the summer, it rises north of east, sets north of west, and its apparent path is higher in the sky. Earth's Revolution. Earth's orbit causes the seasons and takes approximately 365.25 days to complete. The orbit is maintained by the balance of gravity, the attractive force between two objects, and inertia. The shape of the orbit is slightly elliptical, with the sun at one of the foci. This causes the Earth to be closest to the sun in January and farthest from the sun in July, with the apparent diameter of the sun being larger in January, when Earth is closer. It does "not" cause the seasons. Sample Questions. Constructed Response. The answer would be: a ping pong ball. Study Tips for the Regents Earth Science Exam. Understanding Relationships Between Variables. In science, we use either sentences or graphs to show how one thing affects another. For example, we can say that the more homework you do, the higher your grade will be. Another way to say this is... Sentences that describe relationships often follow this format... Using the directions above, create sentences that describe the relationships between the following... These relationships can also be represented as graphs (see the short sketch at the end of this section). Earth science is a lot of common-sense thinking; much of it can be reasoned out even without being taught. For example, when deciding which rock layer came first and there is an igneous intrusion, the intrusion cannot have come first, because it would have had nothing to intrude; the rock layers had to be laid down first.
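Returning to the study tip about relationships between variables, here is a minimal Python sketch, with invented numbers used only for illustration, of the two basic patterns: a direct relationship, where one quantity increases as the other increases, and an indirect (inverse) relationship, where it decreases:

# Direct vs. indirect (inverse) relationships, using made-up values for illustration.
hours_of_homework = [0, 1, 2, 3, 4]

# Direct: the grade rises as homework rises (illustrative formula, not real data).
grades = [60 + 8 * h for h in hours_of_homework]

# Indirect: free time falls as homework rises (illustrative formula, not real data).
free_time_hours = [8 - h for h in hours_of_homework]

for h, g, f in zip(hours_of_homework, grades, free_time_hours):
    print(f"homework: {h} h -> grade: {g} (direct), free time: {f} h (indirect)")

Plotted as graphs, the first list slopes upward and the second slopes downward, which is exactly what the sentence form "as X increases, Y increases (or decreases)" describes.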
14,765
Introduction to Paleoanthropology/Origin of Language. Recognition of symbol use in the archaeological record (following Philip Chase's criteria): Important effort to clarify definition and category of data we are dealing with. Yet if we follow these criteria, most artefacts from Lower (Acheulean) and Middle Paleolithic (60,000-50,000 years ago) are ruled out, because of lack of evidence for repeated patterning and intentionality. Contribution of evolutionary psychology to origins of art. Is intelligence a single, general-purpose domain or a set of domains? Evolutionary psychologists answer: set of domains, which they call "mental modules", "multiple intelligences", "cognitive domains"; these "mental modules" interact, are connected; Anatomically modern humans have better interaction between modules than other animals; therefore, able to perform more complex behaviors; Four cognitive and physical processes exist: The first three are found in non-human primates and most hominids. Yet, only modern humans seem to have developed the fourth one. For Neanderthals, intentional communication and classification were probably sealed in social intelligence module, while mark-making and attribution of meaning (both implicating material objects) were hidden. Only with arrival of modern humans, connection between modules made art possible by allowing intentional communication to escape into the domain of mark-making. Problems with data and chronology. We could easily look at this transition in a smooth way: The passage from one industry to the next, one hominid to the next, etc. Evolutionary paths well structured and detailed, as in textbooks, but a bit too clear-cut, that is simplistic and reductionist. After 1.8 million years ago, when "H. ergaster/erectus" moved out-of-Africa, the picture of human evolution becomes much more complex. Situation due to several reasons: Overall, presence of differentiated cultural provinces in Africa and Eurasia which have their own evolutionary pace. Dates don't seem to reveal a clear-cut divide between the Lower and Middle Paleolithic and don't fit anymore in a specific and rigorous time frame: By focusing on a transition happening only at 50,000 yrs ago would be overlooking some major human innovations and evolutionary trends that took place earlier and on a much longer period. We need to focus more on "H. heidelbergensis" and its material culture and other behavioral patterns to realize that the transition was not at 50,000 years ago, but between 600,000 and 60,000 yrs ago. The revolution that wasn't. "Revolution" is in this context the Upper Paleolithic Revolution, with the development from 50,000 yrs ago of "Homo sapiens sapiens", considered the only species anatomically AND behaviorally modern. By "modern human behavior," we mean: By overlooking and even not considering recent discoveries from the 1990s regarding the periods before 50,000 years ago, we are misled to consider the evidence after that date as the result of biological and cultural revolution. Recent observations in Africa, Europe and Asia from sites between 600,000 and 250,000 years ago (Acheulean period) seem to document very different patterns: "The Revolution That Wasn't". Origins of language. Sometime during the last several million years, hominids evolved the ability to communicate much more complex and detailed information (about nature, technology, and social relationships) than any other creatures. 
Yet we cannot reconstruct the evolutionary history of language as we reconstruct the history of bipedalism, because the ability to use language leaves no clear traces in the fossil record. Therefore, there is no consensus among paleoanthropologists about when language evolved. But from new information learned from DNA testing by researchers at the Institute for Evolutionary Anthropology in Germany, the otherwise very stable gene FOXP2 (a gene that plays a key role in speech) appears to have changed approximately 250,000 years ago, when 2 of the molecular units in its 715-unit sequence abruptly changed. Neanderthals did not have this modification in their gene sequence, whereas "Homo sapiens sapiens" did have this modification and were much more articulate. We are going to try to clarify the current situation by reviewing the recent evidence on the topic, focusing on specific criteria that could reveal essential information on early forms of language: The intellectual and linguistic skills of early hominids. Australopithecines. Reconstruction work on australopithecines indicates that their vocal tract was basically like that of apes, with the larynx and pharynx high up in the throat. This would not have allowed for the precise manipulation of air that is required for modern human languages. The early hominids could make sounds, but they would have been more like those of chimpanzees. H. ergaster/erectus. Brain capacity. Their average cranial capacity was just a little short of the modern human minimum, and some individual erectus remains fall within the modern human range. It is difficult to be certain what this fact means in terms of intelligence. Brain asymmetry. Paleoanthropologist Ralph Holloway has looked at the structure of "H. erectus" brains. He made endocasts of the inside surfaces of fossil crania, because the inside of the skull reflects some of the features of the brain it once held. One intriguing find is that the brains of "H. erectus" were asymmetrical: the right and left halves of the brain did not have the same shape. This is found to a greater extent in modern humans, because the two halves of the human brain perform different functions. Language and the ability to use symbols, for example, are functions of the left hemisphere, while spatial reasoning (like the hand-eye coordination needed to make complex tools) is performed by the right hemisphere. This hints that "H. erectus" also had hemisphere specialization, perhaps even including the ability to communicate through a symbolic language. Vocal apparatus. Further evidence of language use by "H. erectus" is suggested by the reconstruction of the vocal apparatus based on the anatomy of the cranial base. Even though the vocal apparatus is made up of soft parts, those parts are connected to bone, so the shape of the bone is correlated with the shape of the larynx, pharynx and other features. "H. erectus" had vocal tracts more like those of modern humans, positioned lower in the throat and allowing for a greater range and speed of sound production. Thus, erectus could have produced vocal communication that involved many sounds with precise differences. Whether or not they did so is another question. But given their ability to manufacture fairly complex tools and to survive in different and changing environmental circumstances, "H. ergaster/erectus" certainly could have had complex things to "talk about". Therefore it is not out of the question that erectus had a communication system that was itself complex, even though some scholars are against this idea.
Summary. Scientists struggle with the definition of human behavior, while dealing with evidence dating to the early part of the Lower Paleolithic (7-2 million years ago). Definition of modern human behavior is not easier to draw. The answer to this topic should not be found only in the period starting at around 50,000 yrs ago. Evidence now shows that the period between 500,000 and 250,000 years ago was rich in attempts at elaborating new behavioral patterns, either material or more symbolic. On another level, beginning about 1.6 million years ago, brain size began to increase over and beyond that which can be explained by an increase in body size. Some researchers point to evidence that suggests that from 1.6 million years to about 300,000 years ago, the brain not only dramatically increased in size but also was being neurally reorganized in a way that increased its ability to process information in abstract (symbolic) way. This symbolism allowed complex information to be stored, relationships to be derived, and information to be efficiently retrieved and communicated to others in various ways. "H. heidelbergensis" seems to be the author of these new behavioral patterns, not "H. erectus". "H. heidelbergensis", especially in Africa, shows therefore evidence of new stone tool technology (blades), grinding stone and pigment (ochre) processing before 200,000 years ago. These new patterns connected with "H. heidelbergensis" could therefore be seen as critical advantages over "H. erectus" in the human evolutionary lineage.
2,006
Introduction to Paleoanthropology/Hominids MiddlePaleolithic. The second phase of human migration. The time period between 250,000 and 50,000 years ago is commonly called the Middle Paleolithic. At the same time that Neanderthals occupied Europe and Western Asia, other kinds of people lived in the Far East and Africa, and those in Africa were significantly more modern than the Neanderthals. These Africans are thus more plausible ancestors for living humans, and it appears increasingly likely that Neanderthals were an evolutionary dead end, contributing few if any genes to historic populations. Topics to be covered in this chapter: Neanderthals. History of Research. In 1856, a strange skeleton was discovered in Feldhofer Cave in the Neander Valley (in German "Neandertal" but previously it was called "Neanderthal", "thal" = valley) near Dusseldorf, Germany. The skull cap was as large as that of a present-day human but very different in shape. Initially this skeleton was interpreted as that of a congenital idiot. The Forbes Quarry (Gibraltar) female cranium (now also considered as Neanderthal) was discovered in 1848, eight years before the Feldhofer find, but its distinctive features were not recognized at that time. Subsequently, numerous Neanderthal remains were found in Belgium, Croatia, France, Spain, Italy, Israel and Central Asia. Anthropologists have been debating for 150 years whether Neanderthals were a distinct species or an ancestor of Homo sapiens sapiens. In 1997, DNA analysis from the Feldhofer Cave specimen showed decisively that Neanderthals were a distinct lineage. These data imply that Neanderthals and "Homo sapiens sapiens" were separate lineages with a common ancestor, Homo heidelbergensis, about 600,000 years ago. Anatomy. Unlike earlier hominids (with some rare exceptions), Neanderthals are represented by many complete or nearly complete skeletons. Neanderthals provide the best hominid fossil record of the Plio-Pleistocene, with about 500 individuals. About half the skeletons were children. Typical cranial and dental features are present in the young individuals, indicating Neanderthal features were inherited, not acquired. Morphologically the Neanderthals are a remarkably coherent group. Therefore they are easier to characterize than most earlier human types. Neanderthal skull has a low forehead, prominent brow ridges and occipital bones. It is long and low, but relatively thin walled. The back of the skull has a characteristic rounded bulge, and does not come to a point at the back. Cranial capacity is relatively large, ranging from 1,245 to 1,740 cc and averaging about 1,520 cc. It overlaps or even exceeds average for "Homo sapiens sapiens". The robust face with a broad nasal region projects out from the braincase. By contrast, the face of modern Homo sapiens sapiens is tucked under the brain box, the forehead is high, the occipital region rounded, and the chin prominent. Neanderthals have small back teeth (molars), but incisors are relatively large and show very heavy wear. Neanderthal short legs and arms are characteristic of a body type that conserves heat. They were strong, rugged and built for cold weather. Large elbow, hip, knee joints, and robust bones suggest great muscularity. Pelvis had longer and thinner pubic bone than modern humans. All adult skeletons exhibit some kind of disease or injury. Healed fractures and severe arthritis show that they had a hard life, and individuals rarely lived past 40 years old. Chronology. 
Neanderthals lived from about 250,000 to 30,000 years ago in Eurasia. The earlier ones, like those at Atapuerca (Sima de los Huesos), were more generalized. The later ones are the more specialized, "classic" Neanderthals. The last Neanderthals lived in Southwest France, Portugal, Spain, Croatia, and the Caucasus as recently as 27,000 years ago. Geography. The distribution of Neanderthals extended from Uzbekistan in the east to the Iberian peninsula in the west, and from the margins of the Ice Age glaciers in the north to the shores of the Mediterranean Sea in the south. South-West France (the Dordogne region) is among the areas richest in Neanderthal cave shelters: Other sites include: No Neanderthal remains have been discovered in Africa or East Asia. Homo sapiens. Chronology and Geography. The time and place of Homo sapiens' origin have preoccupied anthropologists for more than a century. For the longest time, many assumed their origin was in South-West Asia. But in 1987, anthropologist Rebecca Cann and colleagues compared DNA of Africans, Asians, Caucasians, Australians, and New Guineans. Their findings were striking in two respects: The human within-species variability was only 1/25th as much as the average difference between human and chimpanzee DNA. The human and chimpanzee lineages diverged about 5 million years ago, and 1/25th of 5 million is 200,000. Cann therefore concluded that Homo sapiens originated in Africa about 200,000 years ago. Much additional molecular data and hominid remains further support a recent African origin of Homo sapiens, now estimated to be around 160,000-150,000 years ago. Earliest Evidence. The Dmanisi evidence suggests early Europeans developed in Asia and migrated to Europe, creating modern Europeans with minor interaction with African Homo types; this work appeared in the July 5, 2002 issue of the journal Science and was the subject of the cover story of the August issue of National Geographic magazine. New Asian finds are significant, they say, especially the 1.75 million-year-old small-brained early-human fossils found in Dmanisi, Georgia, and the 18,000-year-old "hobbit" fossils (Homo floresiensis) discovered on the island of Flores in Indonesia. Such finds suggest that Asia's earliest human ancestors may be older by hundreds of thousands of years than previously believed, the scientists say. Robin Dennell, of the University of Sheffield in England, and Wil Roebroeks, of Leiden University in the Netherlands, describe their ideas in the December 22, 2005 issue of Nature. The fossil and archaeological finds characteristic of early modern humans are represented at various sites in East and South Africa, which date between 160,000 and 77,000 years ago. Herto (Middle Awash, Ethiopia). In June 2003, hominid remains of a new subspecies, "Homo sapiens idaltu", were published. Three skulls (two adults, one juvenile) are interpreted as the earliest near-modern humans: 160,000-154,000 BP. They exhibit some modern traits (very large cranium; high, round skull; flat face without browridge), but also retain archaic features (heavy browridge; widely spaced eyes). Their anatomy and antiquity link earlier archaic African forms to later fully modern ones, providing strong evidence that East Africa was the birthplace of "Homo sapiens". Omo-Kibish (Ethiopia). In 1967, Richard Leakey and his team uncovered a partial hominid skeleton (Omo I), which had the features of Homo sapiens. Another partial fragment of a skull (Omo II) revealed a cranial capacity over 1,400 cc.
Dating of shells from the same level gave a date of 130,000 years. Ngaloba, Laetoli area (Tanzania). A nearly complete skull (LH 18) was found in Upper Ngaloba Beds. Its morphology is largely modern, yet it retains some archaic features such as prominent brow ridges and a receding forehead. Dated at about 120,000 years ago. Border Cave (South Africa). Remains of four individuals (a partial cranium, 2 lower jaws, and a tiny buried infant) were found in a layer dated to at least 90,000 years ago. Although fragmentary, these fossils appeared modern. Klasies River (South Africa). Site occupied from 120,000 to 60,000 years ago. Most human fossils come from a layer dated to around 90,000 years ago. They are fragmentary: cranial, mandibular, and postcranial pieces. They appear modern, especially a fragmentary frontal bone that lacks a brow ridge. Chin and tooth size also have a modern aspect. Blombos Cave (South Africa). A layer dated to 77,000 BCE yielded 9 human teeth or dental fragments, representing five to seven individuals, of modern appearance. Anatomy. African skulls have reduced browridges and small faces. They tend to be higher, more rounded than classic Neanderthal skulls, and some approach or equal modern skulls in basic vault shape. Where cranial capacity can be estimated, the African skulls range between 1,370 and 1,510 cc, comfortably within the range of both the Neanderthals and anatomically modern people. Mandibles tend to have significantly shorter and flatter faces than did the Neanderthals. Postcranial parts indicate people who were robust, particularly in their legs, but who were fully modern in form. Out-of-Africa 2: The debate. Most anthropologists agree that a dramatic shift in hominid morphology occurred during the last glacial epoch. About 150,000 years ago the world was inhabited by a morphologically heterogeneous collection of hominids: Neanderthals in Europe; less robust archaic Homo sapiens in East Asia; and somewhat more modern humans in East Africa (Ethiopia) and also SW Asia. By 30,000 years ago, much of this diversity had disappeared. Anatomically modern humans occupied all of the Old World. In order to understand how this transition occurred, we need to answer two questions: Unfortunately, genes don't fossilize, and we cannot study the genetic composition of ancient hominid populations directly. However, there is a considerable amount of evidence that we can bring to bear on these questions through the anatomical study of the fossil record and the molecular biology of living populations. The shapes of teeth from a number of hominid species suggest that arrivals from Asia played a greater role in colonizing Europe than hominids direct from Africa, a new analysis of more than 5,000 ancient teeth suggests. (Proceedings of the National Academy of Sciences Aug 2007) Two opposing hypotheses for the transition to modern humans have been promulgated over the last decades: The "Multi-regional" model. This model proposes that ancestral Homo erectus populations throughout the world gradually and independently evolved first through archaic Homo sapiens, then to fully modern humans. In this case, the Neanderthals are seen as European versions of archaic sapiens. Recent advocates of the model have emphasized the importance of gene flow among different geographic populations, making their move toward modernity not independent but tied together as a genetic network over large geographical regions and over long periods of time. 
Since these populations were separated by great distances and experienced different kinds of environmental conditions, there was considerable regional variation in morphology among them. One consequence of this widespread phyletic transformation would be that modern geographic populations would have very deep genetic roots, having begun to separate from each other a very long time ago, perhaps as much as a million years. This model essentially sees multiple origins of Homo sapiens, and no necessary migrations. The "Out-of-Africa"/"Replacement" model. This second hypothesis considers a geographically discrete origin, followed by migration throughout the rest of the Old World. By contrast with the first hypothesis, here we have a single origin and extensive migration. Modern geographic populations would have shallow genetic roots, having derived from a speciation event in relatively recent times. Hominid populations were genetically isolated from each other during the Middle Pleistocene. As a result, different populations of Homo erectus and archaic Homo sapiens evolved independently, perhaps forming several hominid species. Then, between 200,000 and 100,000 years ago, anatomically modern humans arose someplace in Africa and spread out, replacing other archaic sapiens including Neanderthals. The replacement model does not specify how anatomically modern humans usurped local populations. However, the model posits that there was little or no gene flow between hominid groups. Hypothesis testing. If the "Multi-regional Model" were correct, then it should be possible to see in modern populations echoes of anatomical features that stretch way back into prehistory: this is known as regional continuity. In addition, the appearance in the fossil record of advanced humans might be expected to occur more or less simultaneously throughout the Old World. By contrast, the "Out-of-Africa Model" predicts little regional continuity and the appearance of modern humans in one locality before they spread into others. Out-of-Africa 2: The evidence. Until relatively recently, there was a strong sentiment among anthropologists in favor of extensive regional continuity. In addition, Western Europe tended to dominate the discussions. Evidence has expanded considerably in recent years, and now includes molecular biology data as well as fossils. Now there is a distinct shift in favor of some version of the "Out-of-Africa Model". Discussion based on detailed examination of fossil record and mitochondrial DNA needs to address criteria for identifying: Fossil record. Regional Continuity. The fossil evidence most immediately relevant to the origin of modern humans is to be found throughout Europe, Asia, Australasia, and Africa, and goes back in time as far as 300,000 years ago. Most fossils are crania of varying degrees of incompleteness. They look like a mosaic of Homo erectus and Homo sapiens, and are generally termed archaic sapiens. It is among such fossils that signs of regional continuity are sought, being traced through to modern populations. For example, some scholars (Alan Thorne) argue for such regional anatomical continuities among Australasian populations and among Chinese populations. In the same way, some others believe a good case can be made for regional continuity in Central Europe and perhaps North Africa. Replacement. 
By contrast, proponents of a replacement model argue that, for most of the fossil record, the anatomical characters being cited as indicating regional continuity are primitive, and therefore cannot be used uniquely to link specific geographic populations through time. The equatorial anatomy of the first modern humans in Europe presumably is a clue to their origin: Africa. There are sites from the north, east and south of the African continent with specimens of anatomical modernity. One of the most accepted is Klasies River in South Africa. The recent discovery of remains of H. sapiens idaltu at Herto (Ethiopia) confirms this evidence. Does this mean that modern Homo sapiens arose as a speciation event in Eastern Africa (Ethiopia), populations migrating north, eventually to enter Eurasia? This is a clear possibility. The earlier appearance of anatomically modern humans in Africa than in Europe and in Asia too supports the "Out-of-Africa Model". Molecular biology. Just as molecular evidence had played a major role in understanding the beginnings of the hominid family, so too could it be applied to the later history, in principle. However, because that later history inevitably covers a shorter period of time - no more than the past 1 million years - conventional genetic data would be less useful than they had been for pinpointing the time of divergence between hominids and apes, at least 5 million years ago. Genes in cell nuclei accumulate mutations rather slowly. Therefore trying to infer the recent history of populations based on such mutations is difficult, because of the relative paucity of information. DNA that accumulates mutations at a much higher rate would, however, provide adequate information for reading recent population history. That is precisely what mitochondrial DNA (mtDNA) offers. MtDNA is a relatively new technique to reconstruct family trees. Unlike the DNA in the cell nucleus, mtDNA is located elsewhere in the cell, in compartments that produce the energy needed to keep cells alive. Unlike an individual's nuclear genes, which are a combination of genes from both parents, the mitochondrial genome comes only from the mother. Because of this maternal mode of inheritance, there is no recombination of maternal and paternal genes, which sometimes blurs the history of the genome as read by geneticists. Potentially, therefore, mtDNA offers a powerful way of inferring population history. MtDNA can yield two major conclusions relevant for our topic: the first addresses the depth of our genetic routes, the second the possible location of the origin of anatomically modern humans. Expectations. Results. The argument that genetic variation among widely separated populations has been homogenized by gene flow (interbreeding) is not tenable any more, according to population geneticists. Intermediate Model. Although these two hypotheses dominate the debate over the origins of modern humans, they represent extremes; and there is also room for several intermediate models. In any case the result would be a much less clearcut signal in the fossil record. Case studies. Southwest Asia. Neanderthal fossils have been found in Israel at several sites: Kebara, Tabun, and Amud. For many years there were no reliable absolute dates. Recently, these sites were securely dated. The Neanderthals occupied Tabun around 110,000 years ago. However, the Neanderthals at Kebara and Amud lived 55,000 to 60,000 years ago. 
By contrast, at Qafzeh Cave, located nearby, remains currently interpreted as of anatomically modern humans have been found in a layer dated to 90,000 years ago. These new dates lead to the surprising conclusion that Neanderthals and anatomically modern humans overlapped - if not directly coexisted - in this part of the world for a very long time (at least 30,000 years). Yet the anatomical evidence of the Qafzeh hominid skeletons reveals features reminiscent of Neanderthals. Although their faces and bodies are large and heavily built by today's standards, they are nonetheless claimed to be within the range of living peoples. Yet, a recent statistical study comparing a number of measurements among Qafzeh, Upper Paleolithic and Neanderthal skulls found those from Qafzeh to fall in between the Upper Paleolithic and Neanderthal norms, though slightly closer to the Neanderthals. Portugal. The Lagar Velho 1 remains, found in a rockshelter in Portugal dated to 24,500 years ago, correspond to the complete skeleton of a four-year-old child. This skeleton has anatomical features characteristic of early modern Europeans: Yet, intriguingly, a number of features also suggest Neanderthal affinities: Thus, the Lagar Velho child appears to exhibit a complex mosaic of Neanderthal and early modern human features. This combination can only have resulted from a mixed ancestry; something that had not been previously documented for Western Europe. The Lagar Velho child is interpreted as the result of interbreeding between indigenous Iberian Neanderthals and early modern humans dispersing throughout Iberia sometime after 30,000 years ago. Because the child lived several millennia after Neanderthals were thought to have disappeared, its anatomy probably reflects a true mixing of these populations during the period when they coexisted and not a rare chance mating between a Neanderthal and an early modern human. Population dispersal into Australia/Oceania. Based on current data (and conventional view), the evidence for the earliest colonization of Australia would be as follows: Over the past decade, however, this consensus has been eroded by the discovery and dating of several sites: Yet these early dates reveal numerous problems related to stratigraphic considerations and dating methods. Therefore, many scholars are skeptical of their value. If accurate, these dates require significant changes in current ideas, not just about the initial colonization of Australia, but about the entire chronology of human evolution in the early Upper Pleistocene. Either fully modern humans were present well outside Africa at a surprisingly early date or the behavioral capabilities long thought to be uniquely theirs were also associated, at least to some degree, with other hominids. As a major challenge, the journey from Southeast Asia and Indonesia to Australia, Tasmania and New Guinea would have required sea voyages, even with sea levels at their lowest during glacial maxima. So far, there is no archaeological evidence from Australian sites of vessels that could have made such a journey. However, what were coastal sites during the Ice Age are mostly now submerged beneath the sea. Summary. Overall the evidence suggested by mitochondrial DNA is the following:
5,033
Introduction to Paleoanthropology/MiddlePaleolithic Technology. Stone tool industry. Neanderthals and their contemporaries seem to have been associated everywhere with similar stone tool industries, called the Mousterian (after Le Moustier Cave in France). Therefore no fundamental behavioral difference is noticeable. The implication may be that the anatomical differences between Neanderthals and near-moderns have more to do with climatic adaptation and gene flow than with differences in behavior. Archaeological sites are dominated by flake tools. By contrast, Acheulean sites are dominated by large handaxes and choppers. Handaxes are almost absent from Middle Paleolithic sites. Oldowan hominids used mainly flake tools as well. However, unlike the small, irregular Oldowan flakes, the Middle Paleolithic hominids produced quite symmetric, regular flakes using sophisticated methods. The main method is called the Levallois technique, and it involves three steps in the core reduction: Mousterian tools are more variable than Acheulean tools. Traditionally, tools have been classified into a large number of distinct types based on their shape and inferred function. Among the most important ones are: Francois Bordes found that Middle Paleolithic sites did not reveal a random mix of tool types, but fell into one of four categories that he called Mousterian variants. Each variant had a different mix of tool types. Bordes concluded that these sites were the remains of four wandering groups of Neanderthals, each preserving a distinct tool tradition over time and structured much like modern ethnic groups. Recent studies give reason to doubt Bordes' interpretation. Many archaeologists believe that the variation among sites results from differences in the kinds of activity performed at each locality. For example, Lewis Binford argued that differences in tool types depend on the nature of the site and the nature of the work performed. Some sites may have been base camps where people lived, while others may have been camps at which people performed subsistence tasks. Different tools may have been used at different sites for woodworking, hide preparation, or butchering prey. Recently, however, microscopic studies of wear patterns on Mousterian tools suggest that the majority of tools were used mainly for woodworking. As a result, there seems to be no association between a tool type (such as a point or a side-scraper) and the task for which it was used. Microscopic analyses of the wear patterns on Mousterian tools also suggest that stone tools were hafted, probably to make spears. Mousterian hominids usually made tools from rocks acquired locally; raw materials used to make tools can typically be found within a few kilometers of the site considered. Subsistence Patterns. Neanderthal sites contain bones of many animals alongside Mousterian stone tools. European sites are rich in bones of red deer, fallow deer, bison, aurochs, wild sheep, wild goat and horse, while eland, wildebeest and zebra are often found at African sites. Archaeologists find only a few bones of very large animals such as hippopotamus, rhinoceros and elephant, even though they were plentiful in Africa and Europe. This pattern has provoked as much debate, and similar ambiguity, as for earlier hominids regarding the type of food procurement responsible for the presence of these bones: hunting or scavenging. Several general models have been offered to explain Mousterian faunal exploitation: Obligate Scavenger Model.
Some archaeologists (such as Lewis Binford) believe that Neanderthals and their contemporaries in Africa never hunted anything larger than small antelope, and even these prey were acquired opportunistically, not as a result of planned hunts. Any bones of larger animals were acquired by scavenging. As evidence in support of this view, the body parts which dominate (skulls and foot bones) are those commonly available to scavengers. Binford believes that hominids of this period did not have the cognitive skills to plan and organize the cooperative hunts necessary to bring down large prey; in this view, Mousterian hominids were nearly as behaviorally primitive as early Homo. Flexible Hunter-Scavenger Model. Other scientists argue that Neanderthals likely were not obligate scavengers, but that during times when plant foods were abundant they tended to scavenge rather than hunt. At other times, when plant foods were less abundant, Neanderthals hunted regularly. Their interpretation is of a flexible faunal exploitation strategy that shifted between hunting and scavenging. Less-Adept Hunter Model. Other scientists believe that Neanderthals were primarily hunters who regularly killed large animals, but that they were less effective hunters than are modern humans. They point out that animal remains at Mousterian sites are often made up of one or two species: the prey animals are large creatures, and they are heavily overrepresented at these sites compared with their occurrence in the local ecosystem. It is hard to see how an opportunistic scavenger would acquire such a non-random sample of the local fauna. One important feature of this model is that prey such as eland are not as dangerous as buffalo; Middle Paleolithic hominids were forced to focus on the less dangerous (but less abundant) eland because they were unable to kill the fierce buffalo regularly. Fully Adept Hunter Model. Finally, some scientists argue that scavenging was not a major component of the Middle Paleolithic diet and that there is little evidence of a less effective hunting strategy. Skeletal element abundance and cut mark patterning would be consistent with hunting. Overall, there is currently no evidence that Middle Paleolithic hominids differed from Upper Paleolithic hominids in scavenging or hunting, the most fundamental aspect of faunal exploitation. The differences that separate Middle Paleolithic hominids from modern hominids may not reside in scavenging versus hunting or the types of animals that they pursued. Differences in the effectiveness of carcass use and processing, with their direct implications for caloric yield, may be more important. Neanderthals lacked sophisticated meat and fat storage technology, as well as productive fat rendering technology. At a minimum, the lack of storage capabilities and a lower caloric yield per carcass would have forced Neanderthals to use larger foraging ranges to increase the likelihood of successful encounters with prey. Cannibalism. Marks on human bones from the Middle Paleolithic can be the result of two phenomena: violence and cannibalism. Violence. Violence can be recognized on bone assemblages by: Evidence for violence in the Middle Paleolithic is extremely rare. Body processing and cannibalism. By contrast, evidence of body processing and cannibalism is becoming more widespread at different times and in different geographical areas. Criteria. Criteria required for a "minimal taphonomic signature" of cannibalism: These criteria must be found on both hominid and ungulate remains.
Finally, the types of bones usually broken are the crania and limb bones. Patterns. Different behavioral patterns toward the dead among Middle Paleolithic Neanderthals:
Modern Physics/Potential Momentum. Potential Momentum. In classical physics we know that the dynamics can often be described by a potential energy alone. Now we've seen that in relativity the energy is just the temporal component of the momentum 4-vector, so we should expect the same of the potential energy. To see how this works, we'll reason by analogy from the classical case. For a free, non-relativistic particle of mass "m", the total energy "E" equals the kinetic energy "K" and is related to the momentum Π of the particle by In the non-relativistic case, the momentum is Π = "mv", where v is the particle velocity. If the particle is not free, but is subject to forces associated with a potential energy "U"("x","y","z"), then the equation must be modified to account for the contribution of "U" to the total energy: The force on the particle is related to the potential energy by For a free, relativistic particle, we have The obvious way to add forces to the relativistic case is by rewriting this equation with a potential energy: However, formula_6 is a four-vector, so an equation with something subtracted from just one of the components of this four-vector is not relativistically invariant. In other words, this equation doesn't obey the principle of relativity, and therefore cannot be correct! How can we fix this problem? One way is to define a new four-vector with "U/c" being its timelike part and some new vector Q being its spacelike part: We then subtract Q from the momentum Π. When we do this, equation (13.5) becomes The quantity Q is called the potential momentum and "Q" is the potential four-momentum. If |Π - Q| is much smaller than "mc", this becomes approximately This expression for the energy has the same form as the Hamiltonian we looked at for classical velocity-dependent forces, so we know it predicts a force perpendicular to the velocity when this condition is met. It turns out to be perpendicular even when this condition is not met. In classical physics the potential momentum is an optional extra. In relativity it is a necessary part of any potential field. Some additional terminology is useful. We define the difference Π - Q as the kinetic momentum, since in the classical case it reduces to "mv". In order to avoid confusion, we rename Π the total momentum. Thus, the total momentum equals the kinetic plus the potential momentum, in analogy with energy.
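The formula images for this section are not reproduced here, so the following LaTeX block is a sketch reconstructing, from the surrounding prose, the relations the text appears to refer to. The symbols follow the text (Π for total momentum, Q for potential momentum, U for potential energy); the original's equation numbering and exact notation are not assumed.
\begin{align*}
E &= \sqrt{\Pi^2 c^2 + m^2 c^4} && \text{(free relativistic particle)} \\
E &= \sqrt{\Pi^2 c^2 + m^2 c^4} + U && \text{(naive, non-invariant guess)} \\
E - U &= \sqrt{|\boldsymbol{\Pi} - \mathbf{Q}|^2 c^2 + m^2 c^4} && \text{(with potential four-momentum } Q^\mu = (U/c,\ \mathbf{Q})\text{)} \\
E &\approx m c^2 + \frac{|\boldsymbol{\Pi} - \mathbf{Q}|^2}{2 m} + U && \text{(when } |\boldsymbol{\Pi} - \mathbf{Q}| \ll m c\text{)}
\end{align*}
In the last line the rest energy mc² is a constant offset, so the expression reduces to the familiar kinetic-plus-potential form.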
Modern Physics/The Law of Gravitation. Law of Gravitation. How do we calculate the force of gravity on an object? It seems like a simple enough task. However, the familiar approach is only applicable when we want to calculate the force of gravity from the Earth on an object on the Earth. What if we want to calculate the force of gravity of an asteroid in space from another asteroid, or the force of gravity on a spacecraft from the Sun? This is where universal gravitation comes in. Of Newton's accomplishments, the discovery of the universal law of gravitation ranks as one of his greatest. Imagine two masses, "M"1 and "M"2, separated by a distance "r". The gravitational force has the magnitude where "G" is the "gravitational constant": The force is always attractive, and acts along the line joining the centers of the two masses. Let's notice a couple of things here. First of all, since gravity is a force, it is a vector. The formula described above only gives us its magnitude. What is the direction? Well, that depends on which force of gravity we are talking about. Since there are two objects here, "M"1 and "M"2, there are also two forces: the force of gravity on "M"1 by "M"2 and the force of gravity on "M"2 by "M"1. The magnitudes of both forces are the same, but their directions are opposite one another. This is actually a case of Newton's third law, as both forces are equal and opposite. Notice that "r" is squared in the bottom of the fraction. This tells us not only that gravity gets weaker as "r" increases, but that it gets weaker very quickly, in proportion to the square of the distance. This is why, although technically every object in the universe attracts every other object through gravity, we can often disregard the force as negligible. Vector Notation. Let's say that we have two masses, M and m, separated by a distance r, and a distance vector R. The relationship between R and r is given by: We will also change our force into a force vector, acting in the direction of R: And this gives us our final vector equation: Notice that since the ratio of R to r is a unit vector, including it does not alter the magnitude given by the equation; it only specifies the direction in which the force acts.
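The formula images for this chapter are likewise missing here. As a sketch, the standard expressions being described are given below in LaTeX; the value of G and the sign convention in the vector form (the minus sign encodes that the force on m points back toward M along R) are standard assumptions, not taken from the original images.
\begin{align*}
F &= \frac{G M_1 M_2}{r^2}, & G &\approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}, \\
r &= |\mathbf{R}|, & \mathbf{F} &= -\frac{G M m}{r^2}\,\frac{\mathbf{R}}{r}.
\end{align*}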
Introduction to Paleoanthropology/Evolution/Food Production. Food Production. The ways in which humans procure resources are not unlimited. Essentially, there are five major procurement patterns practiced in the world today: Food Collection: Hunting and Gathering. People who practice a hunting and gathering subsistence strategy simply rely on whatever food is available in their local habitat, for the most part collecting various plant foods, hunting wild game, and fishing (where the environment permits). They collect but they do not produce any food. For example, crops are not cultivated and animals are not kept for meat or milk. Hunters and gatherers do and did modify the landscape to increase the amount of available food. One of the main ways hunters and gatherers modified their environment was through the use of burning. Today, only about 30,000 people make their living in this fashion. Cultures of agriculturalists, having larger ecological footprints, have pushed most hunters and gatherers out of the areas where plant food and game are abundant and into the more marginal areas of the earth: the Arctic, arid deserts, and dense tropical rain forests. Food Production: Terminology. Food Production: General term which covers types of domestication involving both plants and animals, each of which requires radically different practices. Cultivation: Term refers to all types of plant culture, from slash-and-burn to growing crop trees. Terminological distinctions within the term cultivation are based on the types of gardens maintained and the means with which they are cultivated. Example: the distinction between horticulture and agriculture. Animal Husbandry: Term refers to all types of animal rearing practices, ranging from chickens to cattle. Centers of early domestication. Southwest Asia. Credited with domesticating: Dog, pig, goat, sheep, wheat, barley, oat, peas, lentils, apples. China. Credited with domesticating: Rice. Africa. Credited with domesticating: Sorghum, cattle. Mesoamerica. Credited with domesticating: Maize (corn), squash, pumpkin, sunflower, turkey.
High School Mathematics Extensions/Supplementary/Partial Fractions. Introduction. Before we begin, consider the following: How do we calculate this sum? At first glance it may seem difficult, but if you use variables instead of numbers each term in the sum above would take the form: which you can rewrite as Thus we can rewrite the original problem as follows: Regrouping: So all terms except the first and the last cancel out, yielding: Believe it or not, we've just done partial fractions! Partial fractions is a method of breaking down fractions whose denominators involve products into sums of simpler fractions. General method. So, how do we do partial fractions? Look at the example below: Factorizing the denominator: Then we suppose we can break it down into two fractions with denominators "(z - 1)" and "(z - 2)" respectively, and let their numerators be "a" and "b": Multiplying by "(z - 1)(z - 2)": Therefore, by matching coefficients of like powers of z, we have: Furthermore: Therefore Generally speaking, this method only works with proper fractions. Fractions whose numerators have a degree greater than or equal to that of their denominators need to be divided out first. 1. Can you find an equivalent expression to formula_16 that is defined for formula_17? Power factors. In the last section we talked about factorizing the denominator and making each factor the denominator of one term. But what happens when there are repeated factors? Can we apply the same method? See the example below: Rather than the method suggested above, we would like to use another approach to handle the problem. We first leave out a factor to put the fraction into non-repeated form, do partial fractions on that, then multiply the factor back in and apply partial fractions to the two resulting fractions. Partial fractions on the latter part: Multiplying by "(x + 2)(x - 1)": By matching coefficients of like powers of x, we have: Substituting "a = 4 - b" into "(B)", Hence "b = 1" and "a = 3". We carry on: Now we do partial fractions once more: Multiplying by "(x + 2)(x - 1)": By matching coefficients of like powers of x, we have: Substituting "a = -b" into "(B)", we have: Hence "b = 1/3" and "a = -1/3". So finally, 2. What about formula_35? Quadratic factors. We should always try to factor the denominator, for the sake of simplicity. There are some cases, though, in which factoring a polynomial leads to complex coefficients. Since these do anything but simplify our task, we shall leave the polynomial "as it is". That is, as irreducible quadratic factors: When dealing with the quadratic factor, we should use the following partial fraction: Leading to: Multiplying by "(x + 3)(x^2 + 2)": By matching coefficients of like powers of x, we have: Solving: Therefore "b = 5", "a = 2" and "c = 7". Finally: 3. Try breaking down formula_47.
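Because the formula images are not shown here, the following worked LaTeX example illustrates the two ideas of this chapter on fractions of my own choosing (they are not necessarily the ones used in the original): the telescoping sum from the introduction, and a basic decomposition over distinct linear factors.
\begin{align*}
\frac{1}{n(n+1)} &= \frac{1}{n} - \frac{1}{n+1}
  &&\Longrightarrow\quad
  \sum_{n=1}^{N} \frac{1}{n(n+1)} = 1 - \frac{1}{N+1}, \\
\frac{1}{(z-1)(z-2)} &= \frac{a}{z-1} + \frac{b}{z-2}
  &&\text{where } a(z-2) + b(z-1) = 1 \;\Rightarrow\; a = -1,\ b = 1.
\end{align*}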
Introduction to Paleoanthropology/UpperPaleolithic. Early Upper Paleolithic Cultures. Aurignacian. The Aurignacian is a tool industry from the Upper Paleolithic in Europe. A tool industry contains standardized tools and production techniques that indicate a shared cultural knowledge. The Aurignacian industry contains blades (long, thin stone flakes), burins (stone chisels for working bone and wood), bone points, as well as the beginning of prehistoric art. First Discovered. The name is associated with the Aurignac Rockshelter in the Pyrenees in Southwestern France. Material Culture. The tools of the Aurignacian are quite standardized, which shows planning and foresight in their creation. In addition, the inclusion of tools to work bone and wood shows a wider variety of raw materials being used in tool production. Mortuary practices. The Aurignacian period includes definitive elaborate burials with grave goods. Burials can indicate the social status of the deceased as well as the beginning of religious concepts associated with life and death. Grave goods provide archaeologists with important social and cultural information. Symbolic Expression. Proliferation of various forms of personal ornaments: Artistic Expression. Types of evidence: Engraved block characteristics: Figurine characteristics: Gravettian. Artistic Expression. Types: Animal figurine characteristics: Animals most frequently depicted are dangerous species (felines and bears), continuing the Aurignacian tradition. By contrast, Magdalenian animal statuettes from the same region show very different patterns (N=139): Dangerous animals represent only 10% of the total. Female figurine characteristics: Widespread distribution over Europe and Russia, except Spain, where there is no evidence of Venuses. Parietal art characteristics: From 21 sites, a list of 47 animals has been identified: Dangerous animals (rhinoceros, bear, lion) depicted during the Gravettian do not constitute more than 11% of determinable animals: there is a strong preponderance of hunted animals, with the horse very widely dominant. Late Upper Paleolithic Cultures. Solutrean. Artistic expression. Types: Characteristics: Magdalenian. Artistic expression. Types:
Physics Study Guide/Electronics. Electronics is the application of electromagnetic (and quantum) theory to construct devices that can perform useful tasks, from devices as simple as electrical heaters or light bulbs to ones as complex as the Large Hadron Collider. Introduction. To discuss electronics we need the basic concepts from electricity: charge, current (which is the flow of charge), and potential (which is the potential energy difference between two places). Please make sure these concepts are familiar before continuing. Circuits. The main interest of electronics is circuits. A circuit consists of wires that connect components. Typical components are resistors and voltage sources. In order to maintain a constant electrical current, a continued expenditure of chemical or mechanical energy is required. A voltaic cell is a common electromotive power source for a circuit. It consists primarily of two plates, a positive copper plate and a negative zinc plate, placed in dilute sulfuric acid. When a key is switched or the circuit is closed, the zinc reacts with the sulfuric acid in which it is placed. The electromotive force applied to the circuit is converted from chemical to electrical energy at the surface of the zinc plate, and the resulting current travels through the sulfuric acid. However, the voltaic (or galvanic) cell has disadvantages. When the circuit is closed, hydrogen bubbles accumulate on the surface of the copper plate, decreasing the total electromotive force applied to the circuit. The resistance that the current encounters as it flows from the zinc plate to the copper plate through the dilute sulfuric acid is known as internal resistance; the voltaic cell uses diluted rather than concentrated sulfuric acid so as to reduce the internal resistance of the component that applies the electromotive force to the circuit. A circuit can be open, when there is a break so that no current can flow, or it can be closed, so that current can flow. These definitions allow us to discuss electronics efficiently. Direct current and alternating current. When electrons move through a medium in one direction only, without reversing, the current is direct current (DC). DC is used in almost every electronic component. In alternating current (AC), as the name suggests, the direction of the current reverses back and forth periodically; AC is mainly used to transmit electrical energy efficiently over long distances. Ohm's law. If 'V' is the potential difference applied across the two ends of a conductor and 'I' is the current flowing through the conductor, then 'I' is directly proportional to 'V': V = I x R, where R is the resistance of the conductor. Kirchhoff's laws. Kirchhoff's laws generally hold for direct current (DC) circuits, but can fail when dealing with rapidly changing currents and voltages, such as alternating current (AC) or signal processing in combination with capacitors, inductors, and antennas. Kirchhoff's current law. The sum of all the currents entering and leaving any point in a circuit is equal to zero. It is based on the assumption that current flows only in conductors, and that whenever current flows into one end of a conductor it immediately flows out the other end. Kirchhoff's voltage law. The sum of all the voltages around a closed circuit loop is equal to zero. 
It is based on the assumption that there is no fluctuating magnetic field linking the closed circuit loop. Power. Power is the work done per unit time: P = work done / time taken, and P = I * V (current times voltage). Other equations for P can be derived using Ohm's law: since V = IR, substituting gives P = I * (IR) = I^2 * R, and likewise P = V^2 / R. Resistors in series. In a series circuit, resistance increases as more resistors are added. Each resistor adds to the restriction of the flow of charge. The current through the battery is inversely proportional to the total resistance. The equivalent (total) resistance is calculated by adding the resistances together: Rtotal = R1 + R2 + R3 + ... Resistors in parallel. 1/R = 1/R1 + 1/R2 + 1/R3 + ... Semiconductors. Current is the rate of flow of charge: I = Q/t, where I = current [amperes - A], Q = charge [coulombs - C], and t = time [seconds - s]. Voltage is equal to current multiplied by resistance, and power is equal to the product of voltage and current. Semiconductor electronics concerns the flow of current through semiconductor materials such as silicon and germanium. Semiconductors are materials that behave more like conductors at higher temperatures. Transistors, diodes, and SCRs are some examples of semiconductor devices.
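As a small illustration of the series/parallel and power formulas above, here is a minimal Python sketch. It is not part of the original study guide; the function names and the example component values are invented for illustration.
def series(*resistances):
    # Equivalent resistance of resistors in series: R = R1 + R2 + ...
    return sum(resistances)

def parallel(*resistances):
    # Equivalent resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in resistances)

# Example: a 100-ohm resistor in series with a 220 || 330 ohm pair across a 9 V source.
r_total = series(100.0, parallel(220.0, 330.0))   # ohms
v = 9.0                                           # volts
i = v / r_total                                   # Ohm's law: I = V / R
p = v * i                                         # power: P = V * I (= I^2 * R = V^2 / R)
print(f"R_total = {r_total:.1f} ohm, I = {i*1000:.2f} mA, P = {p*1000:.2f} mW")
Running this sketch gives an equivalent resistance of 232 ohms, a current of about 38.8 mA, and a power of about 349 mW.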
Buddhist Philosophy/Esoteric Buddhism. Introduction. Esoteric Buddhism is generally classified under Great Vehicle (Mahayana) Buddhism. There are two parts to it: Exoteric and Esoteric Buddhism. In order to understand Esoteric Buddhism, we need to explore its origins, as well as the constituent philosophy and the historical context in which the respective schools were established. Exoteric Buddhism is based on the Madhyamika ('middle way') of Nagarjuna. Esoteric Buddhism requires the study of Exoteric Buddhism as its foundation. Esoteric Buddhism is taught to practitioners as an 'advanced' dharma. Doctrine. The doctrine of Esoteric (or Tantric) Buddhism is based on the "Mahavairocana-sutra" and the "Kalacakraindriya-sutra". The following may best explain the doctrine: Reference. Lecture delivered by Master Sheng-yen on Esoteric and Exoteric Buddhism. Article by A. P. Sinnett, "Esoteric Buddhism".
Curl/Interviews. Interviews. Please feel free to add your own questions and answers! =Interview with "The Fridge"= Are you known as 'the Fridge' because you are a kewl programmer? When I last had a temperature I wasn't so kewl. I think Fridge is the transcription of the German word "Fritte" (like potato chips), which is my nickname in Germany. Is Curl evolutionary or revolutionary? This binary question is not so easily answered. Curl is of course evolutionary because there have been other programming languages before. It wouldn't be the MIT if they hadn't learnt from previous mistakes. On the other hand, Curl comes with a revolutionary license policy. Don't ask for details, the answer might take months. What is your favourite piece of Curl programming? Hint: try and include curlchat. I would say the "Hello World" program, as it nicely shows how markup language is integrated in the object-oriented programming language. There are also some really good applications around; most of them are not available on the net, but have a look at www.curlchat.info for example and you will know what I mean. Can Curl be used for programming mobile phones or PDAs? No, not yet. If the Asian market produces enough revenue there might be a possibility that Curl is ported to these devices. Why is your baby son a worshipped deity? I don't understand the question. How does Curl fit in with the Semantic Web? Oh, a technical question. Curl is great at collecting information and preparing it in a sensible way. Yes, Curl programs can give meaning to arbitrary sets of data retrieved from different sources. What is your connection to alchemy and time travel? Yesterday, I was working with Curl version 4.2 that was released in 1992. But unfortunately, I was already so rich (look at all these gold nuggets on my desk) that I couldn't see the ingenuity of the release. Why are you so keen on Curl? Because. Do you really want a personal answer?
Ruby Programming/GUI Toolkit Modules/Tk. Of the Ruby GUI bindings, the Tk binding is the oldest; it is widely available and still more or less the default toolkit for GUI programming with Ruby. Nevertheless, there is currently no comprehensive manual for Ruby/Tk; the Ruby book recommends inferring Ruby/Tk usage from the Perl/Tk documentation. The current Ruby "PickAxe book" has a chapter on Ruby/Tk. Documentation. You may also be able to comfortably read Perl/Tk’s documentation, as the Ruby bindings are said to be modeled after Perl’s. Look and Feel. The “look and feel” of your Tk application depends on the version of the Tk library your Ruby interpreter is linked against: Availability. If the Tk toolkit isn’t already installed on your system, you’ll have to install it. You may use your system’s “package manager” for this. If you built Ruby before having installed the Tk dev package, it’s likely that it was built without Tk support. For 1.9 versions you might be able to get away with installing it as a gem, but your best bet is to install the Tk dev package and reinstall Ruby so that it builds with the Tk bindings. Windows. By default the “old” one-click installer has the Tk binaries; however, you’ll still need to install the Tk toolkit from ActiveState. If you’re using the new “rubyinstaller” then for 1.8.6 this might help or for 1.9 this might help. There’s a precompiled gem which should work out-of-the-box: tk-win. It includes sources and libraries directly from Hidetoshi NAGAI. It’s Ruby 1.9 only. You could also try the ffi-tk gem, or download this for 1.9 mingw users. The hope is that future versions of rubyinstaller will come with the binaries built in, so that the above work-arounds won’t be necessary.
Perl Programming/Simple examples 2. Hi-Lo: A simple game written in Perl that asks you for a guess between 1 and 100 and tells you if you are too high or low.
use warnings;
use strict;

$| = 1;    # flush output immediately so prompts appear before input is read
print "Enter number of games to play: ";
chomp(my $Num_Games = <STDIN>);

my $Num_Guesses = 0;
for my $gameno (1 .. $Num_Games) {
    my $number = 1 + int rand 100;    # secret number between 1 and 100
    my $guess;
    do {
        print "Enter guess from 1 to 100: ";
        chomp($guess = <STDIN>);
        ++$Num_Guesses;
        if ($guess < $number) {
            print "Higher!\n";
        } elsif ($guess > $number) {
            print "Lower!\n";
        }
    } until $guess == $number;
    print "Correct!\nAverage guesses per game: ", $Num_Guesses / $gameno, "\n\n";
}
print "Games played: $Num_Games\n";
Voter's Guide. The Wikibooks Voter's Guide is a project designed to educate potential voters on how to vote, as well as the possible benefits and disadvantages of voting in particular ways. Though this is the English Wikibooks project and all Voter's Guides contained herein should be in English, there should be intense cooperation and interlinking to ensure that voters can access a Voter's Guide for their country and in their native language. To this end, the English project should have English-language guides to elections in countries like the United States, Canada, Australia, South Africa, the United Kingdom and Jamaica; guides should also be made available in English for any country, even non-Anglophone lands like China, Japan and France. Each Voter's Guide should have several sections (using the US as an example):
Canadian History/Introduction. An Introduction to Canada. Canada is a land of vast distances and rich natural resources. Canada was founded in 1867 and retains ties to the United Kingdom through its membership in the Commonwealth. Canada has strong historic ties with both France and the United Kingdom and has traditionally supported and sided with these nations during international conflicts. Recently, however, Canada has tended to side with the United States of America when international issues arise. Such a relationship with the United States results from the strong economic ties between these two neighboring nations. Major social and political issues within the nation include funding of the armed forces, taxation levels, healthcare and education funding, and the relationship between the province of Quebec and the federal government, a relationship known nationwide for being politically uneasy.
Voter's Guide/United States. 2014 elections. The next election in the United States will be the biennial election for Senators (one third are up for election) and all Representatives. The next presidential election will occur in 2016. The last election in the United States was the 2012 presidential election. The race attracted attention worldwide. Here are some guides to the issues likely to be of importance to voters in forthcoming elections. And here is some information on the political parties that will likely play a role in the 2014 elections.
Computer Programming/OS Programming. Operating System (OS) Programming. A computer is a series of electrical pathways, some of which can be changed by the use of some of its own electrical pathways. A computer instruction is a series of electrical signals that make use of a computer. A computer program is a series of computer instructions. An operating system (OS) is a computer program with the purposes of: OS programming is ... Many people are dissatisfied with currently available operating systems and research ways to improve them, or to introduce entirely new replacements. Some develop entirely new computing hardware and need to write a new OS for it (typically by adapting parts of some pre-existing OS).