Oracle Database/SQL Cheatsheet. This "cheat sheet" covers most of the basic functionality that an Oracle DBA needs to run basic queries and perform basic tasks. It also contains information that a PL/SQL programmer frequently uses to write stored procedures. The resource is useful as a primer for individuals who are new to Oracle, or as a reference for those who are experienced at using Oracle. A great deal of information about Oracle exists throughout the net. We developed this resource to make it easier for programmers and DBAs to find most of the basics in one place. Topics beyond the scope of a "cheatsheet" generally provide a link to further research. Other Oracle References SELECT. The SELECT statement is used to retrieve rows selected from one or more tables, object tables, views, object views, or materialized views. SELECT * FROM beverages WHERE field1 = 'Kona' AND field2 = 'coffee' AND field3 = 122; SELECT INTO. Select into takes the values "name", "address" and "phone number" out of the table "employee", and places them into the variables "v_employee_name", "v_employee_address", and "v_employee_phone_number". This "only" works if the query matches a single item. If the query returns no rows it raises the codice_1 built-in exception. If your query returns more than one row, Oracle raises the exception codice_2. SELECT name,address,phone_number INTO v_employee_name,v_employee_address,v_employee_phone_number FROM employee WHERE employee_id = 6; INSERT. The INSERT statement adds one or more new rows of data to a database table. insert using the VALUES keyword INSERT INTO table_name VALUES ('Value1', 'Value2', ... ); INSERT INTO table_name( Column1, Column2, ... ) VALUES ( 'Value1', 'Value2', ... ); insert using a SELECT statement INSERT INTO table_name( SELECT Value1, Value2, ... from table_name ); INSERT INTO table_name( Column1, Column2, ... ) ( SELECT Value1, Value2, ... from table_name ); DELETE. The DELETE statement is used to delete rows in a table. deletes rows that match the criteria DELETE FROM table_name WHERE some_column=some_value DELETE FROM customer WHERE sold = 0; UPDATE. The UPDATE statement is used to update rows in a table. updates the entire column of that table UPDATE customer SET state='CA'; updates the specific record of the table eg: UPDATE customer SET name='Joe' WHERE customer_id=10; updates the column invoice as paid when paid column has more than zero. UPDATE movies SET invoice='paid' WHERE paid > 0; SEQUENCES. Sequences are database objects that multiple users can use to generate unique integers. The sequence generator generates sequential numbers, which can help automatically generate unique primary keys, and coordinate keys across multiple rows or tables. CREATE SEQUENCE. The syntax for a sequence is: CREATE SEQUENCE sequence_name MINVALUE value MAXVALUE value START WITH value INCREMENT BY value CACHE value; For example: CREATE SEQUENCE supplier_seq MINVALUE 1 MAXVALUE 999999999999999999999999999 START WITH 1 INCREMENT BY 1 CACHE 20; ALTER SEQUENCE. 
Increment a sequence by a certain amount: ALTER SEQUENCE <sequence_name> INCREMENT BY <integer>; ALTER SEQUENCE seq_inc_by_ten INCREMENT BY 10; Change the maximum value of a sequence: ALTER SEQUENCE <sequence_name> MAXVALUE <integer>; ALTER SEQUENCE seq_maxval MAXVALUE 10; Set the sequence to cycle or not cycle: ALTER SEQUENCE <sequence_name> <CYCLE | NOCYCLE>; ALTER SEQUENCE seq_cycle NOCYCLE; Configure the sequence to cache a value: ALTER SEQUENCE <sequence_name> CACHE <integer> | NOCACHE; ALTER SEQUENCE seq_cache NOCACHE; Set whether or not to return the values in order ALTER SEQUENCE <sequence_name> <ORDER | NOORDER>; ALTER SEQUENCE seq_order NOORDER; ALTER SEQUENCE seq_order; Generate query from a string. It is sometimes necessary to create a query from a string. That is, if the programmer wants to create a query at run time (generate an Oracle query on the fly), based on a particular set of circumstances, etc. Care should be taken not to insert user-supplied data directly into a dynamic query string, without first vetting the data very strictly for SQL escape characters; otherwise you run a significant risk of enabling data-injection hacks on your code. Here is a very simple example of how a dynamic query is done. There are, of course, many different ways to do this; this is just an example of the functionality. PROCEDURE oracle_runtime_query_pcd IS TYPE ref_cursor IS REF CURSOR; l_cursor ref_cursor; v_query varchar2(5000); v_name varchar2(64); BEGIN v_query := 'SELECT name FROM employee WHERE employee_id=5'; OPEN l_cursor FOR v_query; LOOP FETCH l_cursor INTO v_name; EXIT WHEN l_cursor%NOTFOUND; END LOOP; CLOSE l_cursor; END; String operations. Length. Length returns an integer representing the length of a given string. It can be referred to as: length b, length c, length 2, and length 4. length( string1 ); SELECT length('hello world') FROM dual; this returns 11, since the argument is made up of 11 characters including the space SELECT lengthb('hello world') FROM dual; SELECT lengthc('hello world') FROM dual; SELECT length2('hello world') FROM dual; SELECT length4('hello world') FROM dual; these also return 11, since the functions called are equivalent Instr. Instr (in string) returns an integer that specifies the location of a sub-string within a string. The programmer can specify which appearance of the string they want to detect, as well as a starting position. An unsuccessful search returns 0. instr( string1, string2, [ start_position ], [ nth_appearance ] ) instr( 'oracle pl/sql cheatsheet', '/'); this returns 10, since the first occurrence of "/" is the tenth character instr( 'oracle pl/sql cheatsheet', 'e', 1, 2); this returns 17, since the second occurrence of "e" is the seventeenth character instr( 'oracle pl/sql cheatsheet', '/', 12, 1); this returns 0, since the first occurrence of "/" is before the starting point, which is the 12th character Replace. Replace looks through a string, replacing one string with another. If no other string is specified, it removes the string specified in the replacement string parameter. replace( string1, string_to_replace, [ replacement_string ] ); replace('i am here','am','am not'); this returns "i am not here" Substr. Substr (substring) returns a portion of the given string. The "start_position" is 1-based, not 0-based. If "start_position" is negative, substr counts from the end of the string. If "length" is not given, substr defaults to the remaining length of the string. 
substr( "string", start_position [, length]) SELECT substr( 'oracle pl/sql cheatsheet', 8, 6) FROM dual; SELECT substr( 'oracle pl/sql cheatsheet', 15) FROM dual; SELECT substr('oracle pl/sql cheatsheet', -10, 5) FROM dual; Trim. These functions can be used to filter unwanted characters from strings. By default they remove spaces, but a character set can be specified for removal as well. trim ( [ leading | trailing | both ] [ trim-char ] from string-to-be-trimmed ); trim (' removing spaces at both sides '); this returns "removing spaces at both sides" ltrim ( string-to-be-trimmed [, trimming-char-set ] ); ltrim (' removing spaces at the left side '); this returns "removing spaces at the left side " rtrim ( string-to-be-trimmed [, trimming-char-set ] ); rtrim (' removing spaces at the right side '); this returns " removing spaces at the right side" DDL SQL. Tables. Create table. The syntax to create a table is: CREATE TABLE [table name] ( [column name] [datatype], ... ); For example: CREATE TABLE employee (id int, name varchar(20)); Add column. The syntax to add a column is: ALTER TABLE [table name] ADD ( [column name] [datatype], ... ); For example: ALTER TABLE employee ADD (id int); Modify column. The syntax to modify a column is: ALTER TABLE [table name] MODIFY ( [column name] [new datatype] ); ALTER table syntax and examples: For example: ALTER TABLE employee MODIFY( sickHours s float ); Drop column. The syntax to drop a column is: ALTER TABLE [table name] DROP COLUMN [column name]; For example: ALTER TABLE employee DROP COLUMN vacationPay; Constraints. Displaying constraints. The following statement shows all constraints in the system: SELECT table_name, constraint_name, constraint_type FROM user_constraints; Selecting referential constraints. The following statement shows all referential constraints (foreign keys) with both source and destination table/column couples: SELECT c_list.CONSTRAINT_NAME as NAME, c_src.TABLE_NAME as SRC_TABLE, c_src.COLUMN_NAME as SRC_COLUMN, c_dest.TABLE_NAME as DEST_TABLE, c_dest.COLUMN_NAME as DEST_COLUMN FROM ALL_CONSTRAINTS c_list, ALL_CONS_COLUMNS c_src, ALL_CONS_COLUMNS c_dest WHERE c_list.CONSTRAINT_NAME = c_src.CONSTRAINT_NAME AND c_list.R_CONSTRAINT_NAME = c_dest.CONSTRAINT_NAME AND c_list.CONSTRAINT_TYPE = 'R' Setting constraints on a table. The syntax for creating a check constraint using a CREATE TABLE statement is: CREATE TABLE table_name column1 datatype null/not null, column2 datatype null/not null, CONSTRAINT constraint_name CHECK (column_name condition) [DISABLE] For example: CREATE TABLE suppliers supplier_id numeric(4), supplier_name varchar2(50), CONSTRAINT check_supplier_id CHECK (supplier_id BETWEEN 100 and 9999) Unique Index on a table. The syntax for creating a unique constraint using a CREATE TABLE statement is: CREATE TABLE table_name column1 datatype null/not null, column2 datatype null/not null, CONSTRAINT constraint_name UNIQUE (column1, column2, column_n) For example: CREATE TABLE customer id integer not null, name varchar2(20), CONSTRAINT customer_id_constraint UNIQUE (id) Adding unique constraints. The syntax for a unique constraint is: ALTER TABLE [table name] ADD CONSTRAINT [constraint name] UNIQUE( [column name] ) USING INDEX [index name]; For example: ALTER TABLE employee ADD CONSTRAINT uniqueEmployeeId UNIQUE(employeeId) USING INDEX ourcompanyIndx_tbs; Adding foreign constraints. The syntax for a foregin constraint is: ALTER TABLE [table name] ADD CONSTRAINT [constraint name] FOREIGN KEY (column...) 
REFERENCES table [(column...)] [ON DELETE {CASCADE | SET NULL}] For example: ALTER TABLE employee ADD CONSTRAINT fk_departament FOREIGN KEY (departmentId) REFERENCES departments(Id); Deleting constraints. The syntax for dropping (removing) a constraint is: ALTER TABLE [table name] DROP CONSTRAINT [constraint name]; For example: ALTER TABLE employee DROP CONSTRAINT uniqueEmployeeId; INDEXES. An index is a method that retrieves records with greater efficiency. An index creates an entry for each value that appears in the indexed columns. By default, Oracle creates B-tree indexes. Create an index. The syntax for creating an index is: CREATE [UNIQUE] INDEX index_name ON table_name (column1, column2, . column_n) [ COMPUTE STATISTICS ]; UNIQUE indicates that the combination of values in the indexed columns must be unique. COMPUTE STATISTICS tells Oracle to collect statistics during the creation of the index. The statistics are then used by the optimizer to choose an optimal execution plan when the statements are executed. For example: CREATE INDEX customer_idx ON customer (customer_name); In this example, an index has been created on the customer table called customer_idx. It consists of only of the customer_name field. The following creates an index with more than one field: CREATE INDEX customer_idx ON supplier (customer_name, country); The following collects statistics upon creation of the index: CREATE INDEX customer_idx ON supplier (customer_name, country) COMPUTE STATISTICS; Create a function-based index. In Oracle, you are not restricted to creating indexes on only columns. You can create function-based indexes. The syntax that creates a function-based index is: CREATE [UNIQUE] INDEX index_name ON table_name (function1, function2, . function_n) [ COMPUTE STATISTICS ]; For example: CREATE INDEX customer_idx ON customer (UPPER(customer_name)); An index, based on the uppercase evaluation of the customer_name field, has been created. To assure that the Oracle optimizer uses this index when executing your SQL statements, be sure that UPPER(customer_name) does not evaluate to a NULL value. To ensure this, add UPPER(customer_name) IS NOT NULL to your WHERE clause as follows: SELECT customer_id, customer_name, UPPER(customer_name) FROM customer WHERE UPPER(customer_name) IS NOT NULL ORDER BY UPPER(customer_name); Rename an Index. The syntax for renaming an index is: ALTER INDEX index_name RENAME TO new_index_name; For example: ALTER INDEX customer_id RENAME TO new_customer_id; In this example, customer_id is renamed to new_customer_id. Collect statistics on an index. If you need to collect statistics on the index after it is first created or you want to update the statistics, you can always use the ALTER INDEX command to collect statistics. You collect statistics so that oracle can use the indexes in an effective manner. This recalcultes the table size, number of rows, blocks, segments and update the dictionary tables so that oracle can use the data effectively while choosing the execution plan. The syntax for collecting statistics on an index is: ALTER INDEX index_name REBUILD COMPUTE STATISTICS; For example: ALTER INDEX customer_idx REBUILD COMPUTE STATISTICS; In this example, statistics are collected for the index called customer_idx. Drop an index. The syntax for dropping an index is: DROP INDEX index_name; For example: DROP INDEX customer_idx; In this example, the customer_idx is dropped. DBA Related. User Management. Creating a user. 
The syntax for creating a user is: CREATE USER username IDENTIFIED BY password; For example: CREATE USER brian IDENTIFIED BY brianpass; Granting privileges. The syntax for granting privileges is: GRANT privilege TO user; For example: GRANT dba TO brian; Change password. The syntax for changing user password is: ALTER USER username IDENTIFIED BY password; For example: ALTER USER brian IDENTIFIED BY brianpassword; Importing and exporting. There are two methods of backing up and restoring database tables and data. The 'exp' and 'imp' tools are simpler tools geared towards smaller databases. If database structures become more complex or are very large ( > 50 GB for example) then using the RMAN tool is more appropriate. Import a dump file using IMP. This command is used to import Oracle tables and table data from a *.dmp file created by the 'exp' tool. Remember that this a command that is executed from the command line through $ORACLE_HOME/bin and not within SQL*Plus. The syntax for importing a dump file is: imp KEYWORD=value There are number of parameters you can use for keywords. To view all the keywords: imp HELP=yes An example: imp brian/brianpassword FILE=mydump.dmp FULL=yes PL/SQL. Operators. Arithmetic operators. Examples. gives all employees from customer id 5 a 5% raise UPDATE employee SET salary = salary * 1.05 WHERE customer_id = 5; determines the after tax wage for all employees SELECT wage – tax FROM employee; Comparison operators. Examples. SELECT name, salary, email FROM employees WHERE salary > 40000; SELECT name FROM customers WHERE customer_id < 6; String operators. create or replace procedure addtest( a in varchar2(100), b in varchar2(100), c out varchar2(200) IS begin C:=concat(a,'-',b); Types. Basic PL/SQL Types. Scalar type (defined in package STANDARD): NUMBER, CHAR, VARCHAR2, BOOLEAN, BINARY_INTEGER, LONG\LONG RAW, DATE, TIMESTAMP and its family including intervals) Composite types (user-defined types): TABLE, RECORD, NESTED TABLE and VARRAY LOB datatypes : used to store an unstructured large amount of data %TYPE – anchored type variable declaration. The syntax for anchored type declarations is <var_name> <obj>%type [not null][:= <init-val>]; For example name Books.title%type; /* name is defined as the same type as column 'title' of table Books */ commission number(5,2) := 12.5; x commission%type; /* x is defined as the same type as variable 'commission' */ Note: Collections. A collection is an ordered group of elements, all of the same type. It is a general concept that encompasses lists, arrays, and other familiar datatypes. Each element has a unique subscript that determines its position in the collection. --Define a PL/SQL record type representing a book: TYPE book_rec IS RECORD (title book.title%TYPE, author book.author_last_name%TYPE, year_published book.published_date%TYPE); --define a PL/SQL table containing entries of type book_rec: Type book_rec_tab IS TABLE OF book_rec INDEX BY BINARY_INTEGER; my_book_rec book_rec%TYPE; my_book_rec_tab book_rec_tab%TYPE; my_book_rec := my_book_rec_tab(5); find_authors_books(my_book_rec.author); There are many good reasons to use collections. References. Stored logic. Functions. A function must return a value to the caller. 
The syntax for a function is CREATE [OR REPLACE] FUNCTION function_name [ (parameter [,parameter]) ] RETURN [return_datatype] IS [declaration_section] BEGIN executable_section return [return_value] [EXCEPTION exception_section] END [function_name]; For example: CREATE OR REPLACE FUNCTION to_date_check_null(dateString IN VARCHAR2, dateFormat IN VARCHAR2) RETURN DATE IS BEGIN IF dateString IS NULL THEN return NULL; ELSE return to_date(dateString, dateFormat); END IF; END; Procedures. A procedure differs from a function in that it must not return a value to the caller. The syntax for a procedure is: CREATE [OR REPLACE] PROCEDURE procedure_name [ (parameter [,parameter]) ] IS [declaration_section] BEGIN executable_section [EXCEPTION exception_section] END [procedure_name]; When you create a procedure or function, you may define parameters. There are three types of parameters that can be declared: Also you can declare a DEFAULT value; CREATE [OR REPLACE] PROCEDURE procedure_name [ (parameter [IN|OUT|IN OUT] [DEFAULT "value"] [,parameter]) ] The following is a simple example of a procedure: /* purpose: shows the students in the course specified by courseId */ CREATE OR REPLACE Procedure GetNumberOfStudents ( courseId IN number, numberOfStudents OUT number ) IS /* although there are better ways to compute the number of students, this is a good opportunity to show a cursor in action */ cursor student_cur is select studentId, studentName from course where course.courseId = courseId; student_rec student_cur%ROWTYPE; BEGIN OPEN student_cur; LOOP FETCH student_cur INTO student_rec; EXIT WHEN student_cur%NOTFOUND; numberOfStudents := numberOfStudents + 1; END LOOP; CLOSE student_cur; EXCEPTION WHEN OTHERS THEN raise_application_error(-20001,'An error was encountered – '||SQLCODE||' -ERROR- '||SQLERRM); END GetNumberOfStudents; anonymous block. DECLARE x NUMBER(4) := 0; BEGIN x := 1000; BEGIN x := x + 100; EXCEPTION WHEN OTHERS THEN x := x + 2; END; x := x + 10; dbms_output.put_line(x); EXCEPTION WHEN OTHERS THEN x := x + 3; END; Passing parameters to stored logic. There are three basic syntaxes for passing parameters to a stored procedure: positional notation, named notation and mixed notation. The following examples call this procedure for each of the basic syntaxes for parameter passing: CREATE OR REPLACE PROCEDURE create_customer( p_name IN varchar2, p_id IN number, p_address IN varchar2, p_phone IN varchar2 ) IS BEGIN INSERT INTO customer ( name, id, address, phone ) VALUES ( p_name, p_id, p_address, p_phone ); END create_customer; Positional notation. Specify the same parameters in the same order as they are declared in the procedure. This notation is compact, but if you specify the parameters (especially literals) in the wrong order, the bug can be hard to detect. You must change your code if the procedure's parameter list changes. create_customer('James Whitfield', 33, '301 Anystreet', '251-222-3154'); Named notation. Specify the name of each parameter along with its value. An arrow (=>) serves as the association operator. The order of the parameters is not significant. This notation is more verbose, but makes your code easier to read and maintain. You can sometimes avoid changing code if the procedure's parameter list changes, for example if the parameters are reordered or a new optional parameter is added. Named notation is a good practice to use for any code that calls someone else's API, or defines an API for someone else to use. 
create_customer(p_address => '301 Anystreet', p_id => 33, p_name => 'James Whitfield', p_phone => '251-222-3154'); Mixed notation. Specify the first parameters with positional notation, then switch to named notation for the last parameters. You can use this notation to call procedures that have some required parameters, followed by some optional parameters. create_customer(v_name, v_id, p_address=> '301 Anystreet', p_phone => '251-222-3154'); Table functions. CREATE TYPE object_row_type as OBJECT ( object_type VARCHAR(18), object_name VARCHAR(30) CREATE TYPE object_table_type as TABLE OF object_row_type; CREATE OR REPLACE FUNCTION get_all_objects RETURN object_table_type PIPELINED AS BEGIN FOR cur IN (SELECT * FROM all_objects) LOOP PIPE ROW(object_row_type(cur.object_type, cur.object_name)); END LOOP; RETURN; END; SELECT * FROM TABLE(get_all_objects); Flow control. Example. IF salary > 40000 AND salary <= 70000 THEN() ELSE IF salary>70000 AND salary<=100000 THEN() ELSE() If/then/else. IF [condition] THEN [statements] ELSEIF [condition] THEN ELSEIF [condition] THEN ELSEIF [condition] THEN ELSEIF [condition] THEN ELSEIF [condition] THEN ELSEIF [condition] THEN ELSEIF [condition] THEN ELSE END IF; Arrays. Example. DECLARE -- Associative array indexed by string: -- Associative array type TYPE population IS TABLE OF NUMBER INDEX BY VARCHAR2(64); -- Associative array variable city_population population; i VARCHAR2(64); BEGIN -- Add new elements to associative array: city_population('Smallville') := 2000; city_population('Midland') := 750000; city_population('Megalopolis') := 1000000; -- Change value associated with key 'Smallville': city_population('Smallville') := 2001; -- Print associative array by looping through it: i := city_population.FIRST; WHILE i IS NOT NULL LOOP DBMS_OUTPUT.PUT_LINE ('Population of ' || i || ' is ' || TO_CHAR(city_population(i))); i := city_population.NEXT(i); END LOOP; -- Print selected value from a associative array: DBMS_OUTPUT.PUT_LINE('Selected value'); DBMS_OUTPUT.PUT_LINE('Population of'); END; -- Printed results: Population of Megalopolis is 1000000 Population of Midland is 750000 Population of Smallville is 2001 DECLARE -- Record type TYPE apollo_rec IS RECORD commander VARCHAR2(100), launch DATE -- Associative array type TYPE apollo_type_arr IS TABLE OF apollo_rec INDEX BY VARCHAR2(100); -- Associative array variable apollo_arr apollo_type_arr; BEGIN apollo_arr('Apollo 11').commander := 'Neil Armstrong'; apollo_arr('Apollo 11').launch := TO_DATE('July 16, 1969','Month dd, yyyy'); apollo_arr('Apollo 12').commander := 'Pete Conrad'; apollo_arr('Apollo 12').launch := TO_DATE('November 14, 1969','Month dd, yyyy'); apollo_arr('Apollo 13').commander := 'James Lovell'; apollo_arr('Apollo 13').launch := TO_DATE('April 11, 1970','Month dd, yyyy'); apollo_arr('Apollo 14').commander := 'Alan Shepard'; apollo_arr('Apollo 14').launch := TO_DATE('January 31, 1971','Month dd, yyyy'); DBMS_OUTPUT.PUT_LINE(apollo_arr('Apollo 11').commander); DBMS_OUTPUT.PUT_LINE(apollo_arr('Apollo 11').launch); end; -- Printed results: Neil Armstrong 16-JUL-69 APEX. aka APEX, is a web-based software development environment that runs on an Oracle database.
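Recapping two earlier points from this cheatsheet, the following small PL/SQL block shows the exceptions that SELECT INTO can raise (NO_DATA_FOUND and TOO_MANY_ROWS) being handled explicitly, and a dynamic query built with a bind variable via EXECUTE IMMEDIATE, an alternative to the OPEN ... FOR cursor shown earlier that avoids concatenating user-supplied values into the query text. This is only a sketch; the employee table and the variable names are assumed for illustration.

DECLARE
  v_id    NUMBER := 6;
  v_name  employee.name%TYPE;
  v_query VARCHAR2(200);
BEGIN
  -- SELECT INTO works only when exactly one row matches
  BEGIN
    SELECT name INTO v_name FROM employee WHERE employee_id = v_id;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_name := NULL;   -- no matching row
    WHEN TOO_MANY_ROWS THEN
      raise_application_error(-20001, 'More than one employee matched');
  END;

  -- Dynamic SQL with a bind variable (:id), so the value is never
  -- concatenated into the query string
  v_query := 'SELECT name FROM employee WHERE employee_id = :id';
  EXECUTE IMMEDIATE v_query INTO v_name USING v_id;

  dbms_output.put_line(v_name);
END;
/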
Conart/Music/Tuning. Getting rid of some preconceptions. Shocking as it might seem to some people, the twelve tones you find in an octave on the piano don't actually reflect anything but an approximation of a certain selection of tones, which for some reason during the last 300 years became the backbone of western music. There is nothing inherently natural about the 12-tone system, and it is a mistake to think that music can only be made in it, or using parts of it (like major and minor scales, or pentatonic scales). In fact, a culture won't develop the 'intonation' that we use until it has figured out quite a bit about how frequencies work, and a certain bit of math. The Chinese are, as far as I know, the only culture outside Europe to have developed the same scale, and they did it way before us, but didn't use it as extensively. I will explain why, eventually. In the world, there are several different systems with all kinds of foreign tunings: Indian and Arabic scales, which come quite close to having an additional step between our semitones; Thai, which comes quite close to having seven equal steps to the octave; and Balinese scales, which have somewhat unequal steps with a twist. But more on those, and the whys and hows of them, later. If you want to write songs for your conculture, yet maintain a certain depth and uniqueness, copying the western system may not be a good idea. If you must use our scale, then it's possible to use chords in unusual ways or even to construct unusual chords. If you are interested in a truly strange scale, take a look at this site, which describes the Bohlen-Pierce 13-note scale, which is very different from any scale attested in any culture.
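To make the "bit of math" concrete, here is how the standard western tuning (twelve-tone equal temperament) approximates the acoustically pure intervals. Each semitone multiplies the frequency by the twelfth root of two, about 1.0595. Starting from A at 440 Hz, the note seven semitones higher is 440 × 2^(7/12) ≈ 659.3 Hz, while the pure fifth would be 440 × 3/2 = 660 Hz; the tempered fifth is therefore very slightly flat. Other intervals deviate more: the pure major third is 440 × 5/4 = 550 Hz, but four equal-tempered semitones give 440 × 2^(4/12) ≈ 554.4 Hz.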
Tcl Programming/TCL and ADP. Tcl and ADP Reference This is a reference for using Tcl with ADP pages, for people setting up a site with the ArsDigita Community System (see OpenACS for details). The first half is just a Tcl overview; the second half deals with .adp pages, custom tags, and AOLServer resources and parameters. Tcl Overview. Basic Tcl Language Features. <br>\ statement continuation if last character in line <br># comment out rest of line (if 1st non−whitespace char) <br>var simple variable <br>var(index) associative array variable <br>var(i,j) multidimensional associative array variable <br>$var variable substitution (also ${var}xyz) <br>[expr 1+2] command substitution <br>\char backslash substitution <br>"hello $a" quoting with substitution <br>{hello $a} quoting with no subst (deferred substitution) <br>"The only datatype in Tcl is a string. Some commands interpret arguments as numbers/booleans; those formats are:"' <br>Integer: 12 0xff(hex) 0377(octal) <br>Floating Pt: 2.1 3. 6e4 7.91e+16 <br>Boolean: true false 0 1 yes no Backslash Substitutions. \a audible alert (0x7) <br>\b backspace (0x8) <br>\f form feed (0xC) <br>\n newline (0xA) <br>\r carriage return (0xD) <br>\t horizontal tab (0x9) <br>\v vertical tab (0xB) <br>\space space <br>\newline space <br>\ddd octal value (d=0−7) <br>\xdd hex value (d=0−9,a−f) <br>\c replace ’\c’ with ’c’ <br>\\ backslash Operators and Math Functions. "The expr command recognizes these operators, in decreasing order of precedence:" "All operators support integers. All support floating point except : ~, %, «, », %, ^, and |. Boolean operators can also be used for string operands, in which case string comparison will be used. This occurs if any of the operands are not valid numbers. The following operators have "lazy evaluation" as in C:"<br>&&, ||, ?:<br>"The expr command recognizes the following math functions:"<br>abs hypot int double floor ceil fmod round<br>cos sin tan acos asin atan atan2 cosh sinh tanh<br>log log10 exp pow sqrt Lists. Note: list indices start at 0 and the word end may be used to reference the last element in the list. The Tcl <--> ADP interface. Including TCL code in ADP pages. "Inline replacement:" <%= tcl code that evaluates to the text you want embedded %> "State changing Code:" <% tcl commands to change tcl environment (set vars, do db queries, etc) %> "Example:" Two plus two is <%= [expr 2+2] %> Defining custom tags. ns_register_adptag "codeexample" "/codeexample" tcl_adp_codeexample proc tcl_adp_codeexample {string tagset} { return "<blockquote> codice_1 </blockquote> Resourcing tcl in AOLserver. "source−file.tcl − to source a particular file." "Note: this does not work for registered tag changes. To cause a reload of everything:" Using ns_set. "When one queries a database, the variable that is set is an ns_set:"
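The following minimal Tcl fragment illustrates several of the basic language features listed above: simple and associative-array variables, the two quoting styles, and command substitution with expr (the variable names are just examples).

set name "World"
set greeting "Hello, $name!"          ;# double quotes: $name is substituted
set literal  {Hello, $name!}          ;# braces: no substitution is performed
set price(apple) 3                    ;# associative array variable
set total [expr {$price(apple) * 4}]  ;# command substitution; total is 12
# In an .adp page the result could be embedded inline as:
#   Four apples cost <%= $total %>.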
Conlang/Intermediate/Irregularities. If you look at any natural language you will quickly find that its grammar is not completely regular. Almost any language will have some words that don't fit in to the usual pattern. Adding irregularities to your conlang is a must if you want it to feel more 'naturalistic'. Types of irregularities. Despite their name, most kinds of irregularities can be classed into groups. Sound change. "You should be familiar with sound changes from previous chapters (History and Common sound changes). This section aims to show how well-chosen sound changes can affect the regularity of a conlang." This is probably the most common cause of irregularities. Sound changes cause irregularities by changing a word differently depending on what form it is in. For example, if you have a noun ono and its plural is formed by suffixing a t to the end then you have a perfectly regular relationship. ono > onot However, if you make a sound change rule which states that o becomes u at the end of a word, then the relationship is no longer so obvious because the o changes into a u in the singular but doesn't in the plural because the o is no longer at the end of the word. onu > onot Using the right sound changes for your conlang is a good way of adding irregularity to it. Suppletion. Suppletion is a slightly rarer phenomenon so if you decide to use it in your conlang then you should do so sparingly. But, of course, before you can use it, you need to know what it is: Suppletion is when a completely separate word replaces one part of a paradigm. For example, the English word "went" is completely unrelated (etymologically speaking) to "go". "Went" is actually the past tense of a mostly forgotten verb "wend" which means more or less the same thing as "go". Because the verbs came to mean roughly the same thing, most people stopped using "wend" and used "go" instead. However, the past tense of "wend" survived and was used as the past tense of "go". Thus instead of the paradigm "go/goes", "is going", "goed" ("ging"), "has/have gone", we now have "go/goes", "is going", "went", "has/have gone". Suppletion also affected the English verb "to be" ("am/is/are", "am/is/are being", "was/were", "has/have been").
Electronics/Digital Circuits. Overview. The term "digital circuit" combines two ideas: "digital", meaning signals that take only discrete values (in practice usually the two levels 0 and 1), and "circuit", an arrangement of interconnected electrical components. A digital circuit is therefore an electric circuit designed to generate and process such discrete signals. Digital circuits are designed using a special type of mathematics called 'Boolean algebra'.
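For example, Boolean algebra works with variables that can take only the values 0 and 1, combined by operations such as AND, OR and NOT. A two-input AND gate is completely described by its truth table:

A B | A AND B
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1

Identities such as De Morgan's law, NOT(A AND B) = (NOT A) OR (NOT B), allow a designer to simplify a digital circuit before building it.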
JavaScript/Introduction. __NOEDITSECTION__ JS is a programming language that implements the international standard ECMAScript. It is based on the following concepts. Dynamic data types. JS knows some "primitive data types" (Number, String, Boolean, BigInt, Symbol, Undefined, Null) and diverse derivates of the data type "object" (Array, Date, Error, Function, RegExp). If a variable exists, its type is clearly defined. But the type can be changed at any time by assigning a value of a different type to the variable, e.g.: the code fragment codice_1 is perfectly correct. It will not create a compile-time or run-time error. Only the type of the variable codice_2 changes from "Undefined" to "String" to "Number" and lastly to "Object/Array". Functional programming. Functions are "first-class citizens" similar to variables. They can be assigned to variables, passed as arguments to other functions, or returned from functions. The code fragment codice_3 creates a function "sayHello", assigns it to the variable "x", and executes it by calling "x()". Object-orientated programming. JS supports object-oriented programming and inheritance through prototypes. A prototype is an object which can be cloned and extended. Doing so, a "prototype chain" arises. This differs from other OO-languages, e.g. Java, which uses "classes" for object-oriented features like inheritance. Nevertheless, at the syntactical level, "classes" are available in JS. But this is only 'syntactical sugar'. Under the hood, JS uses the prototype mechanism. C-like syntax. The JS syntax is very similar to that of C, Java, or other members of the C-family. But we must always consider that the concepts and runtime behavior are distinctly different. Relation to Java. JS has no relation to Java aside from having a C-like syntax. To avoid possible confusion, we would like to highlight some distinctions between JS and Java clearly. In the beginning, "Netscape" developed JavaScript, and "Sun Microsystems" developed Java. Java includes classes and object instances, whereas JavaScript uses prototypes. In Java, variables must be declared before usage, which is unnecessary (but not recommended) in JS. In Java, variables have an immutable static type (codice_4 or codice_5, for example) that remains the same during the complete lifespan of a running program. In JS they also have a type (codice_6 or codice_5, for example), but this type can change during the lifespan of a running program. The type is detected from the environment. Therefore it's not necessary and not possible to define the type explicitly. int x = 0; // Java: 'name of type', 'name of variable', ... let x = 0; // JS: 'let' or 'const', 'name of variable', ... // The type will be 'Number' because of the right side of the equal sign. let x = String (0); // JS: explicit change from 'Number' to 'String' BEFORE assignment to x // The type will be 'String'. Test it with: alert(typeof x) JS engines. JS can run on the client-side as well as on the server-side. First versions of JS have run in Browsers that acted as mere interpreters. Today, the language is handled by just-in-time compilers (JIT). They parse the script, create an Abstract Syntax Tree (AST), optimize the tree, generate a JIT-specific bytecode out of the AST, generate hardware-specific machine code out of the bytecode, and bring the machine code to execution. Such just-in-time compilers exist not only in Browsers. They can also be part of other applications, e.g.: "node.js " which is written mainly in C++. Widely used JS engines are:
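The short script below illustrates the two points made above: the type of a variable can change at run time, and functions are first-class values that can be assigned and passed around (the variable names are only examples).

let x;                 // type is Undefined
x = "hello";           // now String
x = 42;                // now Number
x = [1, 2, 3];         // now Object/Array
console.log(typeof x); // prints "object"

const sayHello = function () {  // a function assigned to a variable
  console.log("Hello World!");
};
sayHello();

// a function passed as an argument to another function
[1, 2, 3].forEach(function (n) { console.log(n * 2); });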
JavaScript/JavaScript within HTML. The language JavasScript was originally introduced to run in browsers and handle the dynamic aspects of user interfaces, e.g., validation of user input, modifications of page content (DOM) or appearance of the user interface (CSS), or any event handling. This implies that an interconnection point from HTML to JS must exist. The HTML element codice_1 plays this role. It is a regular HTML element, and its content is JS. The codice_1 element may appear almost anywhere within the HTML file, within codice_3 as well as in codice_4. There are only a few criteria for choosing an optimal place; see below. Internal vs. external JavaScript. The codice_1 element either contains JS code directly, or it points to an external file resp. URL containing the JS code through its codice_6 attribute. The first variant is called "Internal JavaScript" or "Inline JavaScript", the second "External JavaScript". In the case of "Internal JavaScript" the codice_1 element looks like: <script> // write your JS code directly here. (This line is a comment in JS syntax) alert("Hello World!"); </script> "Internal scripting" has the advantage that both your HTML and your JS are in one file, which is convenient for quick development. This is commonly used for temporarily testing out some ideas, and in situations where the script code is small or specific to that one page. For the "External JavaScript" the codice_1 element looks like: <!-- point to a file or to a URL where the code is located. (This line is a comment in HTML syntax) --> <script src="myScript.js"></script> <script src="js/myScript2.js"></script> <script src="https://example.com/dist/js/externallib.js"></script> <script src="https://example.com/dist/js/externallib.min.js"></script> <!-- although there is nothing within the script element, you should consider that the HTML5 spec --> <!-- doesn't allow the abbreviation of the script element to: <script src="myScript.js" /> --> Separate Files for Javascript Code. Having your JS in a separate file is recommended for larger programs, especially for such which are used on multiple pages. Furthermore, such splits support the pattern of Separation of Concerns: One specialist works on HTML, and another on JS. Also, it supports the division of the page's content (HTML) from its behavior (JS). Overall, using "External scripting" is considered a best practice for software development. Remote Code Injection vs. Local Library. With the example <script src="https://example.com/dist/js/externallib.min.js"></script> you can inject remotely maintained code from the server codice_9 in your local web project. Remote code updates may break your local project or unwanted code features may be injected into your web project. On the other hand, centralized maintained and updated libraries serve your project due to bugfixes that are automatically updated in your project when the library is fetched again from the remote server. Minified vs. Non-Minified Code. Minified Javascript code compresses the source code e.g. by shorting comprehensive variables like vImage into a single character variable a. This reduces significantly the size of the library and therefore reduces network traffic and response time until the web page is ready. For development and learning it might be helpful to have the uncompressed libraries locally available. External JavaScript. For more detailed information you can refer to MDN . The codice_6 attribute. 
Adding codice_11 to the opening codice_12 tag means that the JS code will be located in a file called "myScript.js" in the same directory as the HTML file. If the JS file is located somewhere else, you must change the codice_6 attribute to that path. For example, if it is located in a subdirectory called "js", it reads codice_14. The codice_15 attribute. JS is not the only scripting language for Web development, but JS is the most common one on client-side (PHP runs on server-side). Therefore it's considered the default script type for HTML5. The formal notation for the type is: codice_16. Older HTML versions know a lot of other script types. Nowadays, all of them are graded as "legacy". Some examples are: codice_17, codice_18, codice_19, or codice_20. In HTML5, the spec says that - if you use JS - the codice_15 attribute should be omitted from the script element , for "Internal Scripting" as well as for "External Scripting". <!-- Nowadays the type attribute is unnecessary --> <script type="text/javascript">...</script> <!-- HTML5 code --> <script>...</script> The codice_22 and codice_23 attributes. Old browsers use only one or two threads to read and parse HTML, JS, CSS, ... . This may lead to a bad user experience (UX) because of the latency time when loading HTML, JS, CSS, images, ... sequentially one after the next. When the page loads for the first time, the user may have the impression of a slow system. Current browsers can execute many tasks in parallel. To initiate this parallel execution with regards to JS loading and execution, the codice_1 element can be extended with the two attributes codice_22 and codice_23. The attribute codice_22 leads to asynchronous script loading (in parallel with other tasks), and execution as soon as it is available. <script async src="myScript.js"></script> codice_23 acts similar. It differs from codice_22 in that the execution is deferred until the page is fully parsed. <script defer src="myScript.js"></script> Location of codice_1 elements. The codice_12 element may appear almost anywhere within the HTML file. But there are, however, some best practices for speeding up a website . Some people suggest to locate it just before the closing codice_32 tag. This speeds up downloading, and also allows for direct manipulation of the Document Object Model (DOM) while it is rendered. But a similar behavior is initiated by the above-described codice_22 and codice_23 attributes. <!DOCTYPE html> <html> <head> <title>Example page</title> </head> <body> <!-- HTML code goes here --> <script src="myScript.js"></script> </body> </html> The codice_35 element. It may happen that people have deactivated JS in their browsers for security or other reasons. Or, they use very old browsers which are not able to run JS at all. To inform users in such cases about the situation, there is the codice_35 element. It contains text that will be shown in the browser. The text shall explain that no JS code will be executed. <!DOCTYPE html> <html> <head> <title>Example page</title> <script> alert("Hello World!"); </script> <noscript> alert("Sorry, the JavaScript part of this page will not be executed because JavaScript is not running in your browser. Is JavaScript intentionally deactivated?"); </noscript> </head> <body> <!-- HTML code goes here --> </body> </html> JavaScript in XHTML files. XHTML uses a stricter syntax than HTML. This leads to small differences. 
First, for "Internal JavaScript" it's necessary that the scripts are introduced and finished with the two additional lines shown in the following example. <script> // <![CDATA[ alert("Hello World!"); </script> Second, for "External JavaScript" the codice_15 attribute is required.
Conart/Music/Difference. So, what is different in these other scales? Apart from the fact that the pianos will have different layouts for their keys, there will be certain differences: However, there is also an opposite effect:
Geodesy. What is Geodesy? Webster defines geodesy as "that branch of applied mathematics which determines by observation and measurement the exact positions of points and the figures and areas of large portions of the earth's surface, the shape and size of the earth, and the variations of terrestrial gravity." It is a specialized application of several familiar facets of basic mathematical and physical concepts. In practice, geodesy uses the principles of , and , and applies them within the capabilities of modern engineering and technology. In the past, military geodesy was largely involved with the practical aspect of the determination of exact positions of points on the Earth's surface for mapping or artillery control purposes. The determination of the precise size and shape of the earth had a purely scientific role. Modern requirements are for answers to problems in , global and defensive missile operations.
Quantum Field Theory. This book is about Quantum Field Theory, a theoretical framework for constructing quantum mechanical models of subatomic particles.
Amateur Radio Manual/What is Voltage. So far we have learned that valence electrons will flow in a conductor if we provide enough energy or force so that the electrons will leave the atom and jump to the next atom. We can cause a net movement of electrons through a conductor by attaching one end to a negative source and the other to a positive source. Because like charges repel and unlike charges attract, electrons will move from the negative source to the positive end. It just sounds wrong to have something move from negative to positive but remember that negative and positive are charge types not amounts. The applied force that causes the electrons to flow is called voltage (after the scientist Alessandro Volta) or electromotive force (emf). We give it the symbol formula_1 or ℰ in equations. Voltage is measured with a voltmeter or multimeter and is the potential difference between two points in a circuit. The basic unit is the volt (V). Consider a battery. The chemical process in the battery creates a surplus of electrons at the negative terminal and a deficiency of electrons at the positive end (due to the fact that absence of electrons does not automatically indicate presence of positive charge, unless we consider this terminal charged with positive "ions"). The potential difference or voltage between these two points is the "push" that the battery can supply to a conductor that may connect the two terminals.
Amateur Radio Manual/What is Current. To make electrons flow in a conductor, we first must supply some form of electromotive force (potential difference). This can be achieved by attaching the conductor to some source (generator), like a battery or a power supply. Current, then, is the "rate of flow" of electrons through the conductor, measured in amperes (A) after the French scientist André-Marie Ampère. In formulae, current is represented by the letter I. That's a capital "eye". The ampere is a large unit, so we usually refer to the milliampere (mA), being 1/1000 of an ampere, for small measurements. One ampere is equivalent to about 6.24 × 10^18 charged particles (or 1 coulomb of charge) moving through a cross-section of the conductor in one second. Current is measured with an ammeter.
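As a small worked example: if 3 coulombs of charge pass a point in the conductor every 2 seconds, the current is I = Q / t = 3 C / 2 s = 1.5 A, which is 1500 mA.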
Perl Programming/Exercise 1 Answers. A. Getting started, displaying text.

#!/usr/bin/perl
print "Hello World!\n";

B. Displaying Numbers.

#!/usr/bin/perl
# Begin variable definitions
$numerator   = 4000;   # This is the numerator
$denomenator = 7;      # This is the denominator
$answer1 = $numerator / $denomenator;   # Answer to part i
$dec_places = 3;
printf("%.2f\n", $answer1);             # $answer1 printed to 2 decimal places
# End variable definitions
print "$numerator divided by $denomenator is: $answer1\n";
$mod_num = $numerator * 10**$dec_places;          # Modified numerator
$remainder = $mod_num % $denomenator;
$num_div_denom = ($mod_num - $remainder);         # $num_div_denom is divisible by $denomenator
$no_dec_places = $num_div_denom / $denomenator;   # This number has no decimal places
$answer2 = $no_dec_places / 10**$dec_places;      # Answer to part ii
print "$numerator divided by $denomenator to 3 decimal places is: $answer2\n";
# Rounds up if the remainder / denominator is > 0.5
$round = $remainder / $denomenator;
if ($round > 0.5) {
    $no_dec_places += 1;
}
$answer3 = $no_dec_places / 10**$dec_places;      # Answer to part iii
print "$numerator divided by $denomenator to 3 decimal places, rounded is: $answer3\n";
print "The number with three leading zeros: ";
print "0" x 3 . "$answer2\n";                     # Answer to part iv
if ($answer1 >= 0) {
    print "The number is positive (+): +$answer1\n";   # Answer to part v, first part
} else {
    print "The number is negative (-): $answer1\n";    # Answer to part v, second part
}

C. Functions. The function:

sub evaluate_delta_and_answer {
    my ($x, $y, $z) = @_;
    if ($x != 0) {
        $delta = ($y**2 - (4 * $x * $z));
        if ($delta < 0) {
            print "b^2-4ac is less than zero. Both roots undefined.\n\n";
            print "Program Terminated. Goodbye, Dave.\n\n";
        } elsif ($delta == 0) {
            $root = (0 - $y) / (2 * $x);
            print "b^2-4ac = 0. There will be only one root: " . $root . "\n\n";
            print "Goodbye, Dave.\n\n";
        } elsif ($delta > 0) {
            print "b^2-4ac > 0. There will be two roots.\n\n";
            $root1 = ((0 - $y) - ($delta)**(0.5)) / (2 * $x);
            $root2 = ((0 - $y) + ($delta)**(0.5)) / (2 * $x);
            print "The first root, x1 = " . $root1 . "\n\n";
            print "The second root, x2 = " . $root2 . "\n\n";
            print "Goodbye, Dave.\n\n";
        }
    } else {
        print "a = 0. This is not a quadratic function.\n";
        print "Goodbye, Dave.\n";
    }
}

The rest of the program:

print "This program takes three numbers (a, b and c) as coefficients\n";
print "of a quadratic equation, calculates its roots, and displays them\n";
print "on the screen for you.\n\n";
print "Please enter the value of a and press <ENTER>: ";
$a = <STDIN>;
print "\n";
print "Please enter the value of b and press <ENTER>: ";
$b = <STDIN>;
print "\n";
print "Please enter the value of c and press <ENTER>: ";
$c = <STDIN>;
print "\n";
evaluate_delta_and_answer($a, $b, $c);
Object Oriented Programming. Object-Oriented Programming (OOP) is a model of programming that uses Objects as representation of data and the data's properties. Objects can be defined as fields of data with unique properties, or attributes and methods (functions). At its heart, object-oriented programming is a mindset which respects programming as a problem-solving dilemma on a grand scale that requires careful application of abstractions and subdividing problems into manageable pieces. Compared to procedural programming, a superficial examination of code written in both styles would reveal that object-oriented code tends to be broken down into vast numbers of small pieces, with the hope that each piece will be trivially verifiable. OOP was one step towards the holy grail of software re-usability, although no new term has gained widespread acceptance, which is why "OOP" is used to mean almost any modern programming distinct from systems programming, assembly programming, functional programming, or database programming. Modern programming would be better categorized as "multi-paradigm" programming, and that term is sometimes used. This book is primarily aimed at modern, multi-paradigm programming, which has classic object oriented programming as its immediate predecessor and strongest influence. Historically, "OOP" has been one of the most influential developments in computer programming, gaining widespread use in the mid 1980s. Originally heralded for its facility for managing complexity in ever-growing software systems, OOP quickly developed its own set of difficulties. Fortunately, the ever evolving programming landscape gave us "interface" programming, design patterns, generic programming, and other improvements paving the way for more contemporary Multi-Paradigm programming. While some people will debate endlessly about whether or not a certain language implements "Pure" OOP—and bless or denounce a language accordingly—this book is not intended as an academic treatise on object oriented programming or its theory. Instead, we aim for something more pragmatic: we start with basic OO theory and then delve into a handful of real-world languages to examine how they support OO programming. Since we obviously cannot teach each language, the point is to illustrate the trade-offs inherent in different approaches to OOP. Although OOP is quite complex to beginners it becomes easy when you first fully understand what pillars the concept of OOP is built on.
Swedish/Lesson 2. Dialogue. Hans - a German tourist - is driving through Skåne in southern Sweden with his caravan looking for a place to stay the night. He meets Johan - a local Swedish farmer. Hans: Ursäkta mig. Vet du var husvagnscampingen ligger? <br> "English: Excuse me. Do you know where the caravan camp is located? "<br> Johan: Ja, det är bara att följa den här vägen 5 kilometer och sedan svänga åt vänster. <br> "English: Yes, just follow this road for 5 kilometers and then turn left. "<br> Hans: Tack så mycket! Jag har letat i flera timmar. <br> "English: Thank you very much! I have been looking for several hours. "<br> Johan: Det är ingen fara. Fin husvagn förresten. Varifrån kommer du? <br> "English: Don't worry about it. Nice caravan by the way. Where are you from?" <br> Hans: Jag kommer från Tyskland och är här i Sverige på semester över sommaren, som du kanske förstod. <br> "English: I'm from Germany and I'm here in Sweden on vacation during the summer, as you might have imagined. "<br> Johan: Jo, det är ju många tyskar som kommer hit under sommaren. <br> "English: Yes, there are many Germans coming here during the summer. "<br> Hans: Kanske för att Sverige är så underbart. Hur som helst måste jag åka nu. Tack så mycket för hjälpen igen! <br> "English: Maybe because Sweden is so wonderful. Anyhow I have to go now. Thanks so much for the help again! "<br> Johan: Ha en trevlig semester. Hej då! <br> "English: Have a pleasant vacation. Good bye! "<br> Travelling. Travelling in Sweden is easy. Most people understand and speak good English. Some people are also skilled in German, French, Spanish, and Finnish. Most Swedes can easily understand Norwegian and well-enunciated Danish. The biggest railway company is Statens Järnvägar (State Railroads). The dominating airline company is (SAS). SAS is a member of . There are also buses travelling all over the country. The biggest provider of long distance bus travel is Swebus Express. Driving. In Sweden, driving is done on the right side of the road. For some rules and regulations, see Vägverket (Swedish Road Administration). You can drive freely for one year on a valid non-Swedish driving licence, and forever if you have a driving licence from an country. Swiss or Japanese drivers’ licences can be exchanged for a Swedish licence for permanent residents. Use of public transportation (like bus or train) is encouraged. In Stockholm, there is a new charge for motorists to reduce congestion on the streets (see The Local). "Allemansrätten" - All People's Right. In Sweden, one is allowed to walk in forests and fields and pick berries, mushrooms and flowers, even if it is on private property. This is in Swedish called 'allemansrätten'. There are some restrictions, which mostly anyhow fall under rules of politeness. The right only extends to "the wild" (as forests and meadows), not to obviously planted gardens. The area closest to a house also is exempted. Thus, for instance, one should not camp directly in another's front yard, nor can one light a fire if fire restrictions are in effect. You should not pick flowers that are rare or planted. This also extends to branches and twigs on living trees; the owners of a forest may have planted them, and probably plans to sell or let their children sell the trees as wood to the Swedish forest industry, Thus, you only may take fallen wood for a fire. when they are full-grown. You can camp one night on another's land. 
This right also does not extend to littering or bothering the animals, and you must pack out everything you take in, Currency. The currency in Sweden is the Swedish krona (SEK) and öre, where 100 öre is 1 SEK. Öre are no longer used if paying cash. The exchange rate is about 8-9 SEK for 1 €. Many Swedes will use the word "crowns" for their currency, since that is the literal translation for "kronor". To get cash in Sweden is easy, visit the closest ATM (called Bankomat or Minuten) with your VISA, MasterCard or similar. The coins of less value than the 1 SEK are no longer used, however, you will still see prices such as 11 kronor 90 öre, but these prices are rounded to the closest amount in "kronor", if you pay cash. Vocabulary. As you can see, words can quite easily be put together to form new words. One difference from English is that in Swedish you don't separate the words, you write "en bilkarta", not "en bil karta". Writing it with a space is called "särskrivning", literally "separate writing", and should be avoided as it's incorrect. Grammar. This is some short introductory grammar to the definite form in Swedish. For a more complete guide, look at the page about nouns. Singular. To make a word into definite form, you add letters to the end of the word and remove the "en" or "ett" from before it. To do this, first you have to know if it's "en" or "ett". In most cases, not always though (I'll come back to this), you simply add a "n" to "en"-words, and a "t" to "ett"-words if they end with a vowel. If they end with a consonant, you do the same thing, but add an "e" between the word and the ending. Plural. Plural isn't as easy, as there's five "declensions". These you simply have to learn, but when you've read and spoken Swedish for a while, you won't have any trouble with it, you'll simply know when to use a certain ending. For now, ask a Swede, or look the word up in a dictionary. More on the declensions here (link to wikipedia).
Swedish/Vocabulary. How to use this vocabulary. All the conjugation forms of verbs, nouns and adjectives have been given here. If the ending of the word has been, at some point, divided with a slash (/), it means that the ending has to be attached to the word by dropping the following letter(s) separated by the slash from the rest of the word. E.g. "en abborr/e, -en, -ar, -arna" should become "en abborre, abborren, abborrar, abborrarna". Notice how the "e" in abborre is dropped, leaving the stem "abborr", to which the endings "-en", "-ar" or "-arna" are attached. If there is no dividing slash at some point in the end of a noun or a verb, the ending is directly attached to the word. E.g. "en arm, -en, -ar, -arna" becomes "en arm, armen, armar, armarna". Also notice that "en" and "ett" are both articles and are dealt with in the grammar section. Ä. (skol)Ämne -- subjekt
Quantum Field Theory/QFT Schwinger-Dyson. In quantum field theory, action is given by the functional S of field configurations (which only depends locally on the fields), then the time ordered vacuum expectation value of polynomially bounded functional F, <F>, is given by In fact, on shell equations for the classical case usually have their quantum analog because, in a hand wavy way, when integrating over regions of the configuration space which are significantly off shell, the rapidly oscillating phases would tend to produce "destructive interference" wheareas for regions close on shell, we tend to have "constructive interference". For example, what is the analog of the on shell Euler-Lagrange equations, formula_2? If the functional measure formula_3 turns out to be translationally invariant (we'll assume this for the rest of this article, although this does not hold for, let's say nonlinear sigma models) and if we assume that after a Wick rotation which now becomes for some "H", goes to zero faster than any reciprocal of any polynomial for large values of φ, integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger-Dyson equations: for any polynomially bounded functional "F". These equations are the analog of the on shell EL equations. If J (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure then, the generating functional Z of the source fields is defined to be: formula_7 Note that formula_8 where formula_9 Basically, if formula_10 is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike it's Wick rotated statistical mechanics analogue, because we have time ordering complications here!), then formula_11 are its moments and Z is its Fourier transform. If F is a functional of φ, then for an operator K, F[K] is defined to be the operator which substitutes K for φ. For example, if formula_12 and G is a functional of J, then formula_13. Then, from the properties of the functional integrals, we get the "master" Schwinger-Dyson equation: formula_14 If the functional measure is not translationally invariant, it might be possible to express it as the product formula_15 where M is a functional and formula_3 is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to Rn. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense. In that case, we would have to replace the S in this equation by another functional formula_17 If we expand this equation as a Taylor series about J=0, we get the entire set of Schwinger-Dyson equations. Now how about the on shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat. The functional measure would have to be invariant under the one parameter group of symmetry transformation as well. Let's see how it goes. Let's just assume for simplicity here that the symmetry in question is local (I don't mean local in the gauge sense. I mean local in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question.). 
Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian, and that formula_18 for some function f, where f only depends locally on φ (and possibly on the spacetime position). If we don't assume any special boundary conditions, this would not be a "true" symmetry in general unless f=0. Here, Q is a derivation which generates the one-parameter group in question. We could have antiderivations as well, for example BRST and supersymmetry. Let's also assume formula_19 for any polynomially bounded functional F. This property is called the invariance of the measure, and it does not hold in general; see anomaly (physics) for more details. Then, formula_20, which implies formula_21 where the integral is over the boundary. This is the quantum analog. Now, let's assume even further that Q is a local integral formula_22 where q(x)[φ(y)] = δ^(d)(x-y) Q[φ(y)] so that formula_23 where formula_24 (this assumes the Lagrangian only depends on φ and its first partial derivatives! More general Lagrangians would require a modification of this definition!). Note that we are NOT insisting that q(x) is the generator of a symmetry (i.e. we are NOT insisting upon the gauge principle), but just that Q is. Let's also make the even stronger assumption that the functional measure is locally invariant: formula_25. Then we'd have formula_26 Alternatively, formula_27 The above two equations are the Ward-Takahashi identities. Now, for the case where f=0, we can forget about all the boundary conditions and locality assumptions; we'd simply have <Q[F]>=0. Alternatively, formula_28 An example: φ⁴. To give an example, suppose formula_29 for a real field φ. Then, formula_30. The Schwinger-Dyson equation for this particular example is: formula_31 Note that since formula_32 is not well-defined (formula_33 is a distribution in x₁, x₂ and x₃), this equation needs to be regularized!
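The displayed equations above appear only as placeholders in this copy. As a rough guide, assuming the conventional real scalar φ⁴ action with ℏ = 1 and metric signature (+,−,−,−) (an assumption about what formula_29 denotes, and one common sign convention), the general relation and this example read approximately:

\[
\left\langle \frac{\delta S}{\delta\varphi(x)}\,F[\varphi] \right\rangle
  = i\left\langle \frac{\delta F}{\delta\varphi(x)} \right\rangle,
\qquad
S[\varphi] = \int d^4x\,\Bigl(\tfrac{1}{2}\,\partial^\mu\varphi\,\partial_\mu\varphi
  - \tfrac{1}{2}m^2\varphi^2 - \tfrac{\lambda}{4!}\varphi^4\Bigr),
\]

so that, choosing F = φ(x₂)φ(x₃),

\[
\Bigl\langle \bigl(\partial^\mu\partial_\mu\varphi(x_1) + m^2\varphi(x_1)
  + \tfrac{\lambda}{3!}\varphi^3(x_1)\bigr)\,\varphi(x_2)\,\varphi(x_3) \Bigr\rangle
  = -i\Bigl(\delta^4(x_1-x_2)\,\langle\varphi(x_3)\rangle
  + \delta^4(x_1-x_3)\,\langle\varphi(x_2)\rangle\Bigr).
\]

This is a sketch for orientation only; the exact signs and factors depend on the conventions used in the original formulas.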
Aros/User/Docs. What is AROS. AROS is an operating system: one of the intermediate layers between the computer hardware and the user. It is an open-source, clean-room implementation of AmigaOS 3.x that can be run on many different computer architectures. It runs primarily on 32-bit and 64-bit (as 32-bit) x86 hardware, but also on Motorola 68k and compatibles, AMD/Intel x86_64 (work in progress), ARM and PowerPC. This page will cover enough to be able to write the downloaded image to your preferred media, to run a LiveUSB, LiveCD or LiveDVD on your office/home PC ("Live" meaning you can test without changing your existing setup) and, ultimately, to use it. Intel/AMD hardware support mostly covers the years 2000 to 2010; AROS has diminished support afterwards, especially for SATA and USB3 (2014 onwards), which may prevent successful booting. At the moment, AROS is not recommended to be installed on a working machine holding vital data. Instead, installing to its own separate hard disk or USB stick is a much better option. AROS is a hobby OS and can co-exist with Windows(TM), MacOSX(TM), Android(TM) or Linux(TM) and act as an alternative. Unfortunately, AROS has few developers, so upgrades and improvements can take time to appear. The AROS core is now ~80% finished and is usable, but keep in mind that the software is still considered ALPHA/BETA and in development. Currently AROS is fun to play with on a curiosity level, but it is also interesting to program. AROS has some multimedia features and has internet access. Most importantly, use AROS to its maximum potential as it stands now, find ways to have fun with it and share your experiences. Good sites to visit. Distributions aka Distros. For end users there are distributions (ready-made with many apps, intended to be easy to use), mostly created and maintained by one person in their own workflow/style. AROS Native is the term coined to describe AROS being run without any OS underneath it; it runs alone just like AmigaOS(TM) did. As this version does not benefit from "Hosted" drivers, dedicated ones have to be ported/written, hence the smaller range of supported hardware peripherals. We have other pages highlighting this support. AROS was originally developed on Linux running on an Intel-based computer, but it can now be run as an app on several operating systems (FreeBSD, Linux and Windows). This may sound strange: an OS running on top of another OS. Basically, this is to take advantage of existing Linux or Windows drivers (audio, internet, graphics, etc.) and compiler environments with which people may already be familiar. The term we use for this way of running AROS is "Hosted". AROS is open source, so basically everyone can take part. The source is public and there are new commits daily; based on these commits AROS is automatically compiled every day, and the results are the nightly builds which you can see and download. The nightly builds (NB) are only used for testing changes, testing software, and as the starting point for distribution maintainers or even your own distribution. They are very basic, miss some functionality and apps, and are not suited for end users. There are two ABI standards: ABIv0 (old) and ABIv1 (newest). Media.
Tiny AROS 22bc993625b7c75b17263c0cc7e7baaa *Tiny Aros_copy.vhd (March 2024). There can be a .vhd image inside the zip that can be written to USB sticks, which is much faster than the old ISO method. This .vhd can be written to a USB stick with these pieces of software: Windows - Raspberry Pi Imager (use "custom" and show all files), Etcher, Rufus (up to version 3.20 may work with VirtualBox .vhd hard-disk images under Windows 7, but the newer 4.x versions do not); Linux - Raspberry Pi Imager, Ubuntu/SUSE Image Writer / Multi Writer, dd; Mac - the Icaros 2.3 USB image needs a header stripped so it can work correctly: dd bs=512 skip=1 status=progress if=icaros_light_2-3-0_pendrive.bin of=icaros_light_2-3-0_pendrive_OK.bin && sync You can use a virtual emulator like VirtualBox or VMware to mount the ISO image, which can then be used to boot and install to USB. Previously the only installation option was CD-RW or DVD-RW, since the whole system can be burnt onto a single disk and the disk can be reused when the next version is released. Good branded discs like Taiyo Yuden (JVC) or Verbatim should be used to reduce frustration later. The days of this media are gone, but the information is kept here for reference. Since nobody currently sells AROS on any other media, you will need access to a CD/DVD burner to create the installation disk yourself. Once AROS is on a CD or DVD, writing to USB pendrives becomes available as well (this should now be viewed as an alternative method); use USB sticks from good manufacturers like SanDisk, Kingston, etc. rather than no-name brands. Try burning the ISO to a CD-RW or DVD-RW using your CD/DVD burning program (most burning software has a "burn ISO" option). The ideal writing speed is 2x or 4x; higher speeds can give errors and problems. Check the writing integrity of your CD or DVD, if your software has an option to do so, before going any further. For ARM Pi AROS, copy the files onto a FAT32-formatted SD card. Booting. The LiveUSB (and, in the past, LiveCD/LiveDVD) is designed to trial (test drive) various operating systems without having to install them to your working system. You may have to press F9, F10, F12 or p on boot-up to bring up the device boot options (USB or CD/DVD). The boot should be fully automatic, and if everything works you should see a multiple-choice graphics card screen after a little while (seconds for USB; CDs and DVDs can take a little more than a minute). Since 2011, the introduction of UEFI to replace the BIOS has made booting more confusing, and some changes may be needed. If you're having boot issues and have a null modem cable and a spare PC, a boot log is always useful: edit your GRUB line to include debug=serial. You can also try adding sysdebug=all to the line later, but it can cause issues booting on some machines (it corrupts the CPU initialization). For virtual machines (VMware, VirtualBox, etc.), attach the ISO image and press play to start it. If booting hasn't worked, it could be down to BIOS/UEFI settings or USB3 (2014 onwards). PCITool can show if the motherboard chipset is in IDE mode: Class = 0x01 means STORAGE, Subclass = 0x01 means IDE. Also, ProductID 0x3a20 resolves to non-AHCI mode in the Intel ICH10 documentation. nvme.device. In development. Since 2018, NVMe drives have been standard on most machines (bridge chips such as the JMS583 and Realtek RTL9210B exist). AHCI.
AHCI started taking over from IDE on a lot of machines from 2011 onwards. Editing these settings should be avoided until AROS has better AHCI SATA support; AHCI SATA can be very difficult to get working. Most Windows installs are already set to AHCI SATA, and if you change this to a legacy IDE mode setting, Windows will not boot again until it is changed back. As far as hardware goes, on a newer machine with an NVMe drive you may need to add NVME=disable, as the NVMe driver could potentially cause lockups. Machines with USB3-only chipsets won't enable legacy ports (USB2, USB1, etc.) without the XHCI drivers. You could buy a USB2 card, like the 4-port Moschip MSC9990 PCIe version, but it has flakey initialization, so it works best if you use it with an external hub. With a 16C/32T chip machine, disable SMT and it should boot. ata.device for old BIOSes. Pre-2010 the BIOS was the de facto standard method of providing settings to the computer at a lower level. Some adjustments to the BIOS setup options are necessary (the setup is usually reached by pressing a key like DEL, F1, F2, F12, ESC or p very early in the boot-up of the computer). Save the changed options at the end. Advice for various machines. Some of the stages involved are shown on the display in a typical AROS boot start-up. AROS's native SATA/AHCI driver doesn't always work. If you get errors related to ahci.device, try disabling it: at your chosen boot entry in the GRUB menu, press E, scroll down to the ahci.device entry, and add a # or ; at the start of that line, or delete it with Ctrl-K. Then press Ctrl-X or F10 to boot. If your disk isn't accessible at all with this change, you might need to change the SATA controller to IDE legacy mode in the BIOS; however, making this change will likely cause problems booting Windows on the same machine (if it's already installed). To disable ahci.device permanently, edit the text file "SYS:Arch/pc/grub/grub.cfg" and remove the ahci.device line from all boot entries you intend to use. If a "SATA AHCI: Timeout while waiting for device to complete operations" message appears with the BIOS SATA entry set to AHCI mode, and booting stops at the "waiting for bootable media" screen, changing the BIOS SATA setting back to IDE mode may allow it to continue booting. The ATA driver doesn't always work either. If you get errors related to ata.device, try using the alternative in SYS:Devs/Alt, which is an older version: press E when your chosen boot entry is highlighted in the GRUB menu, scroll down to the ata.device entry, and change it to read "module /Devs/Alt/ata.device". Then press Ctrl-X to boot. To make this change permanent, edit the text file "SYS:Arch/pc/grub/grub.cfg" and change the path to ata.device in all boot entries you intend to use. Further options (remove the quotation marks) can be added to the GRUB menu entries to disable certain other components for debugging. Other useful GRUB command line options: nomonitors, noacpi, vesahack, nopoll. Press Ctrl and X together (or F10) to exit and boot with the new options. Just experiment with different variations until successful. Those working options will need to be re-entered on every reboot of AROS until you can edit grub.cfg and make them permanent, i.e. install to hard disk or USB. If all else fails, try a nightly ISO build and add the option sysdebug=all to the GRUB line, as that is able to report more feedback. However, if you feel you have found a genuine bug/fault in AROS that needs attention, please use the bug submission form to record as much information as possible about what happened, why, and what hardware etc.
you have, so that people may try to assist you. Installing. We have a separate section here, and a specific section for each CPU platform under "Specific platforms" in the NavBar navigation bar on the right-hand side menu. An error code (-6) when the ahci.device is enabled indicates a problem writing to disk (reading still works); change this line in your grub and reboot. File structure overview. AROS' directory structure is mostly identical to the AmigaOS directory structure, with some additions. AROS: or SYS:, also known as DH0: (i.e. the drive partition with the AROS system), has the following simplified list of the main drawers (the Amiga term for directories/folders). See the DOS manual: Drives, Files, Assigns, Directories. Filesystem. Whilst the kernel is the heart, the filesystem is the blood of the system... There are filesystem options for AROS to install to, and other filesystems for storage purposes. PFS *minimises* the amount of fragmentation, but does not automatically defragment as it saves files to the drive. SFS tries to do exactly the same thing, but in certain cases it doesn't do as well as PFS; on the other hand, SFS can be defragmented. The only filesystems that really NEED defragging are from Microsoft(TM): exFAT/VFAT/NTFS. The Smart File System (SFS) is a journaling filesystem used on Amiga computers and AmigaOS-derived operating systems. It is designed for performance, scalability and integrity, offering improvements over standard Amiga filesystems as well as some special or unique features. SFS is written in C and was originally created and released as freeware in 1998 by John Hendrikx. After the original author left the Amiga scene in 2000, the source code to SFS was released and its development was continued by Ralph Schmidt in MorphOS. Its development has now forked; as well as the original Amiga version, there are now versions for MorphOS, AROS, AmigaOS 3 and AmigaOS 4, which have different feature sets but remain compatible with each other. The versions for AROS, AmigaOS and MorphOS are based on different branches. In addition, there is a driver for Linux to read Amiga SFS volumes, GRUB natively supports it, and there are free drivers to use it from UEFI. The Linux version is independent code. SFS (Smart File System) partially defragments itself while the filesystem is in use; the defragmentation process is almost completely stateless. The AROS SFS version has a 120GB partition size limit on hard disks, and DVDs currently have a 4GB size limit. The sources for the MorphOS 64-bit version of SFS were available, but no porting to AROS has happened so far due to endian issues, etc. SFS tools: sfscheck dh0: seek purge fraglist defragment. If there are two simultaneous file writes in progress and you reboot the machine (or it locks up or crashes), you may end up with a corrupted filesystem. Although arSFSDoctor may help, you might have to copy the files to another partition, format the partition with the errors on it, and copy the files back. A bit error on the hard disk would give this error. PFS/SFS are far more advanced and much faster than FFS; FFS is supported for legacy reasons only. The Professional File System (PFS) is a filesystem originally developed commercially for the Amiga and now distributed on Aminet with a 4-clause BSD license. It is a compatible successor of AmiFileSafe (AFS), with an emphasis on added reliability and speed compared to standard Amiga filesystems. It also features multi-user abilities like the older MuFS.
PFS has many advantages, including the important things: speed; the ability to recover all deleted files (even ones deleted under the same name) via the hidden ".Deldir" directory (convenient if done in Directory Opus), where virtually deleted files can be copied back normally as if they had never been deleted; and the convenience of never invalidating the filesystem - just put the command "diskvalid" at the top of the startup-sequence, and it automatically corrects any irregularities at system startup. PFS also provides a device for floppies which makes them very fast and takes advantage of the full capacity of the floppy, including the area dedicated to the bootloader. The device is split into two main areas. At the beginning of the device is the metadata section, which consists of a root block and a generic array of blocks that can be allocated to store metadata. The rest of the device is another contiguous generic array of blocks that can be allocated to store data. The metadata section usually uses a few percent of the device, depending on the size of the device. The metadata is stored as a tree of single blocks in the metadata section. The entire directory structure is recorded in the metadata, so the data section purely contains data from files. The metadata describes the location of data in files with extents of blocks, which makes the metadata quite compact. When a metadata update occurs, the system looks at the block containing the metadata to be changed and copies it to a newly allocated block from the metadata section, with the change made; then it recursively changes the metadata in the block that points to that block in the same way. This way, eventually the root block needs to be changed, which causes the atomic metadata update. The filesystem is reasonably good at keeping files unfragmented, although there is a defragmentation tool available which will work on an online filesystem, i.e. whilst it is being used. It was the first filesystem to introduce the concept of the Recycle Bin natively at filesystem level to the Amiga, holding the last few deleted files in a hidden directory in the disk root. PFS version 5.3 was developed in C, with a small portion of assembly code, by Michiel Pelt. There are endian issues to be overcome, and the small amount of m68k assembly needs adapting to C, before it can be used on Intel-based machines, etc. Auto-update of files in a directory is already implemented in Wanderer, but not all file systems handle dos.library/StartNotify() to its full extent. It seems to work correctly in the Ram Disk (thanks to the AmberRAM handler), and it also works on SFS-formatted devices. Other file systems might not have it implemented correctly yet. The PC equivalent of the Amiga's RDB is the master boot record (MBR). Installing Applications. The typical means of installing applications under AROS/AmigaOS is simply copying/extracting the archive (.zip .lha .rar .tar.gz) containing the application's files to your own desired location, i.e. drawer/folder. Once extracted, launch it by double-clicking on an icon (recommended) or using the shell (alternative). Generally, this is on a separate partition from your AROS system files; however, in reality it can be any location - including RAM: if you don't want it staying around too long, especially when you switch off.
At some time in the future it may be desirable for AROS to have a package-manager like subsystem able to retrieve information online about packages available for AROS and whether they update anything you currently have installed, however at the moment no such ability exists. User Data files. AmigaOS has no notion of a default location to store user data files, and presently neither does AROS - though it may be desirable at some time to provide a common start location. For most people, extra small FAT32 NTFS partition(s) as well as the usual Sys: (DH0:) and Work: (DH1:) / Briefcase (DU1:) partitions to store data seems preferable. Especially if a reinstall is ever needed. User Environment configuration files. AmigaOS/AROS stores persistent system configuration data in directory assigned to ENVARC:. This, by default, points to SYS:Prefs/EnvArc. During boot a copy is made to another assign, ENV:, which is for runtime usage. Changes to the files here will not survive a reboot. Setting the env variables is generally done by applications themselves, or when neccessary by the user using the SetEnv command. SetEnv has a SAVE switch to force the persistent copy in ENVARC: to be written also for when you are sure the change should be permanent. Under the standard installation of AmigaOS style OSs, ENVARC: is copied to ENV: upon startup, which, if you have a hard drive installation, is in RAM:, hence, ENV: ends up being RAM:Env. ENVARC: is the Environment Archive, which is the permanent copy of ENV:, which is the Environment. It's roughly like the Registry in Windoze. Most programs do (and all should) store their settings in ENVARC: somewhere, and load them from ENV:. The effect of this can be seen in the Preference editors. If you Save your preferences, they go in ENVARC: and ENV:. If you click Use, they only go in ENV:. If you reboot, normally, anything saved to ENV: is lost, and is replaced with a copy of what is in ENVARC:. Drivers. All hardware support is placed in the Devs drawer (folder/directory). The network drivers <something.device> go in the Networks sub-drawer. Audio drivers <something.audio> are put in the AHI sub-drawer. Graphics drivers <something.hidd> are put in the Drivers sub-drawer. Configuring. AROS has mainly decided on a MUI-like requester/menu/ clone so changing the background, icons, font, menus can be done with SYS:Prefs/Zune AROS has several desktop GUI front ends like File / Directory managers like Dopus4, MCAmiga, App Launch Shortcuts like FKey, Amistart, BoingIconBar, right mouse click on magellan, wanderer desktop, etc General usability decisions - Prefs/IControl, Most apps can be autostarted by copying into SYS:WBStartup directory folder e.g. WeatherBar.zip can be downloaded, unzip and the contents of the zip copied to wbstartup folder ClicktoFront and .info to SYS:WBStartup so always be activated when turning on the computer or add a text line to user-startup is SYS:S (scripts version of wbstartup) e.g. 
standard Amiga / AROS does not allow clicking of background windows to come to the front to make it easy to get to the window you need but it has the ability if these apps are copied again to WBStartUp or are added to SYS:S/user-startup script run QUIET sys:Tools/Commodities/ClickToFront >Nil: run QUIET sys:Tools/Commodities/DepthMenu >Nil: run QUIET sys:Tools/Commodities/Blanker seconds=300 >Nil: Exchange controls Commodities and can be opened with alt, ctrl, h Although there are heaps of docks, menus and other launcher programs on the Amiga like OSs, FKey has got to be one of the quickest and less complicated ways to launch programs, and it comes with the OS. In SYS:Tools/Commodities, the FKey commodity (Ctrl Alt F) allows you to make actions assigned to some combinations of keys e.g. If your FKey GUI pops up when you start your Workbench up and you don't want it to, click once on the icon, go to the Icons-Information in the menu and make sure it has the tooltype set "CX_POPUP=NO". Now let's launch it and assign the locale switching. After you double-click on FKey icon, launch the Exchange, choose the FKey from list and click the Show button. This will invoke the FKey window. You can see the ALT TAB in list assigned to window switching. Now enter the first key combination, say, ALT Z and go to the right panel. Choose Launch the program from pulldown menu and enter SYS:Prefs/Input as an argument. Append the USE switch and english preset name to the string as shown: SYS:Prefs/Input USE SYS:Prefs/Presets/english Click on the New Button to add the another combination. Now set the combination for your locale as shown above, replacing English name with your preset name. Click New button again and then Save Settings. Now you can use defined combinations to switch the layouts. Although not needed by most users, the system wide ARexx script capability can manage many file manipulation task(s) but this would work only with those program that support ARexx like the shell can be modified with escape strings but not needed in most cases Common Keyboard Shortcuts LAmiga and together with arrow keys - shift as well at the same time as well to move faster LAmiga and LAlt to select LAmiga and M or N Can sometimes be mapped to F11 but can be changed via FKey DOpus 5 Directory Magellan. Dopus 5.x is a whole desktop replacement on the Amiga Workbench (Desktop). Left mouse button clicked twice on the desktop background brings up the Device List window. Green strip notifies SRCE (source) and if another is open it may be red for DEST (destination). clicking on the red strip changes to green Word list of actions with a left mouse click on the DOWN Arrow and directory stuff with < button next to it single-key hotkeys? exactly the same as in dopus4, edit your functions (button bank, toolbar, menus etc.) and under the flags gadget is a key gadget, just click in it and press the key you want to use. As for the extra text field... try turning off Extended lister key selection in environment / miscellaneous. Settings (Right Win key together with 4) -> Themes i.e. assign D5THEMES: DOPUS5:Themes Each Dopus5 theme are stored in a separate directory, named appropriately, which contains further sub directories Shift and click on the icon - runs the icon arcdir arexx / dopus5 scripts see dopus5/arexx/ folder Just use wildcards in background filenames and you get different pics in reboots! 
For example, configure in Environment -> Backgrounds -> Desktop something like this: If you want change the bg backdrop pic in runtime after some time, an arexx-script for it (paste it into a text file called dopusrandbg.rexx or dopusrandbg.dopus5 If you don`t want to use/open rexxsupport.library just for DELAY() then use the DOS Wait command It's WB ARexx interface, you could enter a cli command as a menu item to open a WB drawer like this... RX "address WORKBENCH;WINDOW 'device:drawer' OPEN" Where device:drawer is replaced by the path of the drawer to open. The ARexx script would be capable to manage such a task but this would work only with those program that support ARexx DirOpus 5 Magellan, discussions, src code, Wanderer. Backgrounds icon text sizes, colors, etc with wanderer prefs in the prefs drawer but cannot use #? or *.* in the backgrounds file entry to randomly choose pictures Provides a way to hide the old Workbench 3.1 style of windows and screens. Themes - SYS:Prefs -> Appearance The default content of Prefs/Env-Archive/SYS/themes.var should be "themes:ice" but can be changed via the theme prefs, please do NOT click the Use button. Its useless. As you know, it will ask for the theme volume. Just pick the theme you want, click on Save, then reboot. You could check if you find SYS:System/Themes or if it is missing. Then you could open startup-sequence which you can find in drawer "S". There should be a line: Assign THEMES: SYS:SYSTEM/THEMES >Nil: This does the trick. Open a shell and run: Assign THEMES: SYS:SYSTEM/THEMES Than start the Theme prefs again... this should work ALua/Zulu script built for faster Wanderer skin management. You can modify config files, install new (wdz format/zipped skin files) and delete skins via the Theme Manager. Global.Prefs Scalos. AROS One may have it under SYS:System/ as part of Deadwood contrib builds, deadwood github src, very old version. Please run Prefs:Scalos_Menu first and Save settings ManualScalos expects Prefs to be in ENVARC: Please ensure you copy Scalos:Prefs Scalos:Storage/Envarc to SYS:Prefs/Scalos and copy which language you need to SYS:S/user-startup Scalos's Prefs (right mouse button, Drawers, Scalos Prefs) and double click Scalos_Prefs app icon the other prefs - Scalos_Menu, Scalos_FileTypes, Scalos_Palette, Scalos_Pattern - are smaller parts of this one preference app Prefs cover these subject areas Scalos_Prefs - Pattern - Minimum options to changed are Pattern List tab Page - Allows you to compile a list of pictures (one at a time rather than a whole folder eg with #? or *.*), assigning a number Nr to one or more of them for easy reference. Using this number you will be able to assign the pictures to specific windows on the Defaults tab Page. If multiple pictures have the same number, one of the pictures will be chosen randomly. This will allow you to have random desktop pictures, random window backdrops etc. Defaults tab Page - Here you can set the defaults for the background pictures in Scalos. Randomize every time [check box] - Usually Pictures with the same number will be randomly selected as soon as the configuration loads. If this option is set, the picture will be selected as soon as a window with the same number assignment is opened. Popup Menu preferences fully configurable menus (includes ToolsDaemon and ParM launch apps import), including support for context-sensitive Popup menus configs for top pull down menus for apps, etc. 
right mouse button - Scalos_Prefs, Menu, New Menu, New Item, New Command add name at top then in Command Properties eg. add Workbench then underneath add app location e.g. DOpus:DOpus4 Each theme drawer (folder) has further folders Shutdown -> right mouse button Scalos, About, Reboot, Shutdown ToolTypes can be added to the Scalos.info icon like For the RAM Icon, to obtain this you have only to copy the icon in the Icon Path as "RAM.info" or "Ram Disk.info". All functions will automaticallly be performed Scalos works also as a Workbench replacement. In this case the 'emulation mode' has to be set. Changes if 'Emulation mode' is on: dimension of the new window.You *MUST* have set GUIGfx on. asyncron layout: Pictures will be loaded and rendered while the windows opens (Like original Workbench). If this function is 'off', pictures will always be loaded before opening the windows. memory for best speed. This option has no effect if V43 picture.datatype or GUIGfx are used. Always relayout: If "Fit size" is set, the picture will be scaled everytime the window's dimensions change. number will be randomly selected as soon as the If more pictures have the same number, one of them will be chosen randomly. Patternlist New/Delete : Add a new picture. After that you should assign a number to it. The picture will be rendered as tiles. configuration loads. If this option is set, the picture Asyncron-Task priority: You can set the CPU prioriry for the Task if "asyncron layout" is set. Desktop: Number of the Picture for the main window. Screen: Number of the picture for the Scalos-Screen. Window: Number of the picture for the Scalos-windows. TextMode: Number of the picture for the Scalos-windows in Text Mode. The Program wil be started from the Shell. If "WB Args" is set, with the Argument "%p" will be replaced by the path of the activated Icons. The Program will started with the specified Stack value. IconWindow: Scalos opens the window of the specified path. PlugIn: Starts a Scalos Menu-PlugIn. If a Menu Item with empty name is specified, Scalos displays a separator line. It's possible to Drag&Drop an Icon in the Configuration Window. All values will be set accordingly. Entries may be dragged across the list. Mac-like selection : This function aktivates a multiselection method used on MacOS or Win95. If you've selected multiple icons you don't have to hold down shift to drag them. Clicking on an already activated icon will not deselect all other icons. MMB move: The window contents may be moved using the middle mouse button. WindowPopup title only: PopupMenu for windows can be opened only on window's title bars. FullBench: Screen-Titles removed and Main Window set Full Size. Default Icons saveable : The icons which Scalos generates if "show all files" is enabled, can now be saved using "snapshot" menu option. load DefDisk first : Try to read the icons first from the DefIcons Path before using disk info. Hide hidden files : If this function is activated all files or directories where the "hide" flag is set will not be shown. Many of my Icons display more than once on the screen, while on the workbench all seems ok. The Workbench filters double displayed icons, Scalos does not. Solution: please edit the ".backdrop" file and clear double lines. Background images not scaled. GUIGfx option not set or guigfx.library and/or render.library not installed. If working with CD's causes crashs or Scalos doesn't work correctly. Most Filesystems doesn't support the ExAll function correctly. 
Disable "Use ExAll" in Scalos prefs. Scalos needs much chip ram for every window. Scalos needs normally more chip RAM than the WB, but IPrefs loads it's patterns too. Remove all pictures in WBPattern. Scalos doesn't start any program in the WBStartup. WBStartup Path may be set wrongly or Scalos not started in Emulation Mode On Scalos x86 native doesn't start any programs from the WBStartup drawer at the moment With the help of the wbrexx.plugin Scalos gains support for more of the compatible arexx API If an arexx command produces an error you will find the error code placed in the WORKBENCH.LASTERROR variable. ACTIVATEWINDOW CHANGEWINDOW need a def_ icon with the same name predefined, then create an appropriate entry in the list and rename it, if def icon exists it is shown. Below this can define how files are identified. Then click on the shown icon and define in it what program is used when you double-klick on it and save it. On the tab action you can define popup menu for it. All in all handling is of course different to magellan but can do similar DOpus 4 Directory Opus. Copy DOpus4 app to WBStartup directory folder so it starts on boot up each time Another method is add the below to the bottom of the user-startup script in S: drawer/directory makes DOpus starts up in Iconified state at the top of Wanderer's screen. Left click on this to highlight and right mouse click to open. Just click on the sides of either outer edges of DOpus windows and it will display the parent device/volume list. DOpus saves it features in a CFG file which can be edited to suit anyones' needs by reading the Dopus Manual which is in Guide format. AmiStart. Auto generates the apps menu but scans the drive each time - AmiStart can choose apps you are not interested try a right-click on AmiStart and release on Global settings. Then click on the bubbles gadget. Move the Show Bubbles slider all the way to the left. BoingIconBar. User chooses the apps to add to the dock at the centre bottom of the screen but has to be done manually, please use Save afterwards right mouse click on bottom edge of screen where boingiconbar shows - select settings which opens BoingIconBar Preferences to add apps If no dock showing Add, to add apps click Add Program and search for the executable another method is to drag icons to ends of the bar and move them on the Bar using the Prefs/BoingIconBar Icons. Icons are typically now .png pictures renamed as .info e.g. so Office application name would have a Office.png renamed as Office.info or MyApp.png as MyApp.info, etc. Leave Out menu option to leave app icon on desktop To select multiple icons and save their positions, click on the first icon and after while you hold the Shift key down select further icons and don't release it before SnapShot is finished. You can also select a whole group of icons by pressing the LMB at the top left of the icons and while keeping the LMB down moving the power towards the bottom right. A expanding bounding box will appear and all the icons within it will be selected. Clean Up menu option (right mouse button -> Icons) rearranges icons in a drawer or disk window into a neater condition. To use, open the window to rearrange and select Clean Up. To keep the icons in the new positions, select all the icons (shift key or mouse selection) and select 'Snapshot' and then Window and then again with All. In DOpus5, Saclos, wanderer, most files have a icon file associated with it. 
To change the default tool, select Icon menu, Information, and change the default tool string. For example, you could use Multiview, Editor and so on for most text, graphics and some sound files as long as the appropriate Datatype classes are installed. For scripts, set the tool to C:IconX C:Join Image1.png Image2.png TO MyFile.info is enough to make a dual state icon from two png images. You can then use Wanderer's menu Icon/Information on it to edit its fields and tooltypes. Amiga OS 3.x AfA icons thread, Later DualPNG and OS4 icons thread and Alternative Icons sets like ClassicWB AISS toolbar images unpack unarc them into RAM: and copy Images directory to SYS:Prefs/Presets/ AISS icons are looked for in PROGDIR:, PROGDIR:Images, SYS:Prefs/Presets/Images and then in TBImages: according to Open Amiga guidelines. there is Demos/iconscale which could be launched from S:User-Startup with two arguments, telling it the horizontal and vertical size. IE something like Demos/iconscale 40 40 It will shrink icons... not sure if it will be very nice though. it doesn't work for the icons on the main desktop. there is an option to scale an icon to a bounding box afair, try iconsize followed by two numbers, like: iconsize 32 32 Is there any way in AROS to change an icon type from Project to Tool or vice versa? Either the SIT option of ProcessIcon, or the TYPE option of HandleInfo (not sure if this one works at all, please test with care). processicon sys:pathoftheicon SIT=Project SIT Set type of ICON. Allowed types are: "Disk", "Drawer", "Tool", "Project", "Garbage", "Device", "Kick" and "AppIcon". Btw, are your icons, the #?.info files, writable, is the W flag set ? Fonts. Install the #?.ttf files to SYS:Fonts/TrueType. Use SYS:System/FTManager to "Install Font" each #?.ttf file which will generate associated #?.otag and #?.font in SYS:Fonts. Use SYS:Prefs/Fonts to change system fonts and SYS:Prefs/Zune to change others. To achieve our goal we will use the Setup Locale, Input, Zune and Fonts, as well as The FTManager. Begin The first step you should do is to get the system to know that we speak and write in another language. What you need to do is to open the setup program and choose Locale country, and list "preferred languages" to put it first and then English. If you want the tab "Time Zone" and select city of residence to set the clock correctly. Of course we save our changes and continue opening the setup program Input. This sets the keyboard language as our beginning. When the language layout was created there was no option to switch to Aros keyboard (layout switching), so to write in the language you had to hold down Alt, something you encounter in other functions, such as AmigaOS 4 and MorphOS. This time working with the team of Aros to create a new keyboard layout to replace the old so we can get rid of the button Alt. For now though let only selected this layout and do not turn the switch on the keyboard. On the occasion of the article, write in the comments below if you'd prefer to keep both layouts, the new words that are in development and this requires the Alt button pressed. Installing fonts In this step you need to download some fonts that can support the Greek encoding in our system. The easiest way is to run the script "Download Fonts" you'll find in the folder AROS: Utilities / OWB. This script downloads from the Internet, and unpacks some fonts for OWB, which is placed under the folder Fonts: TrueType. 
But as these can only be used by OWB and not the system, which unfortunately does not see. To make them available to the rest of the system, open the program FTManager, you will find the folder AROS: System /. From there select the field "Codepage" option "ISO-8859-7" and list the font "Arial" and "Regular" form in which you must double-click with the mouse. In the window that appears, select the bottom right the checkbox "Anti-aliasing" button and then "Install". Immediately folder Fonts: created files "arialregular.font" and "arialregular.otag", which are necessary in order to see the system font. Do the same steps if you wish for other fonts. Final stages After completing the above, open the folder AROS: Prefs / and run the program settings Fonts. In the new window, select the fields "Icons" and "Screen" as the font "ArialRegular" to the size you want. In the field "System" to give "s_courier", which, however, because it is not True Type Font support Antialising, and may seem a little broken. You can also use the CourierNew, if you have installed the above procedure. After you save the changes and open the Zune program settings. In this set the "ArialRegular" font fields in tabs "Windows" and "Groups", and save the changes. Reboot the system. To make sure that the above worked properly run NoWinED, which you will find under the folder AROS: Tools /. If that everything is working correctly you will see the menu and the settings window with Greek letters. You can also write in the language using the button Alt. Second program that you can try, which is fully localized, is WookieChat, which you will find in the folder AROS: Extras / Networking. And in this place all the menu and settings window works. Windows. The window you position and resize, you right click on that windows title bar and in the dropdown menu you snapshot from there. Right click to show menu -> Window -> Snapshot Windows or All but it will NOT work if that folder has no icon (e.g. Disk.nfo) attached to it. You need a folder icon. The window information gets saved in it. As for maximising the window using a shortcut key - Alt and up arrow key The AROS-Shell windows can be moved, resized by editing sys:s/icaros-sequence run QUIET c:newshell con:0/150//300/ >NIL: run QUIET c:newshell con:600/150//300/ >NIL: Magic Menu type functionality is implemented in IControl preferences editor: in the frame called Menus, switch type from Pull-Down to Pop-Up and/or iControl just tick the sticky menu option. Windows outside screens causing a problem either uncheck "Offscreen move" for windows in IControl prefs editor. Or use FKey commodity and define two key shortcuts: Now you can cycle windows until the one you want to rescue, and then "rescue" it: it will move back inside your screen. How to save the window size on wanderer (snapshot all, snapshot windows) Same for icon position on wanderer, can't save the position. Icon position cannot be saved yet, but you should be able to save the window position and size. sys:prefs - wanderer icon has option to save window size on exit but just for dh0. To get saving working on (DH1: Extras:) partitions try deleting the dh1 disk.info file, then reboot. The system should create a new dh1 icon. As for viewing all files, removing disk.info for that disk did the job sys:Extras/System/Scout can kill apps sys:Tools/Commodities/Exchange can remove available commodities If you're using Icaros, go to the theme prefs and make sure that decoration is checked. 
Also, some themes do not use a parent button, so try another theme. You may have to restart Aros before the theme will change. yes you can turn off the computer IF none of the drives are in progress (i.e. writing). Best to use Wanderer menu option Quit otherwise Printing. This is still work in progress print from my AROS box! It's a bit complicated but it works! Best to set Printer Prefs in the Prefs drawer to print-to-file or parallel/USB port Save document in postscript or convert picture/text to postscript Print using compatible Ghostscript printer or Postscript printer Some work has been done Files. File endings and datatypes. For instance, to open PDFs with arospdf not localised in the default drawer of Icaros (Work:Extras/Applications/arospdf) but localised in a custom drawer in AROS. The default tools are defined in the icons in sys:prefs/env-archive/sys e.g. def_PDF. File type identification is done by datatype descriptors which you can find in Devs/Datatypes. The AROS build system has a tool which creates such datatype descriptors. Changing of default tools of existing icons is easy as shown above. Adding of new file types is not hard, but needs knowledge of the AROS build system. The enduser way would be to download the attached file, which contained two executables: 1) createdtdesc, to make a new datatype description 2) examinedtdesc, to read/show existing datatype descriptions use 2 to get an idea on how it things are currently done in aros by providing this executable a file from the drawer sys:devs/datatypes/ (alternatively you can find the original .dtd files here). use 1 to make your new datatype. Use the accompanied FORMAT file (also here) to read how to make your own datatype descriptor. use 2 to get hints from other datatype descriptors. Note: When creating a new descriptor would advise against using the pattern property, but instead use the default pattern of #? and create a Mask that matches your filetype. This requires some research in order to discover how your filetype can be recognized properly. Of course with making something like a descriptor for an ascii textfile, you would fallback to using the pattern (e.g. #?.text as the filetype cannot be determined easily otherwise). To enter data your Country or City, ist with city_id numbers can be found here or you need to go to BBC Weather, once you type the name of your city or town in the appropriate tab, and give a enter, the 7 numbers to be added in the "WeatherBar" will appear on the Browser address bar next to the link
Aros/Developer/Docs. A technical overview of AROS. Google translation German, French, Italian, Spanish, Hindi, Chinese, Russian, Polish, Portuguese AROS, like AmigaOS (TM), is a message-passing, preemptive multitasking OS. It uses re-entrant shared libraries to save memory space. AROS is based around an executive library kernel (Exec) and two other libraries: The design philosophies of AmigaDOS and Intuition are rather different, the former adopting a C-like API and the latter creating an object-oriented, message passing aware environment for the programmer. The system base is the only absolute address in AmigaOS (located at 0x00000004) this does differ with AROS as AROS SysBase is automatically provided (no $4) but everything else is dynamically loaded. The OS is well known for delivering high performance due to its close connections with the hardware, while simultaneously having the flexibility to support re-targetable graphics (Cybergraphics) and retargetable audio subsystems (AHI). Remember, AROS is a research operating system, and while all contributions to the base AROS code are welcome, please contact the dev list first for any core changes. Writing applications for AROS does not have this requirement. While AROS appears and feels almost feature complete, it is still missing a small number of functions from the Amiga API - as well as a few implementations of core functionality. There are also a number of user applications that, while not strictly part of the AmigaOS 3.1 environment, need to be written. This thread provides information for setup, documentation and whether you are interested in core OS changes and/or writing/porting software apps The repository for the current development version ABIv1, the repository for current stable PC version ABIv0 with backported ABIv1 features is located which is used on AROS One and Icaros x86 based distros Any bugs / issues can be added We have a slack here, discord on Discord@AmigaDev, also the AROSExec discord server is available Software Development for AROS. Programming languages. Apart from 'The Developer Environment', which primarily supports C/C++ code, there are other programming languages available for AROS: Scripting Basic Misc Where to get the C/C++ Environment. If you want to develop for AROS, its generally easier to be running linux hosted AROS development environment especially for C++, cross compiling the code. That's how most developers now are doing it. g++ is used to compile owb web browser as well as some other AROS software. If you were hoping for a rich set of C++ libraries or classes defined for the OS feature set, you might be disappointed. AROS Native compiling is possible, but you're much more likely to run into the odd bug(s) in the dev environment since it gets little testing and fixing by other developers. Is there a sftp software or scp over ssh available? Nope. There isn't. Would be easier if at least the security part would be handled, porting amissl Cross compilers from other OSs. Pre compiled versions of AROS ABIv0 Linux hosted can be found here And as always has been the case you can use the contrib archive to 'obtain' the development directory which contains the /native/ AROS gcc compiler and tools. That compiler is used to build AROS itself but can be used outside the AROS build process by providing --sysroot with indicated directory to cross compile for AROS. you want to build AROS. No problem. 
Here are instructions for 64-bit: https://github.com/deadw00d/AROS/blob/master/INSTALL.md Here are instructions for 32-bit: https://github.com/deadw00d/AROS/blob/alt-abiv0/INSTALL.md The 32-bit Linux-hosted AROS (and AROS apps compiled for it) runs without problems on 64-bit Linux distributions, and all currently available software will run on it as well. Please install these packages before moving to the next step. Below is a reference list for Debian-based distributions; the reference build system is Ubuntu 18.04/20.04 amd64: subversion git-core gcc g++ make gawk bison flex bzip2 netpbm autoconf automake libx11-dev libxext-dev libc6-dev liblzo2-dev libxxf86vm-dev libpng-dev gcc-multilib libsdl1.2-dev byacc python-mako libxcursor-dev cmake zsh mingw64 Do all of these operations under the home directory of your user, or another directory where your user has write permissions. Specifically, in the section "Linux-i386", be sure first to build the cross-compiler (toolchain-alt-abiv0-i386) and only then AROS itself (alt-abiv0-linux-i386). Clone & build. Now to the build selection below: Linux-i386 - select toolchain-alt-abiv0-i386, then select alt-abiv0-linux-i386 (DEBUG). Start AROS by: Pc-i386 - select toolchain-alt-abiv0-i386 (if not built yet), then select alt-abiv0-pc-i386; the ISO image is available in alt-abiv0-pc-i386/distfiles. Now that we have the linux-hosted build, we can resume the native build (option 2). Run ./rebuild.sh, select option 2, and wait until it has finished. Your compiler is then located in the toolchain-alt-abiv0-i386 directory and named i386-aros-gcc. Includes are in alt-abiv0-linux-i386-d/bin/linux-i386/AROS/Development/include and libraries are in alt-abiv0-linux-i386-d/bin/linux-i386/AROS/Development/lib. This is how they can be passed to the compiler: /home/xxx/toolchain-alt-abiv0-i386/i386-aros-gcc --sysroot /home/xxx/alt-abiv0-linux-i386-d/bin/linux-i386/AROS/Development -L/home/xxx/alt-abiv0-linux-i386-d/bin/linux-i386/AROS/Development/lib ../toolchain-alt-abiv0-i386/i386-aros-gcc --sysroot bin/linux-i386/AROS/Development local/helloworld/helloworld.c -o local/helloworld/helloworld Another Linux option is to use cross-compilers for Linux. Another is to use a Debian-like distro and download the gimmearos.sh script (on AROS Archives) to set up the developer environment by downloading the necessary packages. The gimmearos script is a good start in that direction, building the cross-compilers and the hosted AROS environment, but gimmearos.sh might be out of date or not completely compatible with any given Linux distro. In order to do that, you have to compile AROS yourself. Download the AROS source archive, not contrib. Compile AROS by entering the main directory and running ./configure then make (more on compiling AROS). The result will be a basic AROS system without development tools. To compile C++ on Linux, type 'make gnu-contrib-crosstools', creating the cross-compilers in ./bin/linux-i386/tools/, named i386-aros-gcc, etc. Note: currently, to make the cross-compilers usable, copy 'collect-aros' from tools/ to tools/i386-aros/bin/; at the moment the cross-compilers, if used from the Linux command line, will only find it when it's there. If you want to compile native compilers (the Developer Environment), type 'make contrib-gnu-gcc', creating native compilers in AROS' System:Development/bin directory. When the output needs to be stripped, use codice_1. The Obj-C backend should build out of the box.
Open contrib/gnu/gcc/mmakefile.src and search for the line which contains "--enable-languages" and add "objc" to the list of languages that follows it. --enable-languages=c,c++,objc Better make it—enable-languages=c,c++,objc,obj-c++ ObjC++ is broken as soon as you try to use exceptions, but that might change in future GCC versions and it does not hurt having it there already. Do you need a cross-compiler or a real compiler? In the first case you can get away with just downloading the proper gcc archive, apply the patch and proceed with the normal gcc build. In the case of a real cross-compiler then when downloading the contrib sources, also need to download the normal sources, place the contrib sources into a directory called contrib, need to install autoconf+automake+perl+python, call ./configure, cd into the subdirectory and type make. to rebuild GCC with host == build == target == i386-pc-aros. So just get the vanilla sources and apply the patches without bothering about the build system? pre-configured VM environment vmware virtual machine to develop AROS and AROS software 64 bit First thing you need is Linux system, suggest Ubuntu, with installed cmake and x86_64-aros cross-compiler. In order to get the cross-compiler you need to build AROS (either linux-x86_64 or pc-x86_64, I suggest linux-x86_64) locally on that machine. The build instructions for AROS are available here: http://aros.sourceforge.net/documentation/developers/compiling.php Once you have the cross compiler and make available, we can continue. How to create a AROS dev environment if you want to do that by yourself A hub page to start with Icaros 64 https://vmwaros.blogspot.com/p/64-bit.html 32 bit A good option for multiple OS is AxRuntime lets developers compile their Amiga API-based applications as Linux binaries being able to utilize modern development tools available on Linux, like IDEs, debuggers, profilers, etc Native compilers for AROS. Namely gcc for C or g++ for C++ are supplied with the Developer Environment, which is already setup and part of any current AROS distribution like AROS One, Icaros or the nightlies but not Icaros Lite Currently, the developer environment consists of the following software components. GNU GCC 4.x GNU BinUtils, GNU Fileutils 4.x, GNU Textutils and others. On single partition systems and the Boot ISO, the AROS Developer environment is installed under "SYS:Development/". Systems with multiple partitions - such as a Work: partition - tend to install it to there instead, however it can be installed manually to any location. Please remember, if moving, that you will need to correct the Development packages 'install location' env variable to point to the new locations root - look in SYS:S/startup-sequence. In the aros build instructions. you need to check out contrib and/or ports into your AROS source directory, as subdirs. then, assuming you are building in an external build dir, as you should, you simply configure and "make contrib" for instance or whatever submodule you might want to build. Beginners Tutorials in C C++. As AROS is C based API compatible to AmigaOS 3.x, so most of the information on programming C on the Amiga applies to AROS as well. Please note that there is a lot of AOS 1.3 (not so useful) and AmigaOS AOS 2.x information around as well. Curly brackets missing - try SHIFT + ALT + 7 or 0. 
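As a starting point for the beginner tutorials mentioned above, here is a minimal, self-contained sketch of what a small AROS/AmigaOS-style C program looks like; it combines the ordinary C library with a shared system library opened at run time. The file name and the use of intuition.library are illustrative assumptions only, not part of any particular tutorial, and the program can be built with the native or cross gcc commands shown elsewhere on this page (e.g. gcc -o hello hello.c).

/* hello.c - minimal AROS example (illustrative sketch) */
#include <stdio.h>
#include <proto/exec.h>        /* OpenLibrary(), CloseLibrary() */
#include <proto/intuition.h>   /* DisplayBeep() */

struct IntuitionBase *IntuitionBase;   /* library base used by the function stubs */

int main(void)
{
    printf("Hello from AROS\n");

    /* Open a shared system library the Amiga way, use it, then close it */
    IntuitionBase = (struct IntuitionBase *)OpenLibrary("intuition.library", 0);
    if (IntuitionBase != NULL)
    {
        DisplayBeep(NULL);                       /* NULL flashes all screens */
        CloseLibrary((struct Library *)IntuitionBase);
    }
    else
    {
        printf("Could not open intuition.library\n");
    }

    return 0;
}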
Brief overview of what is required to write AROS Applications Writing native games require this extra information Additional features that could be added later A good introduction to Amiga programming in general, though a little outdated, is Rob Peck's book "Programmer's Guide to the Amiga". The Amiga ROM Kernel manuals: Libraries (3rd edition), and The AmigaDOS manual (3rd edition) are the most generally useful books, the other RKMs, including the Style Guide also have their uses. There were many reference examples from AmigaMail and Devcon notes (available again on an Amiga developers CD). If using Arexx then "The Amiga Programmer's Guide to ARexx" by Eric Giguere, which was published by Commodore is useful as well as an "Arexx Cookbook". When you upload your builds, please write the architecture (like i386-aros, x86_64-aros, etc) in the archive name and it is also advisable to write in the field "Requirements" what ABI (ABIv0 or ABI-WIP or ABIv1) Compiling C/C++ Code. Native, although we have a IDE Integrated Development Environment (Murks), it does lack a debugger. Whilst others use a combination of a text editor and shell to edit code. Most though use an AROS hosted on Linux to take advantage of the better GCC tools like GDB and various IDEs. Open shell - its a menu option at the top left of Wanderer (desktop). Or by using the right Win key and w (or F12 and w) within the directory with the source code. Type in sh to change the amiga shell into a unix shell. You can then type in ls (unix equivalent to amiga dir). Take a look here for more commands. For a single file program-name.c or program-name.cpp gcc -o program-name program-name.c or g++ -o program-name program-name.cpp or g++ -o test -Wall -g main.cc texturelib.cpp xmodelib.cc -lsdl -lgl To close the shell, click on the top left-hand corner to close (twice). Once to get back the aros shell and then again to close finally. Use cksfv as a test. Some source code requires the addition of Amiga API libraries, like dos, which you can flag at the compile time as gcc -o julia.exe julia.c -ldos For DOS use -ldos as example and if you are compiling mui codes it will be -lmui or intuition -lintuition. Other missing symbols are due to linker libraries being necessary for linking in functions that aren't in the standard C libraries. For example some source code would need added -lz or -lm or -lpng or -larosc etc. use this in unix line command mode to search for 'search-item' in many .c files (*.cpp for c++, etc.) grep -l 'search-item' *.c If the program is not executable, try using parameter fno-common How to make Apps have AROS 64-bit specific support code. Portable code AROS64 already uses 64bit addressing, it just doesn't setup the MMU for more than 4GB physical memory currently. When porting software to AROS64 it is "mostly" a case of converting ULONG's that are used to store pointers, into IPTR's instead, etc. Another quirk, is making sure items on the stack are the correct size by using the STACKED attribute for them. compiling mui stuff for aros setting -std=gnu99 is necessary, had -std=c99 usually A MUI application most likely needs only the HOOKPROTOxxx SDI macros. They are compatible with AROS, only the attributes (hook, object attribute) must be given in the right order. Coding conventions. As the AROS core source is a shared developer experience, there are rules regarding structure and style. When it comes to your creating your own app and coding, the structure and style should be your own, i.e. 
you should enjoy what you do and do it so that you can understand what is going on. Layout. static void 1st_function() program exit(0); int main(void) 1st_function(); 2nd_function(); 3rd_function(); return 0; struct Screen * openscreen(void); struct Window *openwindow(struct Screen *screen, const char *title, LONG x, LONG y, LONG w, LONG h); VOID 1st_function(); VOID 2nd_function(); int main(int argc, char **argv) program return 0; } /* main */ VOID 1st_function() VOID 2nd_function() General style. This code is used by many people and therefore you should keep some things in mind when you submit source code: Comments. AROS uses some of the comments in the source to generate the documentation. Therefore it's necessary to keep a certain format so the tools can find their information. Other comments are ignored but they should explain what you thought when you wrote the code. If you really can't think of an explanation, then don't write the code a second time like this: What we think of is this: Formatting. This is only IMPORTANT if you are going to work on the core AROS code or contrib but not applications which may reside outside like on AROS Archives or other websites. /* a */ struct RastPort * rp; int a; /* b */ rp = NULL; a = 1; /* c */ if (a == 1) printf ("Init worked\n"); /* d */ if !(rp = Get_a_pointer_to_the_RastPort some , long , arguments ) a <= 0 { printf ("Something failed\n"); return FAIL; /* e */ a = printf ("My RastPort is %p, a=%d\n" , rp , a return OK; Looks ugly, eh ? :-) Ok, here are the rules: Before committing please normalize the indentation - if you have a mixture of tabs and spaced - please always use spaces, 1 tab = 4 spaces. The reasons for this are: If you have a function with many arguments (d, e) you should put the parentheses in lines of their own and each argument in one line (d) or put the first argument behind the opening parentheses (e) and each following argument in a line of its own with the comma in front. The closing parentheses is in a line of its own and aligned with the beginning of the expression (i.e. the a and not the opening parentheses or the printf()). Use a single blank line to separate logical blocks. Large comments should have a blank line before and after them, small comments should be put before the code they explain with only one blank line before them. If you see any TABS in AROS core sources then the suggestion is to "detab the file and commit that separately" either before or afterwards from making functionality changes. Make two commits instead of one. This makes it easier for others to see the real changes instead of having to dig through multiple lines of irrelevant diffs. Eliminating Global Variables. i.e. pass variables to functions (local scope) or classes making it easier to track and debug your code. Any time you find that you need a particular thing in 'a lot of different places', chances are that all those places are conceptually related, and so you can create a class, a namespace, a function, or some other higher-level organizational unit to represent that relationship. This makes the program easier to understand. Bad Designs Good Designs This way it is very easy to replace the list with new list for debugging purposes, or replacing the methods without replacing the list, when you want different results. You only have to replace the content of the local variables. 
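As a small illustration of that idea (the names and the trivial array "list" are made up for this sketch, not taken from AROS code), the data lives in a structure owned by the caller and is handed to every function that needs it, instead of sitting in a global:

struct WorkSet
{
    int values[16];   /* whatever structure matches your data: list, tree, array */
    int count;
};

static void addValue(struct WorkSet *ws, int v)
{
    if (ws->count < 16)
        ws->values[ws->count++] = v;    /* touches only the set it was given */
}

static int sumValues(const struct WorkSet *ws)
{
    int sum = 0;
    for (int i = 0; i < ws->count; i++)
        sum += ws->values[i];
    return sum;
}

int main(void)
{
    struct WorkSet ws = { { 0 }, 0 };   /* local state: easy to swap out or duplicate for debugging */

    addValue(&ws, 2);                   /* "passing by reference": the functions get a pointer */
    addValue(&ws, 3);
    return (sumValues(&ws) == 5) ? 0 : 1;
}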
So create the structure that matches your data (linked lists, trees, arrays, etc) and what to do with them (sorting, searching, etc) One way to look at it is that menu headings act as the .c file and sub-menu headings as functions. When you start a project, you place a couple of declarations in the include file. As the project continues, you place more and more declarations in the include file, some of which refer to or contain previous declarations. Before you know it, you have a real mess on your hands. The majority of your source files have knowledge of the data structures and directly reference elements from the structures. Making changes in an environment where many data structures directly refer to other data structures becomes, at best, a headache. Consider what happens when you change a data structure. Use good variables names to help clarify code and only comment when you need to explain why a certain programming approach was made. You're Refactoring Legacy Code, you see a global, you want to get rid of it. How do you do this? Exactly what to do depends on how the global is used. The first step is to find all uses of the global throughout the code, and get a feel for what the significance of the variable is and how it relates to the rest of the program. Pay particular attention to the "lifetime" of the variable (when it gets initialized, when it is first used, when it is last used, how it gets cleaned up). Then, you will probably make the global a data member of a class (for OO languages), or you will write some get/set functions. Converting to the Singleton Pattern is common, but you may discover that it makes more sense for the data element to be a member of an existing singleton, or maybe even an instance variable. As returning variables by "passing by value" are forgotten, so "passing by reference" is often used instead. The reference is a pointer to the variable so the value is remembered when returned. Alternatives AROS/AmigaOS APIs and Docs. And, being Amiga OS-compatible, there are exceptions to all of these. System Libraries. The AROS Guide To Libraries can be used as a guide to individual commands and Old Dev Docs are used in application programming. The generated Aros AutoDocs HTML Read or download docs-html.bz2 from here. Amiga/Aros styles libraries are very different from windows and linux libs. Typical .so/dll libraries are foreign to most Amiga-like OS AROS Subsystems. HIDDs. HIDD are used for device/peripheral low level hardware support drivers. The HIDD system is split up into a collection of classes with a strict inheritance hierarchy. A HIDD class implements a device driver for a single device or in rare cases a group of devices and provides an interface for other programs and devices to access. In order to maintain portability of interfaces across a wide range of hardware this interface will in general not present the raw interface to the underlying hardware. Instead it will present a generic interface that describes many different hardware implementations. This allows for the best reuse of both interfaces and code. HIDD API is heavyweight though. You need to open a HIDD library, open oop.library, instantiate an object (even if there's no object); and object calls are more costly compared to plain library calls. Basically your task is to implement a subclass of hidd.ata.bus for your hardware. just implementing the XXXATA__Hidd_ATABus__xxxxxx methods for the Amiga chipset - and appropriate versions of the interface_xxx.c file(s). 
pretty much everything in probe.c could be ignored - just write a replacement scan for relevant amiga devices and store whatever info you need in the bus data? only the "SUPPORT_LEGACY" blocks might be related. You do not need to depend on PCI API. PCI is just a way to discover the hardware on PCs, etc. <hidd/pci.h> includes (at some depth) <interface/HW.h>, which defines IID_HW. This comes from the 'generic' HIDD class in: rom/hidds/hidd/hiddclass.conf did not split up HIDD and HW because they are always used in pair. It's the same as hidd/pci.h bringing definition for: PCI, PCIDriver and PCIDevice. PCI is actually PCIHW, just the name was not changed for backwards compatibility reasons. HW is a 'hub' where HIDD instances plug in. ATA HIDD aoHidd_ATABus_Use32Bit value is completely ignored unless ata.device first detects correct command line parameter. Yes. Unfortunately I was unable to find any comment in code or svn history with explanations. Looked at Linux source, there 32-bit PIO is also controller driver's property. Some of them enable it, some don't. Actually, switching the default to ON should be safe. ata.device is fail-safe at this because during IDENTIFY command it validates upper 16 bits, and if they appear to be zeroes in all 128 longwords, then 32-bit mode is switched off. But, nevertheless, I know how tricky hardware can be, so I decided not to change original behavior. If you think it's wrong in some cases, then it's possible to add one more attribute like aHidd_ATABus_Default32Bit. If set to YES, then this means that 32-bit PIO is safe to use by default. Devices. The Amiga used Devices to communicate with additional hardware. AROS has replaced these hardware devices with hidd equivalents but some are still retained for backwards compatibility. Local libraries/devices/handlers,etc are supposed to override the ones in ROM if their version is higher than the one in ROM. Here is the list of commands exec default: While most other "stuff you communicate with" in AmigaOS are devices that share a common base interface. OS 1.3 Device Drivers. Handlers. filesystem handlers have their own separate system consisting of completely differently structured messages that dos.library use to pass requests (for things like reading, writing, getting directory contents etc.) to them. AROS originally went with implementing filesystem handlers as devices, which might arguably be more consistent with the rest of the AmigaOS API but which is quite incompatible with AmigaOS itself. However, it made it far harder to port filesystems and the gains were comparatively small, and so there's been a long standing goal of fixing this incompatibility. It has now, June 2011, been reintroduced to all AROS flavors. are argstr and argsize valid for the handler startup environment? DOS/RunHandler() calls DOS/CreateNewProcTags(), and then CallEntry() (in rom/dos/exit.c) to start the handler, so yes, argstr and argsize are *present* in the call signature of the handler. Granted, argstr will be NULL and argsize 0, but those values *are* passed to the handler function using: Creating your own [without the whole build tree http://pagesperso-orange.fr/franck.charlet/temp/radeon.zip] and then SFS has two Root blocks, one at the start and one at the end of the disk. The Root blocks both contain the same information. They hold various information about the disk structure and have the locations of some important blocks used by the filesystem. The Root ObjectContainer contains the Root directory Object. 
The name of this Object is the name of the volume. It is identical to a normal directory Object. The Bitmap is used to keep track of free space. Each bit in a bitmap represents a single block. A set bit indicates a free block and a cleared bit a used block. AdminSpaceContainers are used to keep track of space which has been reserved for storing administration blocks. Only the Bitmap, the Root blocks and the actual data stored in files aren't stored in administration space. Administration space is allocated in chunks of 32 blocks at a time. A single AdminSpaceContainer can hold information about a large number of such areas, each of which has its own little bitmap of 32 bits. Extents are stored in a B-Tree. The Root block holds a pointer to the root of the Extent B-Tree. Extents keep track of space in use by a specific file. Each fragment a file consists of has its own Extent. Extents are in a doubly linked list. The list can be used to locate the next or previous fragment of a file. Below is the standard block header. This header is found before EVERY type of block used in the filesystem, except data blocks. The id field is used to check if the block is of the correct type when it is being referred to using a BLCK pointer. The checksum field is the SUM of all LONGs in a block plus one, and then negated. When applying a checksum the checksum field itself should be set to zero. When checking a checksum, the checksum is okay if the result of the checksum calculation equals zero. The ownblock BLCK pointer points to the block itself. This field is an extra safety check to ensure we are using a valid block. Field Type Description id ULONG The id field is used to identify the type of block we are dealing with. It is used to make sure that when referencing a block we got a block of the correct type. The id consists of 4 bytes and each blocktype has its own unique four-letter code. checksum ULONG This field contains the sum of all longs in this block, plus one and then negated. The checksum can be used to check if the block hasn't been corrupted in any way. ownblock BLCK Points to itself, or in other words, this field contains the block number of this block. This is yet another way to check whether or not a block is valid. The algorithm to calculate the checksum of a block is sketched further below, in the recovery notes. A Root block contains very important information about the structure of an SFS disk. It has information on the location and size of the disk, the blocksize used, locations of various important blocks, version information and some filesystem specific settings. An SFS disk has two Root blocks; one located at the start of the partition and one at the end. On startup the filesystem will check both Roots to see if it is a valid SFS disk. If either one is missing SFS can still continue (although at the moment it won't). A Root block could be missing on purpose. For example, if you extend the partition at the end (adding a few MBs) then SFS can detect this with the information stored in the Root block located at the beginning (since only the end-offset has changed). Same goes for the other way around, as long as you don't change start and end point at the same time. When a Root block is missing because the partition has been made a bit larger, then SFS will in the future be able to resize itself without re-formatting the disk. Field Type Description bheader struct fsBlockHeader Standard block header. version UWORD The version of the filesystem block structure.
You can check this field to identify what version of the filesystem your dealing with it and to see if you can handle this structure correctly. Don't try to interpret the disk's structure when this field contains an unknown version number! sequencenumber UWORD Used to identify which Root block was written last in case the sequencenumber on both Root blocks don't match. datecreated ULONG Creation date of this volume. This is the date when the disk was last formatted and will never be changed. bits UBYTE Various settings, see below. pad1 UBYTE Reserved, leave zero. pad2 UWORD Reserved, leave zero. reserved1 ULONG[2] Reserved, leave zero. firstbyteh ULONG High 32-bits of a 64-bit number. This is the first byte of our partition relative to the start of the disk. firstbyte ULONG Low 32-bits of a 64-bit number. lastbyteh ULONG High 32-bits of a 64-bit number. This is the last byte (exclusive) of our partition relative to the start of the disk. lastbyte ULONG Low 32-bits of a 64-bit number. totalblocks ULONG The total number of blocks this partition consists of. blocksize ULONG The size of a block of this partition. reserved2 ULONG[2] Reserved, leave zero. reserved3 ULONG[8] Reserved, leave zero. bitmapbase BLCK Block number of the start of the Bitmap. adminspacecontainer BLCK Block number of the first AdminSpaceContainer. rootobjectcontainer BLCK Block number of the ObjectContainer which contains the root of the disk (this is where the volume name is stored). extentbnoderoot BLCK Block number of the root of the Extent B-Tree. reserved4 ULONG[4] Reserved, leave zero. AdminSpaceContainers are used to store the location and bitmap of each administration space. The AdminSpaceContainers are located in a double linked list and they contain an array of fsAdminSpace structures. There is one fsAdminSpace structure for every administration space on disk. Field Type Description bheader struct fsBlockHeader Standard block header. next BLCK The next AdminSpaceContainer, or zero if it is the last in the chain. previous BLCK The previous AdminSpaceContainer, or zero if it is the first AdminSpaceContainer. bits UBYTE The number of bits in each in the bits ULONG in the fsAdminSpace structure. pad1 UBYTE Reserved, leave zero. pad2 UWORD Reserved, leave zero. adminspace struct fsAdminSpace An array of fsAdminSpace structures. The size of the array is determined by the current blocksize. Field Type Description space BLCK The first block of an administration space. bits ULONG A small bitmap which is used to determine which blocks in an administration space are already in use. The number of bits in this bitmap is determined by the bits field in the AdminSpaceContainer. The fsBitmap structure is used for Bitmap blocks. A bitmap block is used to keep track of which space is in use and which isn't for a particular area of a disk. All bitmap blocks together keep track of the free space for an entire disk. The location of the first bitmap block is known and all other bitmap blocks are stored in order after the first one. Field Type Description bheader struct fsBlockHeader Standard block header. bitmap ULONG An array of ULONG's. These hold the actual information on which blocks are in use and which aren't. Each bit in a bitmap block (except for the block header) represents a single block. If the bit is set than the block is free, and if the bit is clear then it is full. The first ULONG in the bitmap area of the first bitmap block represents blocks 0 through 31 on the disk. 
Bit 31 of this ULONG is block 0, 30 is block 1, and so on. Bit 0 of the first ULONG represents block 31. Below is a table to clarify how bitmaps work even further. The first column is the bitmap block number, the second column is the number of the ULONG in the bitmap array. The third column is the bit number in this ULONG, and the last column is the block which this specific bit, in this specific bitmap block represents. We'll assume here that a bitmap block has room for 120 ULONG's (meaning there is room for storing 32 * 120 bits). The last bitmap block doesn't need to be completely used. The unused bits (which belong to blocks which do not exist) all have to be clear, to indicate that these blocks are in use. The fsObjectContainer structure is used to hold a variable number of fsObjects structures (Objects) which have the same parent directory. Each ObjectContainer must contain at least one Object. If there is space in the ObjectContainer not used by the variable number of Objects then that space is zero filled. Objects always start at 2-byte boundaries, which means sometimes a padding byte is inserted between two Objects. Field Type Description bheader struct fsBlockHeader Standard block header. parent NODE The node number of the parent Object, or 0 if this object has no parent (which is only the case for the Root directory). next BLCK The next ObjectContainer belonging to this directory, or zero if it is the last in the chain. previous BLCK The previous ObjectContainer belonging to this directory, or zero if it is the first ObjectContainer in this directory. object struct fsObject A variable number of fsObject structures. The number of structures depends on the individual sizes of each fsObject structure and the blocksize. These structures are located directly after each other with at most 1 byte of padding between them to get the structures aligned on a 2 byte boundary. fsHashTable is the structure of a HashTable block. It functions much like the hash table found in FFS user directory blocks, except that it is stored in a separate block. This block contains a number of hash-chains (about 120 for a 512 byte block). Each hash-chain is a chain of Nodes. Each Node has a pointer to an Object and a pointer to the next entry in the hash-chain. Using such a hash-chain you can locate an object quickly by only knowing its name. Field Type Description bheader struct fsBlockHeader Standard block header. parent NODE The node number of the directory Object this HashTable block belongs to. hashentry NODE An array of Nodes. Each Node represents the start of a hash-chain (singly linked). A hash-value is calculated using the name of a file or directory, and this value determines in which chain the Object is linked. If there are no entries in a hash-chain then the hashentry value is zero. To calculate the hash-value using a name of an Object as input use these routines: The BNodeContainer is used to store B-Trees. Currently only one B-Tree is in use by this filesystem and it is used to store the location of file data. The fsBNodeContainer structure contains two other structures. The fsBlockHeader structure and the BTreeContainer structure. Field Type Description bheader struct fsBlockHeader Standard block header. btc struct BTreeContainer Contains information about the B-Tree and its nodes contained in this block. First try and locate the Root block. It should start with "ROOT". SFS has two of these, one at the start of the partition and one at the end. 
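The block header and checksum rules described earlier are what you use to test every candidate block while hunting for the Root block and the containers discussed next. A hedged sketch in C (the struct layout follows the field listing above; classic SFS volumes store these values big-endian, which is assumed here, so byte-swapping may be needed on little-endian hosts):

#include <stdint.h>

struct fsBlockHeader
{
    uint32_t id;        /* four-letter block type code, e.g. "ROOT"                      */
    uint32_t checksum;  /* -(sum of all longs in the block, with this field zeroed, + 1) */
    uint32_t ownblock;  /* block number of the block itself                              */
};

/* Checking: sum every 32-bit word of the block (stored checksum included),
   add one and negate; the block is intact when the result is zero. */
static int checksumOk(const uint32_t *block, uint32_t blocksize)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < blocksize / 4; i++)
        sum += block[i];
    return (uint32_t)(0 - (sum + 1)) == 0;
}

/* A block read from position 'blockno' is a plausible candidate when the id
   matches, ownblock points back at itself and the checksum is valid. */
static int blockLooksValid(const uint32_t *block, uint32_t blocksize,
                           uint32_t blockno, uint32_t expected_id)
{
    const struct fsBlockHeader *h = (const struct fsBlockHeader *)block;
    return h->id == expected_id && h->ownblock == blockno
        && checksumOk(block, blocksize);
}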
One of the fields contains the block size, which will be the size of all important SFS blocks. The root block has the root object container, which contains information about files and directories in the root directory. The object containers basically hold one or more smaller structures that represent files and directories. Scanning them all should give you a list of files and directories. The root block also has the root of the Extent B-Tree. This is a standard B-Tree structure (not a binary tree) that is commonly used in all kinds of systems; you can read about how they work on Wikipedia if needed. The B-Tree holds the information about *where* all the data is located for your files. To recover your files, I'd do this: scan the disk for any block that looks like an ObjectContainer (check the fsBlockHeader's ID, check if the ownblock number is equal to the block you are currently scanning, and check its checksum). So if you currently have block 12, and you see a block with the correct id, and ownblock = 12 and its checksum is good, then that's probably a valid ObjectContainer. Each fsObject found in such a container gives you the first data block (in the field data) and the file size. For small files (less than blocksize) this data block will be enough to recover the data. For larger files, you might be lucky and all the remaining blocks are found after the first one (if the file was defragmented). You can't be sure of that though, so also scan for B-Tree containers (again, blocks with the correct id, ownblock number and checksum) -- the B-Tree consists of non-leaf nodes (blocks that only contain pointers to other B-Tree blocks), the isLeaf flag indicates this. Or it can be a B-Tree leaf block. The leaf blocks contain extra information per entry (see https://hjohn.home.xs4all.nl/SFS/extents.htm) The key should be a block of a file (the first of a range), that is 1 to 65535 blocks long (depending on the "blocks" field). If the file is split up into more parts, then "next" will contain the block number of the next range of blocks. You need to look this up again in the B-Tree structure to find out how large it is. You can for the most part ignore the other structures (bitmap, admin containers). The fsObjects and B-Tree containers are what you'll need to recover the data. Debugging Code. Please use the AROS Bug Tracker if any issues are found. GRUB Command line list How do I get debugging out of InitResident? If running i386 hosted on Linux, put sysdebug=initresident on the command line. This way you can enable any of the listed flags; sysdebug=all stands for "everything". I have an executable (cross-compiled C++ code) which has 6 MB size on disk, but after loading it in memory, 250 MB RAM is taken. Is there any software that would split an AROS executable into its ELF parts and show the actual size values? readelf -S executable will show you all sections in the elf file, including sizes and requested alignment. objdump -h filename That will give you a quick overview of the sections and sizes. Ignore all the .debug.* sections. Would hazard a guess that you have a large .bss section. That's pretty common in C++. Next step: nm --size-sort filename | grep ' [bB] ' The last few will be your biggest consumers. Would suggest -C to demangle the symbols... ;) Suggest profiling the program (just use some printf's in the main loop for time spent in each part), it usually is quite easy to spot slow parts in games or apps. If someone has '#define IPTR ULONG' somewhere: to see where that define is, redefine IPTR in the source code that fails, just above the line that fails, and the preprocessor will tell you where it was defined first. How to set up gdb with AROS/hosted.
Download the AROS sources (AROS-xxxxxxxx-source.tar.bz2, where xxxxxxxx is the current date) and the AROS contrib sources (AROS-xxxxxxxx-contrib-source) from aros.sourceforge.net/download.php Untar-bzip2 and cd to the unpacked archive directory. > tar -xvjf AROS-xxxxxxxx-source.tar.bz2 > cd AROS-xxxxxxxx-source Check the link to "contrib" (contrib-source) inside the directory, and correct it if needed, e.g. like this: > rm contrib > ln -s ../AROS-xxxxxxxx-contrib-source contrib Make sure you have the correct locale setting, otherwise compilation will fail at some point. See here (or the link below) for more on that. You might have to enter this: > export LANG="en_US.ISO-8859-1" Now configure for a debug build - see "./configure --help" for more - here are two examples: > ./configure --enable-debug=stack,modules,symbols > ./configure --enable-debug=all You may "make" now, or choose a separate directory for your build (e.g. for easy removal), for example if compiling for the i386 architecture you could create a directory like this: > mkdir linux-i386 > cd linux-i386 > ../AROS/configure --enable-debug=stack,symbols,modules When done configuring you're ready to go: > make Building AROS takes some time - minutes on fast machines (e.g. 2.5 GHz quadcore), up to hours on slower machines. The result will be AROS Linux hosted with gdb debugging enabled. See the aros.org documentation for more on compiling AROS, including more --enable-debug options. When finished, enter the bin/linux-i386/AROS directory (replace "linux-i386" with your compilation target platform, e.g. linux-x86_64, etc.) inside the unpacked archive directory. This directory contains the required .gdbinit file for properly running AROS inside gdb. > cd bin/linux-i386/AROS Run AROS (here: with 128MB of memory) from gdb: > gdb --args boot/aros-unix -m 128 or > gdb --args boot/arosboot -m 128 (gdb) r Watch the shell output - in case AROS complains about "LoadKeyCode2RawKeyTable: Loading "DEVS:Keymaps/X11/keycode2rawkey.table" failed!" you should also see some instructions on how to create a keymap table. (see the link above, "more on compiling", too.) Quit gdb, and try the default keymap table: (gdb) q The program is running. Quit anyway (and kill it)? (y or n) y > cd ../../.. > make default-x11keymaptable Re-run AROS, as described above. Try e.g. RAros (= right windows key) + W to open a shell. If this doesn't work you have to create a keymap table yourself, so quit gdb again, and make a new keytable: > make change-x11keymaptable A window will open. Watch the window's title bar, and follow the instructions. When done, re-run AROS. RAros + W should now open a shell. Next, compile your program with gdb support. When you start GDB, is there a warning which says: warning: File "<whatever>/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to ... If so, start gdb with "-ix .gdbinit" How to use gdb. In AROS open a shell, then (in the host shell) use CTRL-Z to go into gdb. Use "b Exec_CreatePool" (one of the functions used early on by startup code in programs) to add a breakpoint, then "cont" and gdb will interrupt somewhere early during startup of "program". Use "bt" to show the backtrace and "loadseg" for "??" entries. One of them will be for "program". After that you can use "disassemble program". One thing you need to make sure of is that the .gdbinit you have in your build directory is the same as in the source tree.
It has been modified some time ago, but the build system does not refresh it - you need to copy it manually. To recap, please read our debugging manual: To detect segfaulting when loading, try... ./configure --enable-debug --with-optimization="-O2" Because crash or no crash may depend on optimization. For newer compilers maybe this helps... --with-optimization="-O2 -fno-strict-aliasing" One way to make crashes less random (more easily reproducible) is to activate the munging of free memory in rom/exec/freemem.c which is normally commented out: Mungwall can be turned on at runtime. Currently this works in all hosted versions. Just specify "mungwall" on the kernel command line and it works. It can work on native too. In order to enable it you need to parse the kernel command line, and if "mungwall" is present, set the EXECF_MungWall bit in IntExecBase.IntFlags. This needs to be done before the first AllocMem() for obvious reasons. And never reset this flag back! If you change it on a working system, you are doomed. Hosted ports do the processing in rom/exec/prepareexecbase.c The --enable-debug=mungwall option in configure still works but is going obsolete. A kludge in rom/exec/allocmem.c is responsible for this and it needs to be removed when the transition is done. BTW, on the i386-pc port it can be activated by the "mungwall" argument on the command line, you don't need to rebuild AROS. The new mungwall affects not only AllocMem()/FreeMem(), but also pools. I also tested it with AllocAbs(), it seems to work correctly. Runtime mungwall works on all hosted ports, if the port itself is working. When starting my freshly rebuilt i386-linux-aros, which was compiled with full debugging support, I sometimes get the error "Program exited with code 0377". Add the following to your .gdbinit: set follow-fork-mode child Here are some of the custom AROS gdb functions (defined in the ".gdbinit" file) to resolve "in ?? ()" entries in a backtrace: You can use loadseg 0xb7c63900 loadframe 2 or loadbt and some others. Use "help " for a little help text. If the commands do not work try "loadkick" first. Use "thistask", "taskready", "taskwait" to get a list of AROS tasks. "bttask " shows the backtrace of a task which is in the ready or wait queue; use "loadseg" to resolve "??" entries in its backtrace ("loadframe" would not work as it assumes the currently running task). Native debugging tools for AROS. To enable debugging at boot time, enter the GRUB menu, edit the boot line (E key) and add "debug=memory" to it, then press Ctrl+X to complete booting. SYS:Tools/Debug/Bifteck Open a shell and enter the line below to run Bifteck and grab the debug messages collected in RAM into a text file. tools/debug/bifteck > ram:debug.txt It certainly does not open a window; it is a shell tool and only dumps data from the debug location. It is therefore important to 'catch' that debug data as soon as possible (before it gets overwritten). You should invoke bifteck at the first opportunity before doing anything else. You can use the TO option to store bifteck output to a file or you can pipe it manually to a file. SYS:Tools/Debug/Sashimi - displays error messages One suggestion is to use bug() debugging. Each time bug() is executed its output shows up in sashimi. You include <aros/debug.h> and place bug("something\n"); in your source code at locations through which control passes. To get the output, open an AROS shell: SYS:Tools/Debug/sashimi > RAM:out.txt Ctrl C to end the output to the RAM Disk.
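A minimal sketch of that bug() technique (the DEBUG define, program name and messages are illustrative; bug() and the commonly used D() wrapper come from <aros/debug.h>):

#define DEBUG 1            /* define before the include so D() actually emits output */
#include <aros/debug.h>

int main(void)
{
    bug("myprog: entering main\n");            /* always goes to the debug output */
    D(bug("myprog: about to do the work\n"));  /* only when DEBUG is enabled */

    /* ... the code being investigated ... */

    bug("myprog: leaving main\n");
    return 0;
}

Run Sashimi as shown above (or Bifteck/sysdebug on other setups) to capture these lines while the program runs.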
SYS:Utilities/Snoopy - monitors OS function calls, run "Sashimi" to see Snoopy's output SYS:Tools/WiMP - the Window (and Screens) Manipulation Program You can use the -E option of gcc to find out how preprocessor macros are expanded. Errors. crash in strcasecmp usually means that one of its arguments is NULL. empty space between these two names, prossibly some invisible character Old Amiga Guru Codes If the crash is in intuition. Sometimes, if it relates to text, a null pointer sets it off. an uninitialised pointer can have any address (this is a common fault). Compiling on 64bit, Many old code would not properly typecast when doing pointer-integer conversions and thus at least throw a warning. This can easily be located and fixed. Portable code Then again, many current compilers also throw a warning when you try to assign a pointer value to an integer and the integer is possibly too small. This happens under .NET for example when a 64 bit pointer is assigned to something like an ULONG - so exactly the case which you described. Linux. /* 1. Header for your name,date,purpose of program. 2. Pre-processor directives. This will include the #includes for files you want to add. 3. Includes for function prototypes if necessary. 4. Main() Create Pointers for Libraries and any Window you want to open. 5. Open necessary libraries. 6. Check if open exit program if fail. 7. Open a window exit program if fail. 8. Add your program 9. Close Window 10 Close Libraries. 11 End Program. */ /* standard os included headers <.h> */ /* define as unresolved external references (proto/xxx.h) and compiler will link to auto(matically) open library */ /* my own headers ".h" */ int main(void) return retval; } /* main */ If you used c++, there is not yet c++ support in our shared library system. The easiest way to create / compile a shared library would be to use the AROS build system but the libraries can be created manually. You have to create a ROMTAG structure and some header files. A shared library is built with the %build_module macro with a line like this: %build_module mmake=MetaTarget modname=mylib modtype=library files=SourceFiles This macro can build different AROS module types, like devices, Zune classes, HIDDs, etc. There is more here Alternatively, Makefile Porting UNIX library to AROS - dealing with static variables which would make it easy to port such libraries to AROS, keeping the benefits of sharing them on disk, but losing the benefit of actually sharing them in memory. Our problem arises by the fact we want to share the actual code (the .text section of the library) and constant data, but we need to have per-task .bss and .data sections. If we get rid of our intention to share the .text and .rodata sections, things get quite easy: just load and relocate the library whenever it's open, by whoever it's open. It's like statically linking the library into the executable, except that the final linking is done at runtime. In the V0 branch, in workbench/hidds/hidd.nouveau was committed pcimock.hidd. This is a pci driver that allows mocking real PCI devices under linux-hosted. The main idea is to be able to run the real hardware driver under linux-hosted with as little changes as possible (some changes will always be needed though unless someone wants to write complete device simulator) so that driver's code paths can be executed and debugged using gdb. This was a very helpful capability when porting nouveau. Now it is externalized from nouveau.hidd and can be used by other people porting drivers. 
The pcimock.hidd can currently mock 4 different nvidia cards, 1 AGP bridge and also mock irq.hidd. What's the difference between this driver and the pcilinux.hidd? I used that one to develop many different HW drivers for aros. As far as I understood the intention of pcilinux.hidd it is supposed to get access to real hardware that is running under linux. The pcimock.hidd goal is to mock the hardware. For example my dev box is a PCIE system, but I still would like to run the AGP codes paths in nouveau under linux-hosted to check if they don't seg fault. The other case would be to run codes paths for hardware that the developer does not have (Fermi cards in my case). In the case of pcimock.hidd, the AROS driver's code paths will execute as long as you add proper mocking (for example fill in PCI config area or values for registers in BARs). This is an advantage for ported drivers - the code should already work (since it worked on another system) but there might have been mistakes made during porting which can be detected easily with gdb. In case you are writing your driver from scratch, pcilinux.hidd hidd will give you more advantage, since you can actually access the real hardware from linux-hosted. Misc. APL, MPL, BSD, GPL and LGPL Licences. The majority of AROS sources in licensed under AROS Public License (APL) which (to a degree) protects us from someone taking AROS sources and not contributing improvements back (for example MorphOS took some AROS source and then contributed changes back) It is written to allow the use of AROS code in other open source or commercial projects without exception whilst providing a mechanism so that improvements/additions can find their way back to the original source in one form or another. There are "3rd" party applications used by AROS that do not fall under this license, which are an extra "Contrib" download for convenience. Anyone can port GPL-ed network and sound drivers as AROSTCP and AHI are GPLed. Direct using (porting) GPL-ed code in other parts of AROS (gfx, sata, usb) is not possible because AROS license is not compatible with GPL. You need to utilize permissive licensed code like BSD or MIT/X11. BSD and MPL license are the closest to APL. APL however is not so compatible with LGPL/GPL. LGPL case - you cannot statically combine APL code with LGPL. You can, however thank to LGPL being "lesser" restrictive, use LGPL dynamically loaded libraries in APL codes. GPL case - you cannot combine APL code with GPL in any way if there is no explicit clause by GPLed code authors allowing that. If you do combine APL with GPL in "bad" ways described above - you have a problem (you violate GPL). This problem might result in everything in AROS becoming GPL or everything running or AROS becoming GPL (here I'm not sure really). The other scenario is that you are not allowed to legally distribute such code at all. To be honest I have grasped how to violate GPL, but I'm still no exactly sure what happens when you violate it (but I'm sure it's not anything nice) GPL software can run on top of non-GPL "system components" (see system components exception of GPL), but the other way around (non-GPL using GPL) leads to problems. This means applications like scout, or Quake III are ok (in the majority of cases). Theres no reason GPL drivers cannot be ported - but they cant be in AROS's ROM (requires linking APL code with GPL), nor can AROS depend on them (e.g. they must use existing apis). If they are launched (dynamically linked) by a user action that is allowed. 
It is also allowed to distribute such binaries together for convenience. GPL is not about static or dynamic linking but about executing processes and function calls. These components - SFS, isapnp, Zune texteditor, AHI, network drivers, freetype, openurl, BHFormat, Edit and ("dynamically loaded libraries") - are LGPL, not GPL. Mesa/Nouveau stuff is MIT. Some user tools are GPL though. AROS (system) Contrib: About stealing code: The chances of this happening are exactly the same whether we are APL or GPL. If any closed-source party wanted to do it, there is no one that could validate otherwise. MorphOS has used some AROS code, but contributed changes back. The rationale behind APL is that while it guarantees that the original developer will get the improvements back (to a certain degree - file based), the person who uses the code does not have to open his original code. BSD does not guarantee that the original developer gets improvements. GPL requires the person using the code to open his code as well. The copyright holders need to stay - we just need information from them that the code is available under APL (for example a checked-in file, like in the case of Poseidon). We don't do transfer of copyrights. AROS source code tree. Found this interesting (non-GPL) licensing 'anomaly' - to keep in mind for distributors. Programs that lose their license if sold ("non-profit only" licensed): contrib/aminet/comm/term/TinyTerminal contrib/aminet/dev/basic/bwBASIC contrib/aminet/text/edit/xdme contrib/fish/aroach contrib/fish/lotto contrib/fish/shuffle contrib/fish/touch + cdvdfs. Here is a list of all the GPL/GPLv2/GPLv3 licenses fossology found that have explicit licenses in their comments, excluding LGPL, BSD/GPL dual licensed code and programs (such as Prefs/Edit and BHFormat). AHI: it has special provisions (COPYING.DRIVERS). The library is LGPL, the preferences software is GPL and drivers can be anything without breaking GPL/LGPL. Network stack: well, we are long overdue for a new, IPv6 enabled network stack anyway, anyone interested? ;) Seriously though, it seems like the glue code is GPL, as are all the drivers. However some of the drivers are our own code, so they could be relicensed to LGPL. Same filter as the AROS trunk list. These should all be libraries or plugins - no programs. http://www.evillabs.net/AROS/Audit-2012-03-14/AROS-contrib.txt Types. On AROS the following rules apply (point 4 is actually important): if you want to write a really portable app, you may be interested in the standard datatypes defined in C99: int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, int64_t, uint64_t, intptr_t, uintptr_t. They are all defined in the inttypes.h include file. In exec/types.h the following short-cuts are typedef'd. They are used often in AROS, so you should nearly always include exec/types.h. Soon they will be removed from the sys/_types.h include; all types are now defined in include files named aros/types/xxx.h. Compiler specific types, like int and long, might change their size. In the case of AROS, similar to Linux, int remains 32 bits whereas long grows to 64 bits in size. If you use Amiga-like data types, i.e. BYTE/UBYTE, WORD/UWORD, LONG/ULONG and QUAD/UQUAD, or the C99 standard types (uint8_t and so on, see the stdint.h include), then you should have fewer issues to solve than by using types without a size guarantee. Of course, all pointers grow to 64 bits on a 64-bit CPU. Most of the code can be just recompiled and will work. In rare cases, where e.g.
pointers are cast to integers, special care must be taken. This applies especially to cases where a pointer is cast to LONG/ULONG (such code will break on 64-bit AROS), e.g. '#define IPTR ULONG'. Most compiler delint patches are simple casting fixes to make the compiler happy. Notice that some of the changes involve introducing double casts, needed with very recent versions of GCC: the bulk of the double casts are for converting 32-bit addresses (i.e. from a 32-bit PCI DMA address register) to a 64-bit pointer. The first cast is to IPTR (to expand to 64 bits, and prevent sign extension if the address is above 0x7FFFFFFF), and then to APTR. ULONG != IPTR except on 32-bit, so if you need to store pointers make sure to use IPTR and not ULONG (which some old code does). For this reason things like Taglist elements are 64-bit (since the tag data can be a pointer). If you're passing items on the stack you should use the STACKED attribute to make sure they are correctly aligned (on 64-bit, all items on the stack are 64-bit). There are more issues; for example, comparisons like "== 0L" can cause problems. Endian. Use the macros from <endian.h> instead of making a guess based upon architecture defines. SVN and GIT. If you want to help develop the AROS OS itself, you can. If you have SVN access (early 2015 introduced a new SVN server; create a new account at trac.aros.org) and/or have obtained the source from the AROS site, you can compile the current build tools/environment using: > make development and follow this procedure or Guide: https://trac.aros.org/trac#Developing If you plan on contributing back changes, please post information about such changes first on this mailing list so more experienced developers can validate whether they are correct. Then there are the nightly build machines. They svn update before the build and run configure as one of the next steps. autoconf might be added to the nightly build scripts. Our build relies on packages downloaded from the Internet (SDL for example) - it has always worked this way. The minimal requirement (when just building core AROS) is binutils and gcc. If you build contrib as well, you need many more packages to be downloaded. https://gitorious.org/aros/aros/commits/crosstools-II git://gitorious.org/aros/aros.git Branch crosstools-II has only one commit on top of ABI_V1 and builds OK. SDI Calls. Integrate the 'SDI' headers to allow easier porting to all Amiga-like platforms. Have "SDI_compiler.h" and "SDI_hook.h" included; it is more organized to write #include SDI/SDI_hook.h than #include SDI_hook.h (option 1) - this is also used when back-porting from Amigas. Also, you can add the -I include/sdi/ location if you do not want to add or edit any files. Defining HOOKPROTO to IPTR name(struct IClass * cl, Object * obj, Msg msg); solved the problem. A MUI application most likely needs only the HOOKPROTOxxx SDI macros. They are compatible with AROS, only the attributes (hook, object attribute) must be given in the right order. Examine compiler/include/aros/symbolsets.h (AROS_LIBREQ). When compiling MUI stuff for AROS, setting -std=gnu99 is necessary (-std=c99 had been used most of the time before). Locale with Flexcat. Most languages have a locale, but not every app is localized; the only thing needed is to translate the "catalog" files. It is a case of locating the correct catalog and saving the translated version. For every app that lacks your language catalog and is localized anyway, you should find (in the sources) files related to locale: Compare with other localized apps...
Then, "make my_app-catalogs" should create and install your translated catalogs. ex : for, saying, sys:prefs/wanderer: on root of AROS sources, type: "make workbench-prefs-wanderer-catalogs" then (if you changed the .cd file): "make workbench-prefs-wanderer" For apps not localized, you have to adapt their code to support it, if it is possible... noticed the original .cd file has many (//) strings at the end of any voice, so added them also to the .ct file. That (//) is only for cd files. I'm highly recommending to use FlexCat for updating ct files, e.g. like this: flexcat app.cd deutsch.ct newctfile deutsch.ct You'll get error checking and new entries are marked in the resulting ct file. When editing .ct files, only change those lines containing translation and perhaps version string, nothing else. The rest is up to the relevant tool, flexcat. In order to update your translation, type in the following in your shell: flexcat xyz.cd xyz.ct NEWCTFILE xyz_upd.ct COPYMSGNEW This way you will not only make sure you have correct translation file but flexcat also pre-fills newly added strings with "*** NEW *** text. Even better tool for checking cd/ct/catalog files is catcheck, but this one is sadly only available for AmigaOS/68k... Some languages have variations, like portugues from portugal and portugues from brasil differs... This is the way to go. I will have a look at language files, but basically if those two languages differ you have to do two separated set of translation files, yes. Please, use Flexcat to generate CT files: FlexCat wanderer.cd NEWCTFILE=deutsch.ct Then fill the first 2 lines with something useful: You can even update the CT-File: (This adds the new strings) FlexCat wanderer.cd deutsch.ct NEWCTFILE=deutsch.ct To compile a catalog you only need the .cd file and your translation (.ct file): FlexCat multiview.cd deutsch.ct CATALOG=MultiView.catalog Linux version of FlexCat A script which compares the required version (i.e. the version which an application/module etc. tries to open) with the version of the existing CT files. The result is in this table: https://github.com/aros-translation-team/translations/wiki/Progress The following cases are highlighted: n/a i.e. CT misses at all version in existing CT file is lower than the required version It might be a bit difficult to participate if you haven't worked with Git before but alternatively you can send your CT files to our Slack channel. When the ct file has been generated via flexcat (flexcat keyshow.cd NEWCTFILE=spanish.ct) it has the following header: Those values <ver>.<rev> are the version and revision of the CT file for the languaje or are the values of the application being localized? The <ver> part must match with version which the application tries to open. You can find the value either in the column "Required Version" in the table which I've linked above, our you can look in the git repository. For keyshow it would be https://github.com/aros-translation-team/keyshow. You can find in the file "catalog_version.h" the right version number. The <rev> part starts for new CT files with 0 and should be increased every time the CT file is updated. Updated several files and created a few more that were missing on the spanish catalog. The catalogs are in Git repositories at https://github.com/aros-translation-team a) You tell me your Github user name. I'll invite you. You can work directly with the Git repositories. b) You create Github forks of the catalog repositories and create pull requests. 
c) You send the CT files to mrustler gmx de C Utils Misc. The AROS source uses at several places the __DATE__ macro to fill the date entry of a $VER tag. Problem is that c:version doesn't understand that date format (e.g. "May 21, 2011"). As a result the output of e.g. > "version c:shell full" contains "(null)". Is extending the version command to understand the format of __DATE__ the right solution for that problem? AmigaOs compilers should use __AMIGADATE__ macro or similar form, if it isn't implemented it could be emulated in makefile: -D__AMIGADATE__=\"$(shell date "+%d.%m.%Y")\" BTW. I think DD.MM.YYYY is better format than "Month DD YYY" because "Month DD YYY" is not localized in any way. "strnicmp" shouldn't work with NULL pointers The Situation: compiled a linklib using c++ object files (using the c++ cross compiler). compiled a C stub that uses the linklib (using the c++ cross compiler). Try to link them together (using the c++ cross compiler) with C object files (using the normal target c compiler) that need to use -nostartup = cant do because using the c++ files pulls in arosc (for stdio etc) - so wants to have the autoinit stuff present. What can I do about this?? If it is possible to manually open it then what do I need to do exactly? ENV. The philosophy behind ENV: is that keeping configuration files there allows you to 'Use' preferences by keeping a copy in ENVARC: intact. However in some cases (like this one) it is not required. 99% of the time that statement is true (not required) for pretty much every file in ENV: or do people change their default icons - and prefs settings - every boot? There seem to be a bad habit of late with developers changing things to reflect their own personal preference when the change isn't actually necessary - It would be nice if people could refrain from doing that in the tree without at least discussing it on the dev-list first (and with good reasoning unless they commited said work in the first place..) We're not keen on the pollution of the "S:" dir: it's meant to be for scripts. What's wrong with "ENV:"? Only the fact that it takes up RAM. I understand that for PCs with several gigabytes of RAM this is irrelevant. But let's remember about other machines. The philosophy behind ENV: is that keeping configuration files there allows you to 'Use' preferences by keeping a copy in ENVARC: intact. However in some cases (like this one) it is not required. How about implementing in the style of HappyENV then? RAM-disk handler that falls through to reading from ENVARC: if there is no such file stored in it already. Removes RAM usage for unchanged files, removes the need to copy ENVARC to ENV in startup-sequence. Shouldn't be too hard to make from AmberRAM, or even just extend AmberRAM to provide this service. Is it feasable to build a special version of AmberRAM handling ENV: that will try and copy the requested file from ENVARC: if it isnt found in ENV: ? Additionaly it could mark closed "files" as untouched - and expunge them from ENV: after a period of time to free up additional RAM:, or when the system is running low on free memory? Silenty disappearing files may not be a good plan. Would be nice if the following would work: ASSIGN :ENV SYS:Prefs/Env-Arc ADD ASSIGN :ENV RAM:ENV ADD Where new files put in ENV: end up in RAM:ENV, and opening files looks in RAM:ENV first, then SYS:Prefs/Env-Arc Well - that's essentially what im proposing but without the assigns - or need for a RAM:ENV directory. 
Adding it as a feature of AmberRAM sounds like the most memory efficient way (one handler to load in RAM) but that's only if it is possible to make it handle ENV: additionally to RAM:, and if it is even possible to add the proposed functionality (...and how to make it enable it when accessing ENV:). (AS)MP support. If one has to recompile software for SMP multi core, is there any thing special one has to do to get software to run? Use task.resource if you need to query information about what tasks are running, and clear msgports completely when they are allocated. Most code should not need Forbid. Use Semaphores, Messages etc to sync your own code. Accessing system structures is a different thing. Use the proper API whenever possible. How single structures will be protected in the future is still a moving target, at least it is not documented. And you should never use undocumented stuff Ideas on for SMP multi-core Another suggestion is ... Forbid/Permit function calls are meant to halt multitasking so as no other task could intervene with what ever the calling task is doing, e.g. setting semaphores. Disable/Enable calls are meant to halt interrupts and as a side effect they also halt task switching. One option is to make it compulsory to protect shared resources with semaphores and forbid the use of simple Forbid() calls as to protect something. Setting semaphore should be done if possible with atomic instructions (check and alter in one instruction). Or make the second concurrent ObtainSemaphore call halt the second calling task and force if possible a task switch which ever gives better results. Semaphores could store the owning tasks task pointer instead of boolean to make things easier. As long as the CPU initiates the DMA transfers through the OS, and the OS ensures that the transferred memory is within the region accessible to the user initiating the transfer, everything is fine. The CPU is the conductor, and the CPU by that has the control of which DMA transfer is initiated and which is not. All you need to do is to write device drivers reasonable. Hint: CachePreDMA and CachePostDMA exist. All the Os has to do is to verify that the memory regions to be transferred are valid, and prohibit direct access to the DMA control registers from user space. None of these algorithms imply huge costs. The current OS design doesn't really allow virtual memory in first place, Forbid() is again the problem. memory.library API seem to low level. IMHO the programs should not know how the swapping is implemented. I would just go for one new memory flag MEMF_SWAPPABLE that indicates that a certain memory region or a whole memory pool won't be accessed during Forbid()/Permit() etc. It only solves part of the problem, it only implements virtual memory and not memory protection. For the latter you need to be able make certain memory inaccessible by other programs, some memory read-only for one task and read-write for other tasks, etc. And I think this should be done in the same Address Space in order to avoid you constantly need to swap between different address spaces. So to summarize, if there are programs using this API we may provide a wrapper layer to get them working but I am not convinced this API should be the reference API with whom to provide VM to AROS programs. Variadic. variadic functions (i.e. functions with an arbitrary amount of arguments). Please keep with using stdarg rather than having va casted to a LONG * type and varargs handled manually. 
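A minimal sketch of the stdarg approach (the function name myprintf is only illustrative; it mirrors the example discussed a little further below):

#include <stdarg.h>
#include <stdio.h>

/* Forward everything to vfprintf() instead of poking at the stack manually. */
void myprintf(const char *format, ...)
{
    va_list args;

    va_start(args, format);
    vfprintf(stdout, format, args);   /* takes a va_list, unlike printf() */
    va_end(args);

    fflush(stdout);                   /* useful when the output is captured */
}

Individual arguments can also be fetched with va_arg(args, type) between va_start() and va_end(), as explained below.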
Doing so, prevents tons of casting, where a simple va_arg can be used. So, string = *((char **) args) instead of string=va_arg(va, char *). Couldn't find varargs.h or stdarg.h. and have no use for AROS_SLOWSTACKHOOKS or AROS_SLOWSTACKTAGS. GCC looks for stdarg.h in a different place: /bin/linux-i386/tools/lib/gcc/i386-aros/4.2.2/include/stdarg.h Here is a path for a "normal" header: bin/linux-i386/tools/lib/gcc/i386-aros/4.2.2/../../../../i386-aros/sys-include/aros/system.h The use of vararg.h isn't supported by newer gcc versions. If you want your code to run on architectures that pass part of variadic arguments in a number of registers you need to use AROS_SLOWSTACK macros. Otherwise your program will not work on powerpc and x86_64 ports. Of course the SLOWSTACK stuff is not needed in a function that can use va_list, va_start, va_arg and va_end. It's only needed if you want to write functions like DoMethod or similar. #include <stdarg.h> should be enough no matter if you do cross or native compiling. If it does not work, something is wrong and should be corrected. Stdarg.h is here, Development:lib/gcc/i386-aros/4.2.2/include/ ...which is part of the compiler's default include paths. In other words, #include <stdarg.h> works out of the box, indeed. (sorry, I should have just tried it before invoking "search" or "find"...) furthermore, myprintf() as shown above won't work, because... printf(format, args); ...is wrong - the second argument does not match printf() prototype, it expects a argument list, but args is of type va_list (obviously) - so one has to use... vfprintf(stdout, format, args); ...instead, just like in the original printf(), and add fflush(stdout). additionally, one could use... int myarg = va_arg(args, int); ...between va_start() and va_end() to access individual arguments, where each call to va_arg() returns an argument casted to the desired type (here: "int") from the list given (here: "args") and advances to the next one. wrapping up vfprintf() and modifying the format string now is a major speedup! no more backslash-n typing! this has been haunting me for years! On MOS and AmigaOS, the NewObject variadic function is kept in the static library. It takes most of the parameters on the stack - thanks to that the implementation of NewObject calls the NewObjectA function. Everything works perfect, and the typical MUI macros may be easily used. This, however, is not the case when you compile for AROS. Here, NewObject is a variadic macro, not a function. Thanks such approach we do not need any custom compiler in case of systems, where the arguments of variadic functions are passed partially through registers and partially through the stack (This is the case of PPC and x86_64, this is also the reason why both OS4 and MOS require specially patched compilers). Since NewObject is a macro, the gcc's preprocessor expects the list of macros arguments enclosed within parentheses. In MUI macros it is not the case. Imagine the following test code: This will compile and work, but the following piece of code: will fail with the error: unterminated argument list invoking macro "foo" There are two ways of fixing your issue. Either create your new objects outside this huge MUI constructions, and in there use just a pointer, or get rid of the "End" macro and exchange it with "TAG_DONE)". Badly written software is, for example, casting va_list to an APTR or even doing so as if va_list were a plain table of function arguments. 
Such code needs to be fixed because it has very few chances to work anywhere but on the author's machine ;) The problem is not that they assume sizeof(APTR) == 4, it's that they often do not use APTR at all, and use ULONG to store pointers exclusively. If the code used APTR/IPTR as it should, most of the "problems" wouldn't exist. It would also help if people would start using variadic arguments properly. Many coders make assumptions which should never be made. Instead, they should consider using the stdarg.h file and all the va_* functions :) ABI. In the head of our SVN repository there are now only 3 directories: We have added two extra dirs there: tags and imports As discussed, when we branch ABI V0 and ABI V1 it would also be good to introduce tags. Normally this is done in a directory in the repository called tags. Currently we don't have this directory there. (We do have branches/tags; that is a hack I have done because one doesn't have write access in the top directory. I think this directory is not clean and should be removed.) The second directory I would introduce is an imports directory for implementing vendor branches as discussed in the svn book. Currently we use code from several different projects and that code is stored inside the AROS tree; we seem to have problems with keeping this code up to date and merging our changes upstream. Maintainers of upstream projects like the MUI classes etc. have complained about this (to put it lightly). Introducing these vendor branches would make it easier to see what changes we have made, make patches to be sent upstream, and make it easier to import newer upstream versions of their code. Although one can't "copy" the vendor branch into the main branch because it's already there, so start with a "merge". Yes, the first step, making the code already in the repository compatible with the vendor branches, will be the most difficult. The best way to do it is the following: for example, place NList directly under vendor and not in a subdirectory like "contrib/zune/classes". Actually, after we have a stable ABIv1 (2012 or later), we need to move away as much as possible from the contrib directory to some other repositories. The reasons are ... If there is really a need for a place for hosting AROS projects we may investigate setting up such a server, but then including bug tracking, governance, mailing lists, etc. for each project separately. I personally think there are already enough places like sourceforge, google code, savannah, etc. where people can go for hosting such projects. Links. In the future...? What would you like to see implemented in AROS? ABIv1 completed, SMP (x86_64), SendMsg()/GetMsg() to support memory protection between target and destination, in that order. Michal Schulz and Jason McMullan have been toying with the question "What are the minimal changes needed to the AmigaOS 3.1 API to support SMP?" The answer so far seems to be "few, but subtle". For example, SysBase->ThisTask is no longer meaningful on SMP, but FindTask(NULL) is. Disable() and Forbid() are shockingly bad for performance, but adding a spinlock semaphore mode to SignalSemaphore will help new code on SMP. Leveraging a 'common' OS with a lot of machine support (Linux, MacOS, Windows, QNX, etc) is something that AROS has been doing for quite a long time, and it is the biggest strength of AROS.
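On the SMP question above, the two habits that matter most for new code can be shown in a few lines: ask exec who the current task is via FindTask(NULL) rather than reading SysBase->ThisTask, and guard shared data with a SignalSemaphore rather than Forbid()/Permit(). The sketch below only uses the long-standing exec.library calls; the shared counter and the printout are invented for illustration, and nothing here depends on the future spinlock mode mentioned above:

#include <exec/types.h>
#include <exec/tasks.h>
#include <exec/semaphores.h>
#include <proto/exec.h>
#include <stdio.h>

static struct SignalSemaphore counter_sem;  /* protects shared_counter      */
static ULONG shared_counter;                /* example of a shared resource */

void counter_setup(void)
{
    InitSemaphore(&counter_sem);
}

void counter_bump(void)
{
    struct Task *me = FindTask(NULL);       /* instead of SysBase->ThisTask */

    ObtainSemaphore(&counter_sem);          /* instead of Forbid()          */
    shared_counter++;
    printf("%s saw the counter at %lu\n",
           me->tc_Node.ln_Name, (unsigned long)shared_counter);
    ReleaseSemaphore(&counter_sem);         /* instead of Permit()          */
}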
Layering this AROS experience and programming model on top of such a 'common' OS, in the same way that Google's Android layers on top of Linux, or Mac OS X layers on top of the Darwin/BSD kernel, could be a first step. This would also allow every window to be on its own 3D surface with backing store, letting Wanderer (or a Commodity) rearrange/zoom/animate application windows without having to send a pile of refreshes to them. Yes, it will require a lot of work in the libraries to make this transition. Yes, I do think it will be worth it in the end. So, what would this 'AROS of the future' look like? And why would anyone want to program on such a system? Another wish: an option for mmake to dump its metatarget dependencies into a graphviz[*] input file. This should make it possible to visualize the dependencies and hopefully be an inspiration for cleaning up some mess, circular or unneeded dependencies and so on. AmigaOS gcc 9 Old versions able to create binaries for AmigaOS and Upgrading gcc versions
20,586
Aros/Developer/Docs/Examples/HelloWorld. Description. This is an example 'Hello World' program to demonstrate the basics of using C under AROS. The Code.
#include <stdio.h>

int main(void)
{
    puts("Hello, world!");
    return 0;
}
Save this as a plain text file named 'helloworld.c'. Inside AROS you can do this by opening a shell (e.g. hit RightAROSKey + w in Wanderer), typing 'edit helloworld.c', entering the code above, hitting RightAROSKey + w to save it, and RightAROSKey + q to quit editing. Making It Compile. If you've created 'helloworld.c' inside AROS, compile it with: gcc -o helloworld helloworld.c To cross-compile it from Linux, provided you've got a proper SDK installed, making the executable helloworld program on AROS-linux-hosted takes this step: i386-aros-gcc -o helloworld helloworld.c In both cases, the result will be an executable called 'helloworld', which should run under AROS. Running It. In case you've cross-compiled from Linux, move the executable into the AROS-linux-hosted installation tree, e.g. like this (assuming 'AROS/' is the root directory of AROS-linux-hosted, which corresponds to the 'System' partition inside AROS): > mv helloworld AROS/ Inside AROS, find and run your executable via Wanderer, in our case: double-click the 'System' icon on the Wanderer desktop, then double-click the 'helloworld' icon (you might have to hold the right mouse button and select 'Window -> View -> All Files' to see it). A window will be opened displaying the program's output. Or, to run it from AROS' shell, assuming you are still in the same directory where you compiled it, just type its name: System:> helloworld
491
Calculus/Hyperbolic functions. Theory. The independent variable of a hyperbolic function is called a hyperbolic angle. Just as the circular functions sine and cosine can be seen as projections from the unit circle to the axes, so the hyperbolic functions sinh and cosh are projections from a unit hyperbola to the axes. Definitions. The hyperbolic functions are defined in analogy with the trigonometric functions: The reciprocal functions csch, sech, coth are defined from these functions: Derivatives of hyperbolic functions. formula_11 formula_12 formula_13 formula_14 formula_15 formula_16 Principal values of the main hyperbolic functions. There is no problem in defining principal branches for sinh and tanh because they are injective. We choose one of the principal branches for cosh. Inverse hyperbolic functions. With the principal values defined above, the definition of the inverse functions is immediate: We can define formula_23, formula_24 and formula_25 similarly. We can also write these inverses using the logarithm function; these identities can simplify some integrals. Derivatives of inverse hyperbolic functions. formula_29 formula_30 formula_31 formula_32 formula_33 formula_34 Transcendental Functions. Hyperbolic functions are examples of transcendental functions: they are not algebraic functions. Other examples of transcendental functions are the trigonometric, inverse trigonometric, logarithmic and exponential functions.
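For reference, the definitions and derivatives discussed above are the standard textbook ones; in LaTeX notation they read:

\sinh x = \frac{e^{x}-e^{-x}}{2}, \qquad \cosh x = \frac{e^{x}+e^{-x}}{2}, \qquad \tanh x = \frac{\sinh x}{\cosh x}

\frac{d}{dx}\sinh x = \cosh x, \qquad \frac{d}{dx}\cosh x = \sinh x, \qquad \frac{d}{dx}\tanh x = \operatorname{sech}^{2} x

\operatorname{arsinh} x = \ln\!\left(x+\sqrt{x^{2}+1}\right), \qquad \operatorname{arcosh} x = \ln\!\left(x+\sqrt{x^{2}-1}\right), \qquad \operatorname{artanh} x = \tfrac{1}{2}\ln\frac{1+x}{1-x}

\frac{d}{dx}\operatorname{arsinh} x = \frac{1}{\sqrt{x^{2}+1}}, \qquad \frac{d}{dx}\operatorname{arcosh} x = \frac{1}{\sqrt{x^{2}-1}}\;(x>1), \qquad \frac{d}{dx}\operatorname{artanh} x = \frac{1}{1-x^{2}}\;(|x|<1)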
383
Calculus/Taylor series. Taylor Series. Here, formula_1 is the factorial of formula_2 and formula_3 denotes the formula_2th derivative of formula_5 at the point formula_6 . If this series converges for every formula_7 in the interval formula_8 and the sum is equal to formula_9 , then the function formula_9 is called analytic. To check whether the series converges towards formula_9, one normally uses estimates for the remainder term of Taylor's theorem. A function is analytic if and only if a power series converges to the function; the coefficients in that power series are then necessarily the ones given in the above Taylor series formula. If formula_12 , the series is also called a Maclaurin series. The importance of such a power series representation is threefold. First, differentiation and integration of power series can be performed term by term and is hence particularly easy. Second, an analytic function can be uniquely extended to a holomorphic function defined on an open disk in the complex plane, which makes the whole machinery of complex analysis available. Third, the (truncated) series can be used to approximate values of the function near the point of expansion. Note that there are examples of infinitely often differentiable functions formula_9 whose Taylor series converge, but are "not" equal to formula_9 . For instance, for the function defined piecewise by saying that formula_15 , all the derivatives are 0 at formula_16 , so the Taylor series of formula_9 is 0, and its radius of convergence is infinite, even though the function most definitely is not 0. This particular pathology does not afflict complex-valued functions of a complex variable. Notice that formula_18 does not approach 0 as formula_19 approaches 0 along the imaginary axis. Some functions cannot be written as Taylor series because they have a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable formula_7; see Laurent series. For example, formula_18 "can" be written as a Laurent series. The Parker-Sockacki theorem is a recent advance in finding Taylor series which are solutions to differential equations. This theorem is an expansion on the Picard iteration. Derivation. Suppose we want to represent a function as an infinite power series, or in other words a polynomial with infinite terms of degree "infinity". Each of these terms are assumed to have unique coefficients, as do most finite-polynomials do. We can represent this as an infinite sum like so: where formula_6 is the radius of convergence and formula_24 are coefficients. Next, with summation notation, we can efficiently represent this series as which will become more useful later. As of now, we have no schematic for finding the coefficients other than finding each one in the series by hand. That method would not be particularly useful. Let us, then, try to find a pattern and a general solution for finding the coefficients. As of now, we have a simple method for finding the first coefficient. If we substitute formula_6 for formula_7 then we get This gives us formula_29 . This is useful, but we still would like a general equation to find any coefficient in the series. We can try differentiating with respect to x the series to get We can assume formula_31 and formula_6 are constant. This proves to be useful, because if we again substitute formula_6 for formula_7 we get Noting that the first derivative has one constant term (formula_36) we can find the second derivative to find formula_37 . 
It is If we again substitute formula_6 for formula_7 : Note that formula_37's initial exponent was 2, and formula_43's initial exponent was 1. This is slightly more enlightening, however it is still slightly ambiguous as to what is happening. Going off the previous examples, if we differentiate again we get If we substitute formula_45 we, again, that By now, the pattern should be becoming clearer. formula_47 looks suspiciously like formula_1 . And indeed, it is! If we carry this out formula_2 times by finding the formula_2th derivative, we find that the multiple of the coefficient is formula_1 . So for some formula_31 , for any integer formula_53 , Or, with some simple manipulation, more usefully, where formula_56 and formula_57 and so on. With this, we can find any coefficient of the "infinite polynomial". Using the summation definition for our "polynomial" given earlier, we can substitute for formula_31 to get This is the definition of any Taylor series. But now that we have this series, how can we derive the definition for a given analytic function? We can do just as the definition specifies, and fill in all the necessary information. But we will also want to find a "specific" pattern, because sometimes we are left with a great many terms simplifying to 0. First, we have to find formula_61 . Because we are now deriving our own Taylor Series, we can choose anything we want for formula_9 , but note that not all functions will work. It would be useful to use a function that we can easily find the formula_2-th derivative for. A good example of this would be formula_64 . With formula_64 chosen, we can begin to find the derivatives. Before we begin, we should also note that formula_6 is essentially the "offset" of the function along the x-axis, because this is also essentially true for any polynomial. With that in mind, we can assume, in this particular case, that the offset is formula_67 and so formula_12. With that in mind, "0-th" derivative or the function itself would be If we plug that in to the definition of the first term in the series, again noting that formula_12 , we get where formula_72 . This means that the first term of the series is 0, because anything multiplied by 0 is 0. Take note that not all Taylor series start out with a 0 term. Next, to find the next term, we need to find the first derivative of the function. Remembering that the derivative of formula_64 is formula_74 we get that This means that our second term in the series is Next, we need to find the third term. We repeat this process. Because the derivative of formula_78 . We continue with The fourth term: Repeating this process we can get the sequence which simplifies to Because we are ultimately dealing with a series, the zero terms can be ignored, giving use the new sequence There is a pattern here, however it may be easier to see if we take the numerator and the denominator separately. The numerator: And for the formula_7 part of the terms, we have the sequence By this point, at least for the denominator and the formula_7 part, the pattern should be obvious. It is, for the denominator The formula_7 term: Finally, the numerator may not be as obvious, but it follows this pattern: With all of these things discovered, we can put them together to find the rule for the formula_2th term of the sequence: And so our Taylor (Maclaurin) series for formula_64 is List of Taylor series. Several important Taylor series expansions follow. All these expansions are also valid for complex arguments formula_7 . 
Exponential function and natural logarithm: Geometric series: Binomial series: Trigonometric functions: Hyperbolic functions: Lambert's W function: The numbers formula_115 appearing in the expansions of formula_116 and formula_117 are the Bernoulli numbers. The formula_118 in the binomial expansion are the binomial coefficients. The formula_119 in the expansion of formula_120 are Euler numbers. Multiple dimensions. The Taylor series may be generalized to functions of more than one variable with History. The Taylor series is named for mathematician Brook Taylor, who first published the power series formula in 1715. Constructing a Taylor Series. Several methods exist for the calculation of Taylor series of a large number of functions. One can attempt to use the Taylor series as-is and generalize the form of the coefficients, or one can use manipulations such as substitution, multiplication or division, addition or subtraction of standard Taylor series (such as those above) to construct the Taylor series of a function, by virtue of Taylor series being power series. In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. The use of computer algebra systems to calculate Taylor series is common, since it eliminates tedious substitution and manipulation. Example 1. Consider the function for which we want a Taylor series at 0. We have for the natural logarithm and for the cosine function We can simply substitute the second series into the first. Doing so gives Expanding by using multinomial coefficients gives the required Taylor series. Note that cosine and therefore formula_5 are even functions, meaning that formula_127, hence the coefficients of the odd powers formula_7, formula_129, formula_130, formula_131 and so on have to be zero and don't need to be calculated. The first few terms of the series are The general coefficient can be represented using Faà di Bruno's formula. However, this representation does not seem to be particularly illuminating and is therefore omitted here. Example 2. Suppose we want the Taylor series at 0 of the function We have for the exponential function and, as in the first example, Assume the power series is Then multiplication with the denominator and substitution of the series of the cosine yields Collecting the terms up to fourth order yields Comparing coefficients with the above series of the exponential function yields the desired Taylor series
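As a compact reference for the sections above: the general expansion about the point a, and, assuming the function worked through in the Derivation section is sin x (which matches the zero constant term and the odd-power pattern found there), the resulting Maclaurin series are:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^{n}

\sin x = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)!}\,x^{2n+1} = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots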
2,232
PHP Programming/Integration Methods (HTML Forms, etc.). Integrating PHP. There are quite a few ways in which PHP is used. Following are a few of the methods by which PHP can be called. Forms. Forms are, by far, the most common way of interacting with PHP. As we mentioned before, it is recommended that you have knowledge of HTML, and here is where we start using it. If you don't, just head to the HTML Wikibook for a refresher. Form Setup. To create a form and point it to a PHP document, the HTML tag <form> is used and an action is specified as follows:
<form method="post" action="action.php">
<!-- Your form here -->
</form>
Once the user clicks "Submit", the form body is sent to the PHP script for processing. All fields in the form are stored in the variables $_GET or $_POST, depending on the method used to submit the form. The difference between the GET and POST methods is that GET submits all the values in the URL, while POST submits them in the body of the HTTP request. A general rule of thumb is: if you are submitting sensitive data, use POST. POST forms usually provide somewhat more security, if only because the values do not appear in the URL. Remember that $_GET and $_POST are superglobal arrays, meaning you can reference them anywhere in a PHP script. For example, you don't need to call "global $_POST" or "global $_GET" to use them inside functions. Example. Let's look at an example of how you might do this.
<!-- Inside enterlogin.html -->
<html>
<head>
<title>Login</title>
</head>
<body>
<form method="post" action="login.php">
Please log in.<br/>
Username: <input name="username" type="text" /><br />
Password: <input name="password" type="password" /><br/>
<input name="submit" type="submit" />
</form>
</body>
</html>
This form would look like the following: And here's the script you'd use to process it (login.php):
<?php
// Inside login.php
if($_POST['username'] == "Spoom" && $_POST['password'] == "idontneednostinkingpassword")
    echo("Welcome, Spoom.");
else
    echo("You're not Spoom!");
?>
Let's take a look at that script a little closer.
if($_POST['username'] == "Spoom" && $_POST['password'] == "idontneednostinkingpassword")
As you can see, $_POST is an array, with keys matching the names of each field in the form. For backward compatibility, you can also refer to the fields numerically, but you generally shouldn't, as referring to them by name is much clearer. And that's basically it for how you use forms to submit data to PHP documents. It's that easy. For More Information. PHP Manual: Dealing with Forms PHP from the Command Line. Although PHP was originally created with the intent of being a web language, it can also be used for command-line scripting (although this is not common, because simpler tools such as bash scripting are available). Output. You can output to the console the same way you would output to a webpage, except that you have to remember that HTML isn't parsed (a surprisingly common error), and that you have to output newlines manually. The typical hello world program would look like this:
<?php
print "Hello World!\n";
Notice the newline character at the end of the line - newlines are useful for making your output look neat and readable. Input. PHP has a few special files for you to read from and write to on the command line. These include the stdin, stdout and stderr files. To access them, you open them as if they were actual files using fopen, with the exception that you open them with the special php:// "protocol", like this:
$fp = fopen("php://stdin","r");
To read from the console, you can just read from stdin.
No special redirection is needed to write to the console, but if you want to write to stderr, you can open it and write to it:
$fp = fopen("php://stderr","w");
Bearing in mind how to read input, we can now construct a simple command-line script that asks the user for a username and a password to authenticate them.
<?php
$fp = fopen("php://stdin","r");
print "Please authenticate yourself\n";
print "Username: ";
// rtrim to cut off the \n from the shell
$user = rtrim(fgets($fp, 1024));
print "Password: ";
// rtrim to cut off the \n from the shell
$pass = rtrim(fgets($fp, 1024));
if (($user=="someuser") && ($pass=="somepass")) {
    print "Good user\n";
    // ... do stuff ...
} else
    die("Bad user\n");
fclose($fp);
Note that this script just serves as an example of how to use PHP for command-line programming - the code contained here demonstrates a very poor way to authenticate a user or to store a password. Remember that your PHP scripts are readable by others!
1,329
Video Game Design/Introduction/What is a video game. What is a game? It's a good question, and a common question, one that you can spend a great deal of time arguing over. The definition, conscious or not, will influence how one decides to design a game. So it makes sense to begin there. There are multiple ways we could define a game, with varying ranges of inclusiveness. The essential problem with most definitions, however, is that they only work when one looks at certain types of games or players. First, let's state some things we can be certain of: The Competition definition holds that games are entirely about players competing against each other, to be the first or the best at something, or in the case of a solitaire game, to overcome the challenge presented to him through the gameplay. This is a reasonable definition for many games, and can be stretched even to very simple games such as Catch, where the only gameplay involved is tossing a ball or other object to the other players. The competition in Catch would be the players versus the ball and the environment; the player holding the ball must try to successfully throw it to the other player, and the other player must try to catch the ball without missing or dropping it. There is a deliberate attempt to make the game challenging; the players could hand the ball off to one another but they never choose to do so. But what about something even simpler and less rule-based than Catch? Could we call building blocks a game to play? We do say that one "plays" with them. For the Competition definition, we would have to come up with some sort of competition that the player has between himself and the blocks, or his mind. The player does have a "challenge" in building structures of his own devising, in coming up with architecture that won't collapse. But neither the blocks nor his mind are trying to hinder him in this goal. They are both tools, assisting him. The blocks have the property of being weighted and prone to gravity, and the mind is not going to assist him perfectly through every step of the process, but these are properties of the objects, not active attempts to foil him. Unlike in the case of Catch, where the players wish to be challenged, someone building blocks is interested in using his tools purely to a productive advantage. So if you consider building blocks to be a game, then the Competition definition fails. It should be noted, though that many designers consider them and derivative items(Legos, Simcity, etc.) to be toys rather than games. Games in History. The earliest known games are board games such as Go and Nine Men's Morris. Sports records show that athletic games existed in ancient times as well. Up until the industrial age, newly designed games were not well-known, if they existed. But a lot of the games and sports played today were invented or had modern rules drawn during the 19th century. Baseball, basketball and football (both the American and international games) are examples of sports that grew up in that period. While some evolution of the rules has taken place since then, they largely resemble the same games played today. New board games like the game of Goose also started appearing at this time. These changes may be most easily attributed to a combination of improved transportation, communication, and manufacturing; with a more mobile society, popular games could easily spread throughout the world, and games with specialized equipment could be built in larger numbers. 
The next 'big wave' came with the 1950s and 60s, with the newly developing American consumer society. In this period both flashier and more complex games started appearing. Of special note was the increasing complexity of war games that continued into the 70s, that eventually branched off into the role-playing game. Of course, during the same time frame video games were just being born. As far back as the 1950s the use of electronics, and especially computers, as a medium for entertainment had been considered or experimented with by academics and enthusiastic students. By the 1970s, they were ready to be mass marketed, first with Pong and variations thereof, then in a series of arcade games such as Canyon Bomber and Lunar Lander, and at about the same time with the Atari 2600 VCS(Video Computer System). These very early video games are notable because of their originality. They were made by and for an inexperienced gaming population, something very different from what is seen today. And most of them found success, being neither copycat imitators nor rushed, uninspired "genre" pieces. Today, the technology has advanced greatly. No longer just a few rules with simple graphics and sound effects, most modern video games attempt to be virtual reality experiences that engage their players by conveying settings and stories in a visceral manner, combining the techniques of the cinema with gameplay rules to make settings seem realistic. They are produced at great expense and risk; the amount of content needed to achieve an impact competitive with other games has grown astronomically large. Indeed, one of the more interesting developments in gaming has been the added significance of the 'storyline'. Back when the prime requirement of a video game was to reproduce the playability factor associated with the traditional board game, the inclusion of a background to the events taking place in the game was a manual filler at best. However, with the evolution of gaming technology bringing us ever closer to a fully convincing visual representation of an 'imagined reality', there is now a huge emphasis on the way in which the story unravels around the user's interactions. Any modern action or adventure game that wants to stand a chance out there now requires strong script writing, a solid plot with the occasional twist, and if they're really pushing the boat out, multiple endings/outcomes based on choices made by the user throughout the game. The makers of these games not only want to create a fully interactive cinematic experience, but also one that stands a good chance of winning an Oscar. Future games may break the boundaries of the screen and common input devices and use methods of input and output that we can hardly imagine today. The problem with today's game market is that because of the money involved most new games are similar to old tried and tested formulas. The sad fact is that originality is lost, the big players no longer want to risk their money on original ideas. The Gameplay Experience. Perhaps more important than what the game is, is what the player gets out of the game. That's what a "gameplay experience" is - it encompasses the whole range of player thoughts and feelings during play. Some games try to be more engaging than others, and they may come from different angles. 
But whether the game is just a component of a larger event (a party, for example) or an event in itself, the experience, rather than the production values or quantity of gameplay, is the measure of what makes the game effective. Overall the gaming experience is the most important thing, although the target at the end of the game can enhance the play. Take gambling for example, the thought of winning money at the end of the game heightens the playing experience. The gaming experience is all about immersion: to believe in a small way you are connected to the game world and can affect the events that are unraveling before you. Whether you are snowboarding down a mountain, playing god or shooting your way through mindless zombies, you are connected with the virtual world. It is your journey through that world that is as important, if not more so, than the end finale. The emotions you are put through in a game also play a big part in the gaming experience. If for example you are playing a game based on a film, you take the emotions conjured up during that film into the game, whether that be a feeling of fear or of invincibility. You therefore become even further immersed, possibly more so than someone who may not have seen the film. Social Games. Social games, used to pass time or as an excuse to drink or as a way of demonstrating status, among other things, are usually simple enough to grasp quickly, or can be played with little concentration over gossip. Challenging Games. Challenging games are focused on the presentation of increasingly difficult problems for the player(or players) to solve. Sports are examples of challenging games because they try to test the limits of human abilities; this is why professional leagues can be sold as entertainment for spectators. Other players help you solve these problems or you can solve on your own. This type of games help solve problems in real life, cooperation lessons and solving. Virtual Realities. Modern game design, especially video game design, has grown increasingly interested in conveying immersive worlds that players can become deeply involved in. Virtual reality is the most appropriate term for this path of design, which is concerned not so much with the storyline or actual realism as it is with making the setting and action as convincing as possible. Addiction. Despite the wonderful benefits games offer as diversions, spending too much time, to the neglect of normal life duties, and social development with direct contact with others can and often does happen. The line between addiction and a strong enjoyment of games is real, although the addict will rarely admit which side they are on. There are many resources to help the addict. What is a video game? A video game is a specific type of software that runs on hardware, a computer or video game console. That hardware platform requires at least some memory (that can be in several forms), some processing capacity and ways to interact with a display and some method with which a player can control the game. When you get right down to it, that is what a video game is. It's interactive media. The "player" presses, clicks, or types something and then the game will respond according to some established rules. The elements of communication, therefore, are vital. Video Games are interactive video art pieces. In simpler terms; a video game is just another way to have fun and express creativity. 
While definitions are nice and simple, in order to efficiently understand video game design, you really need to know the mechanics behind it all. We will not go into that in depth just yet, but realize that it requires programming, graphic design, sound design, music composition, and so much more. Since the development of the first video game in 1931, the video game industry has grown on a kind of exponential curve. There were a few bumps in the road, but the industry has come to the point where it is taking in over $7 billion dollars annually. Salaries for people in the video game industry range from $32k to $200K. And a single video game can sell from $10 to almost $100. Of all the things, it is not anything without a player, that is the participating audience of this interactive media. After playing. Games will train and educate people, providing new skills and knowledge they can use outside of the game. "An example is readily found in flight simulators, but even arcade racing games will train the player to manage more complex situations" They will leave the player in an emotional and intellectually changed state. This can be a profound change (anger, catharsis, even love) or a superficial one. The relevance of these is different for each game and must play a role in game-design. "In general designers will try to provide a catharsis at the end of each game, a happy ending." But more importantly the player will take away an updated concept of the game itself. This will not just be static knowledge, for example maps of the game or the pros and cons of combos. The player will also gain dynamic knowledge, going over the gameplay in his head, replaying rather than analyzing the game. This after game experience is more relevant to game-design as it creates a slower, more profound feedback loop into the game. "For example a puzzle game is changed when the gamer can pause or replay after a night's rest. Another example has a player planning his RPG character upgrades and coming back to the game anxious to obtain them." A last important issue is the relevance of the world to the game. People can come away from a game curious, and learn more about the real world that will then influence later gameplay. "For example a tactical tank simulation designed with real world combat in mind can be cracked by studying real world combat." Social contact will provide them with info that will also influence later gameplay. Players will visit walkthroughs or discuss tactics with friends. And don't forget that the player will predict these matters while playing. "For example, a hardcore gamer will not readily plunge into a children's pony-combing game if he has to tell his friends later on." Tying in the player after he stops playing can make or break a game. Ultimately you want the player to come back.
2,828
GIMP/Create a Metal Effect. This article describes how to create a brushed metal effect. Step 1. First, we fill the canvas using the hurl filter ("Filters > Noise > Hurl"). Set Randomization (%) to 100. This does the same thing as the Scatter RGB filter ("Filters > Noise > Scatter RGB") with all channels (Red, Green, and Blue) set to 1.0. However, Scatter RGB has slightly more options, and turning off Independent RGB in the settings dialogue may allow you to skip the desaturation step. Step 2. Then, apply a motion blur to the image ("Filters > Blur > Motion Blur...") with an angle of 0 and a length of about 50. Change the angle to 180 and repeat the effect. Why? Notice how the right edge of the final image isn't nice and brushed looking? Doing it in both directions will smooth this out. You may have to adjust the length setting so as not to "overblur". Alternatively, you can use the Gaussian Blur filter ("Filters > Blur > Gaussian Blur"). Just click the chain icon, turn the vertical value down, and adjust the horizontal value until you are satisfied. This approach will not leave one side of the image unblurred and may give an even more satisfying final result with proper adjustment. Step 3. This is the shortest step. Now we only need to desaturate the image. Go to "Colours > Desaturate" and press Lightness, although Luminosity may sometimes look better. Apply cropping / tiling / brightness / lighting filters to suit. Cropping may not be necessary if you've used the Gaussian Blur approach as opposed to the Motion Blur approach.
425
High School Mathematics Extensions/Set Theory and Infinite Processes/Solutions. Set Theory and Infinite Processes. These solutions were not written by the author of the rest of the book. They are simply the answers I thought were correct while doing the exercises. I hope these answers are useful for someone and that people will correct my work if I made some mistakes How big is infinity? exercises. 2. The number of square numbers is also equal to the number of natural numbers. They are both countably infinite and can be put in one to one correspondence. (S means square numbers and is not an official set like N) 3. The cardinality of even numbers less than 100 is not equal to the cardinality of natural numbers less than 100. You can simply write out both of them and count the numbers. Then you will see that cardinality of even numbers less than 100 is 49 and the cardinality of natural numbers less than 100 is 99. Thus the set of natural numbers less than 100 is bigger than the set of even numbers less than 100. The big difference between infinite and finite sets thus is that a finite set can not be put into one to one correspondence with any of its subsets, while an infinite set can be put into one to one correspondence with at least one of its subsets. 4. Each part of the sum is answered below Is the set of rational numbers bigger than N? exercises. 1. To change the matrix from Q' to Q the first step you need to take is to remove the multiple entries for the same number. You can do this by leaving an empty space in the table when gcd(topnr,bottomnr)≠1 because when the gcd isn't 1 the fraction can be simplified by dividing the top and bottom number by the gcd. This will leave you with the following table. Now we only need to add zero to the matrix and we're finished. So we add a vertical row for zero and only write the topmost element in it (0/1) (taking gcd doesn't work here because gcd(0,a)=a) This leaves us with the following table where we have to count all fractions in the diagonal rows to see that Q is countably infinite. 2. To show that formula_3 you have to make a table where you put one infinity in the horizontal row and one infinity in the vertical row. Now you can start counting the number of place in the table diagonally just like Q' was counted. This works because a table of size AxB contains A*B places.
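A compact way to state the counting argument from answer 2 without drawing the table is to give an explicit pairing function. The standard Cantor pairing function below maps every pair of natural numbers (a row index and a column index of the table) to a distinct natural number; it is just the diagonal counting described above written as a formula, and, assuming formula_3 stands for the claim that the product of two countable infinities is still countable, it is exactly what proves it:

\pi(i, j) = \frac{(i + j)(i + j + 1)}{2} + j, \qquad i, j \in \{0, 1, 2, \ldots\}

Since \pi is a bijection between pairs of natural numbers and the natural numbers themselves, a table with countably many rows and countably many columns has only countably many entries.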
572
Aros/User/FAQs. What is AROS all about? For the Wikipedia description look here For a detailed description look here What is the legal status of AROS? European law says that it is legal to apply reverse engineering techniques to gain interoperability. It also says that it is illegal to distribute the knowledge gained by such techniques. It basically means that you are allowed to disassemble or resource any software to write something which is compatible (for example, it would be legal to disassemble Word to write a program which converts Word documents into ASCII text). There are of course limitations: you are not allowed to disassemble the software if the information you would gain by this process can be obtained by other means. You must also not tell others what you learned. A book like "Windows inside" is therefore illegal or at least of dubious legality. Since we avoid disassembling techniques and instead use common available knowledge (which includes programming manuals) which don't fall below any NDA, the above doesn't apply directly to AROS. What counts here is the intention of the law: it is legal to write software which is compatible with some other software. Therefore we believe that AROS is protected by the law. Patents and header files are a different issue, though. We can use patented algorithms in Europe since European law doesn't allow patents on algorithms. However, code that uses such algorithms that are patented in the USA could not be imported to the USA. Examples of patented algorithms in AmigaOS include screen dragging and the specific way menus work. Therefore we avoid implementing these features in exactly the same way. Header files on the other hand must be compatible but as different as possible from the original. To avoid any trouble we applied for an official ok from Amiga Inc. They are quite positive about the effort but they are very uneasy about the legal implications. We suggest that you take that fact that Amiga Inc did not send us any cease and desist letters as a positive sign. Unfortunately, no legally sound agreement has been made yet, besides good intentions on both sides. Can't you implement feature XYZ? Well,: 1. If it was really important, it probably would be in the original OS. ;-) 2. Why don't you do it yourself and send a patch to us? The reason for this attitude is that there are plenty of people around who think that their feature is the most important and that AROS has no future if that feature is not built in right away. Our position is that AmigaOS, which AROS aims to implement, can do everything a modern OS should do. We see that there are areas where AmigaOS could be enhanced, but if we do that, who would write the rest of the OS? In the end, we would have lots of nice improvements to the original AmigaOS which would break most of the available software but worth nothing, because the rest of the OS would be missing. Therefore, we decided to block every attempt to implement major new features in the OS until it is more or less completed. We are getting quite close to that goal now, and there have been a couple of innovations implemented in AROS that aren't available in AmigaOS. How compatible is AROS with AmigaOS? Very compatible, it let's you run AROS (on your classic amiga hardware if you like) and let's you run both aros and most classic amiga programs, games, and demos, as long as they are compatible, see compatibility list here. binaries: We expect that AROS will run existing software on the 68k family Amiga hardware without problems. 
On other hardware, the existing software can be run through integrated Amiga emulation ("emumiga" or "Janus-UAE"), or the code must be recompiled (see below). sourcecode: Porting programs from AmigaOS to AROS is currently mostly a matter of a recompilation, with the occasional tweak here and there. There are of course programs for which this is not true, but it holds for most modern ones. We will offer a preprocessor which you can use on your code which will change any code that might break with AROS and/or warn you about such code. See Aros/Developer/Porting_software for more on porting software to AROS. Why are you only aiming for compatibility with 3.1? Well, to be honest this is a misconception. The base development is aimed at implementing the apis used in "atleast" AmigaOS v3.1. However along the way it has been necessary to revise the plans somewhat where amigaos3.1 didnt support the necessary functionality. The end result is a feature set somewhere in between OS3.1 and 3.9 however 3.1 is still used as the _base_ of our goals. There are other developments which do not fit into this framework since they don't involve either OS's apis - such as hidds and the relevant technologies to implement and support them. To this end developers are pretty free to improve on the underlying AmigaOS technologies so long as they can still support existing AmigaOS functionality and APIS with very little to no effort. What hardware architectures is AROS available for? Currently AROS is available in a quite usable state as native and hosted (under Windows, Linux, FreeBSD and NetBSD) for the i386 architecture (i.e. IBM PC AT compatible clones) and hosted (Linux and NetBSD) for the m68k architecture (e.g. the Amiga, Atari and Macintosh) and hosted on top of ARM Linux distributions. There are ports under way at varying degrees of completeness to SUN SPARC (hosted under Solaris) and Palm compatible handhelds (native). Will there be a port of AROS to PPC? There is a port on the Sam-440 board made by Michael Schulz and another port to EFIKA 5200B. Furthermore, a Linux-PPC hosted AROS exists already. Why are you using Linux and X11? We use Linux and X11 to speed up development. For example, if you implement a new function to open a window you can simply write that single function and don't have to write hundreds of other functions in layers.library, graphics.library, a slew of device drivers and the rest that that function might need to use. The goal for AROS is of course to be independent of Linux and X11 (but it would still be able to run on them if people really wanted to), and that is slowly becoming a reality with the native versions of AROS. We still need to use Linux for development though, since good development tools haven't been ported to AROS yet. Why does autorepeat stop working in X11 after running AROS? This is a long-standing bug in AROS. Run the following command after having quit AROS to enable autorepeat again: > xset r on How do you intend to make AROS portable? One of the major new features in AROS compared to AmigaOS is the HIDD (Hardware Independent Device Drivers) system, which will allow us to port AROS to different hardware quite easily. Basically, the core OS libraries do not hit the hardware directly but instead go through the HIDDs, which are coded using an object oriented system that makes it easy to replace HIDDs and reuse code. Why do you think AROS will make it? We hear all the day from a lot of people that AROS won't make it. 
Most of them either don't know what we are doing or they think the Amiga is already dead. After we explained what we do to the former, most agree that it is possible. The latter make more problems. Well, is Amiga dead right now? Those who are still using their Amigas will probably tell you that it isn't. Did your A500 or A4000 blow up when Commodore went bankrupt? Did it blow up when Amiga Technologies did? The fact is that there is quite little new software developed for the Amiga (although Aminet still chugs along quite nicely) and that hardware is also developed at a lower speed (but the most amazing gadgets seem appear right now). The Amiga community (which is still alive) seems to be sitting and waiting. And if someone releases a product which is a bit like the Amiga back in 1984, then that machine will boom again. And who knows, maybe you will get a CD along with the machine labeled "AROS". :-) What do I do if AROS won't compile? Please post a message with details (for example, the error messages you get) on the AROS User mailing list or become a developer and subscribe to the AROS Developer list and post it there, and someone will try to help you. Will AROS have memory protection, SVM, RT, ...? This is a heavily discussed subject, some people say yes others no - only time will tell... Can I become a beta tester? Sure, no problem. In fact, we want as many beta testers as possible, so everyone is welcome! We don't keep a list of beta testers though, so all you have to do is to download AROS, test whatever you want and send us a report. What is the relation between AROS and UAE? UAE is an Amiga emulator, and as such has somewhat different goals than AROS. UAE wants to be binary compatible even for games and hardware hitting code, while AROS wants to have native applications. Therefore AROS is much faster than UAE, but you can run more software under UAE. We are in loose contact with the author of UAE and there is a good chance that code for UAE will appear in AROS and vice versa. For example, the UAE developers are interested in the source for the OS because UAE could run some applications much faster if some or all OS functions could be replaced with native code. On the other hand, AROS could benefit from having an integrated Amiga emulation. Since most programs won't be available on AROS from the start, Fabio Alemagna has ported UAE to AROS so you can run old programs at least in an emulation box. What is the relation between AROS and Haage & Partner? Haage & Partner used parts of AROS in AmigaOS 3.5 and 3.9, for example the colorwheel and gradientslider gadgets and the SetENV command. This means that in a way, AROS has become part of the official AmigaOS. This does not imply that there is any formal relation between AROS and Haage & Partner. AROS is an open source project, and anyone can use our code in their own projects provided they follow the license. What is the relation between AROS and MorphOS? The relationship between AROS and MorphOS is basically the same as between AROS and Haage & Partner. MorphOS uses parts of AROS to speed up their development effort; under the terms of our license. As with Haage & Partner, this is good for both the teams, since the MorphOS team gets a boost to their development from AROS and AROS gets good improvements to our source code from the MorphOS team. There is no formal relation between AROS and MorphOS; this is simply how open source development works. What programming languages are available? 
Most development for AROS is done using ANSI C by crosscompiling the sources under a different OS, e.g. Linux, FreeBSD or NetBSD. see AROS Developer Docs for more Why is there no m68k emulator in AROS? To make old Amiga programs run on AROS, we have ported UAE to AROS. AROS' version of UAE will probably be a bit faster than other versions UAE since AROS needs less resources than other operating systems (which means UAE will get more CPU time), and we'll try to patch the kickstart ROM in UAE to call AROS functions which will give another small improvement. Of course, this only applies to the native flavors of AROS and not the hosted flavors. But why don't we simply implement a virtual m68k CPU to run software directly on AROS? Well, the problem here is that m68k software expects the data to be in big endian format while AROS also runs on little endian CPUs. The problem here is that the little endian routines in the AROS core would have to work with the big endian data in the emulation. Automatic conversion seems to be impossible (just an example: there is a field in a structure in the AmigaOS which sometimes contains one ULONG and sometimes two WORDs) because we cannot tell how a couple of bytes in RAM are encoded. What is Zune? In case you read on this site about Zune, it's simply an open-source reimplementation of MUI, which is a powerful (as in user- and developer-friendly) object-oriented shareware GUI toolkit and de-facto standard on AmigaOS. Zune is the preferred GUI toolkit to develop native AROS applications. As for the name itself, it means nothing, but sounds good. Zune is also a handheld MP3-player / multimedia hardware product from Microsoft, but is in no way related to AROS' Zune.
3,051
Aros/Developer/BuildSystem. Overview. AROS uses several custom development tools in its build system to aid developers by providing an easy means to generate custom makefiles for AmigaOS-like components. The most important ones are: MetaMake. Introduction. MetaMake is a special version of make which allows the build system to recursively build "targets" in the various directories of a project, or even another project. The name of the makefiles used is defined in the MetaMake config file and defaults to makefile for AROS – so we shall use this name to denote MetaMake makefiles from here on in. MetaMake searches directory trees for mmakefiles – and, for each one it finds, processes the metatargets. You can also specify a program which converts "source" mmakefiles (aptly named mmakefile.src) into proper mmakefiles before MetaMake is invoked on the created mmakefile. MetaTargets. MetaMake uses normal makefile syntax but gives a special meaning to comment lines that start with #MM. Such a line is used to define so-called metatargets. There exist three ways of defining a metatarget in a makefile: Real MetaTargets. #MM metatarget : metaprerequisites This defines a metatarget with its metaprerequisites: when a user asks to build this metatarget, first the metaprerequisites will be built as metatargets, and afterwards the given metatarget. This form also indicates that a normal makefile target with the same name is present in this makefile. #MM metatarget : prerequisites This form indicates that the make target on the next line is also a metatarget, but the prerequisites are not metaprerequisites. The line for the definition of a metatarget can be spread over several lines if one ends every line with the \ character and starts the next line with #MM. Virtual MetaTargets. #MM- metatarget : metaprerequisites This is the same definition as for Real MetaTargets – only now no "normal" make target with the same name as the metatarget is present in the makefile. How MetaMake works. MetaMake is run with a metatarget to be built specified on the command line. MetaMake will first build up a tree of all the mmakefiles present in a directory and all subdirectories (typically from the AROS source base directory) – and autogenerate them where applicable. While doing this it will process the mmakefiles and build a tree of all the defined metatargets and their dependencies. Next it will build all the dependencies (metaprerequisites) needed for the specified metatarget – and finally the metatarget itself. "Metaprerequisites are metatargets in their own right – and are processed in the same fashion, so that dependencies they have are also fulfilled." For each metatarget, a walk through all the directories is done – and in every mmakefile where Real MetaTargets are defined, make is called with the name of the target as a "make target". Exported variables. When MetaMake calls normal make, it also defines two variables... $(TOP) contains the value of the root directory. $(CURDIR) contains the path relative to $(TOP). Autogenerating mmakefiles. Another feature of MetaMake is the automatic generation of mmakefiles from source mmakefiles. When the directory tree is scanned for mmakefiles, ones with a .src suffix that are newer than any present mmakefile are processed using a specified script that regenerates the mmakefile from the source mmakefile. The script to call is defined in the configuration file. Examples. The next few examples are taken from the AROS project. Example 1: normal dependencies.
#MM contrib-regina-module : setup linklibs includes contrib-regina-includes
This example says that in this makefile a contrib-regina-module target is present that has to be built, but that before building this metatarget the metatargets setup, linklibs, ... have to be built first; i.e. the includes, linklibs etc. have to be present before this module can be built. Example 2: metatarget consisting of submetatargets.
#MM- contrib-freetype : contrib-freetype-linklib \
#MM contrib-freetype-graph \
#MM contrib-freetype-fonts \
#MM contrib-freetype-demos
This says that the contrib-freetype metatarget consists of building the linklib, graph, fonts and demos parts of freetype. If some extra work needs to be done in the makefile where this metatarget is defined, the definition can start with '#MM ' and a normal make target 'contrib-freetype' has to be present in the makefile. The use of line continuation in a metatarget definition is also shown here. Example 3: Quick building of a target.
#MM workbench-utilities : includes linklibs setup-clock-catalogs
#MM workbench-utilities-quick : workbench-utilities
When a user executes MetaMake with workbench-utilities as an argument, make will be called in all the directories where the metaprerequisites are present in a makefile. This can become quite annoying when debugging programs. When the second metatarget workbench-utilities-quick is defined as shown above, only the target in this directory will be built. Of course the user then has to be sure that the metatargets on which workbench-utilities depends are up to date. Usage and configuration files. Usage: codice_1 To build mmake, just compile codice_2 It doesn't need any other files. mmake looks for a config file codice_3 or .mmake.config in the current directory, for a file named in the environment variable codice_4, or for a file codice_5 in the directory codice_6. This file can contain the following things: Example. Here is an example:
# This is a comment
# Options before the first [name] are defaults. Use them for global
# defaults
defaultoption value
# Special options for the project name. You can build targets for this
# project with "mmake name.target"
[AROS]
# The root dir of the project. This can be accessed as $(TOP) in every
# makefile or when you have to specify a path in mmake. The default is
# the current directory
top /home/digulla/AROS
# This is the default name for Makefiles. The default is "Makefile"
defaultmakefilename makefile
# If you just say "mmake AROS", then mmake will go for this target
defaulttarget AROS
# mmake allows to generate makefiles with a script. The makefile
# will be regenerated if it doesn't exist, if the source file is
# newer or if the file specified with genmakefiledeps is newer.
# The name of the source file is generated by concatenating
# defaultmakefilename and ".src"
genmakefilescript gawk -f $(TOP)/scripts/genmf.gawk --assign "TOP=$(TOP)"
# If this file is newer than the makefile, the script
# genmakefilescript will be executed.
genmakefiledeps $(TOP)/scripts/genmf.gawk
# mmake will read this file and every variable in this file will
# be available everywhere where you can use a variable.
globalvarfile $(TOP)/config/host.cfg
# Some makefiles must have a different name than
# defaultmakefilename. You can add them manually here.
#add compiler/include/makefile
#add makefile
A metatarget looks like this: project.target. Example: AROS.setup. If nothing is specified, mmake will make the default target of the first project in the config file.
If the project is specified but no target, mmake will make the default target of this project. GenMF. Introduction. Genmf uses two files for generating a mmakefile. First is the macro definition file and finally the source mmakefile where these macro's can be used. * This syntax example assumes you have AROS' sources (either from SVN or downloaded from the homesite). Assuming 'genmf.py' is found in your $PATH and that $AROSDIR points to location of AROS' sources root (e.g. /home/projects/AROS or alike). [user@localhost]# genmf.py $AROSDIR/config/make.tmpl mmakefile.src mmakefile This creates a mmakefile from the mmakefile.src in the current directory. In general the % character is used as the special character for genmf source makefiles. After ./configure i run the make command and that halts with an error from within the genmf.py script that is cannot find some file. the files that are fed to the genmf.py script seem to be lines in the /tmp/genmfxxxx file. the problem is that the lines are not created right. so when the lines are fed to the genmf.py script it cannot handle it. Metamake creates tmpfiles: ./cache.c: strcpy(tmpname, "/tmp/genmfXXXXXX"); Metamake actually calls genmf.py to generate the genmf file. It is located in bin/$(arch)-$(cpu)/tools MetaMake uses time stamps to find out if a mmakefile has changed and needs to be reparsed. For mmakefiles with dynamic targets we would have to avoid that time stamp comparison. This is I think only the case if the metarules would change depending on an external config file without that the mmakefile itself changes. But this reminds me another feature I had in mind for mmake. I would make it possible to have real files as prerequisites of metatargets. This is to avoid that make is called unnecessary in directories. I would introduce a special character to indicate if a metatarget depends on a file, let's take @ and have the following rule This would indicate that for this mmakefile metatarget 'bar' only has to be build if file foo changes. So if mmake wants to build metatarget 'bar' if would only call make if file foo in the same directory as the mmakefile has changed. This feature would also be able to indicate if the metarules have to be rebuild, I would allocate the special __MM__ metatarget for it. By default always the implicit metarule would be there: But people could add config files is needed: Does MetaMake really do variable substitution? Yes, have a look in the var.c file. The generated mmakefile for Demos/Galaxy still has #MM- demo-galaxy : demo-galaxy-$(AROS_TARGET_CPU) and I think the substitution is done later by Gnu/Make. No, for gmake it is just a comment line; it does not know anything about mmake. And it also the opposite case; mmake does not know anything about gmake it just all the lines starting with #MM. So the next thing does not what you think it does in a gmake file: mmake will see both lines as just ignores the if statement ! It will complain if it does not know target. That is one of the main reasons I proposed the above feature. The main feature of mmake is that is allows for modular directory structure you can add or delete directories in the build tree and metamake will automatically update the metarules and the build itself to the new situation. For example it would allow to checkout only a few subdirectories of the ports directory if one wants to work on one of the programs there. Macro definition. 
A macro definition has the following syntax: %define macroname option1[=[default][\A][\M]] option2[=[default][\A][\M]] ... %end macroname is the name of the macro. option1, option2, ... are the arguments for the macro. These options can be used in the body of this template by typing %(option1). This will be replaced be the value of option1. The macro can be followed by a default value. If no default value is specified an empty string is taken. Normally no space are allowed in the default value of an argument. If this is needed this can be done by surrounding the value with double quotes ("). Also two switches can be given: \A Is the switch to always need a value for this. When the macro is instantiated always a value need to be assigned to this argument. \M Is the switch to turn on multi words. This means that all the words following this argument will be assigned to this argument. This also means that after the use of such an argument no other argument can be present because it will become part of this argument. Macro instantiation. The instantiation of the macro is done by using the '%' character followed by the name of the macro to instantiate (without a round brackets around it): %macro_name [option1=]value [option2=]value Two ways are possible to specify value for arguments to a macro: value This will assign the value to the argument defined as the first argument to this macro. The time this format is used it will be assigned to the second argument and so on. option1=value This will assign the given value to the option with the specified name. When giving values to arguments also double quotes need to be used if one wants to include spaces in the values of the arguments. Macro instantiation may be used inside the body of a macro, even macro's that will only be defined later on in the macro definition file. Examples FIXME (whole rules to be shown as well as action to be used in make rules) AROS Build-System usage. AROS Build-System configuration. Before the build-system can be invoked via make – you will need to run "./configure" to set up the environment for your chosen target platform i.e. ./configure --target=pc-i386 This causes the configure script to perform the following operations ... AROS MetaMake configuration file. [add the default settings for mmake] Default AROS MetaMake MetaTargets. AROS uses a set of base metatargets to perform all the steps needed to build the tools and components not only used to compile aros but also that make up aros itself AROS Build MetaMake MetaTargets. AROS.AROS AROS.contrib AROS.development AROS.bootiso [list standard metatargets used during the build process] Special AROS MetaMake MetaTargets. "************ denotes a Real MetaTarget" ************-setup ************-includes Default AROS mmakefile Variables. The following variables are defined for use in mmakefile's. 
//System related variables $(ARCH) $(AROS_HOST_ARCH) $(AROS_HOST_CPU) $(AROS_TARGET_ARCH) $(AROS_TARGET_CPU) $(AROS_TARGET_SUFFIX) / $(AROS_TARGET_VARIANT) //Arch specific variables $(AROS_TARGET_BOOTLOADER) //Directory related variables $(TOP) $(CURDIR) $(HOSTDIR) $(TOOLDIR) $(PORTSDIR) $(TARGETDIR) $(GENDIR) $(OBJDIR) $(BINDIR) $(EXEDIR) $(LIBDIR) $(OSGENDIR) $(KOBJSDIR) $(AROSDIR) $(AROS_C) $(AROS_CLASSES) $(AROS_DATATYPES) $(AROS_GADGETS) $(AROS_DEVS) $(AROS_FS) $(AROS_RESOURCES) $(AROS_DRIVERS) $(AROS_LIBS) $(AROS_LOCALE) $(AROS_CATALOGS) $(AROS_HELP) $(AROS_PREFS) $(AROS_ENVARC) $(AROS_S) $(AROS_SYSTEM) $(AROS_TOOLS) $(AROS_UTILITIES) $(CONTRIBDIR) AROS mmakefile.src High-Level Macros. Note : In the definition of the genmf rules sometimes mmake variables are used as default variables for an argument (e.g. dflags=%(cflags)). This is not really possible in the definition file but is done by using text that has the same effect. Building programs There are two macro's for building programs. One macro %build_progs that will compile every input file to a separate executable and one macro %build_prog that will compile and link all the input files into one executable. %build_progs. This macro will compile and link every input file to a separate executable and has the following definition: %define build_progs mmake=/A files=/A \ objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \ cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \ uselibs= usehostlibs= usestartup=yes detach=no With the following arguments: %build_prog. This macro will compile and link the input files to an executable and has the following definition: %define build_prog mmake=/A progname=/A files=%(progname) asmfiles= \ objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \ cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \ aflags=$(AFLAFS) uselibs= usehostlibs= usestartup=yes detach=no With the following arguments: mmake=/A This is the name of the metatarget that will build the program. progname=/A The name of the executable. files= The basenames of the C source files that will be compiled and linked into the executable. By default just the name of the executable is taken. asmfiles= The assembler files to assemble and include in the executable. By default no asm files are included in the executable. objdir=$(GENDIR)/$(CURDIR) The directory where the compiled object files will be put. targetdir=$(AROSDIR)/$(CURDIR) The directory where the executables will be placed. cflags=$(CFLAGS) The flags to add when compiling the .c files. By default the standard AROS cflags (the $(CFLAGS) make variable) are taken. This also means that some flags can be added by assigning these to the USER_CFLAGS and USER_INCLUDES make variables before using this macro. dflags=%(cflags) The flags to add when doing the dependency check. Default is the same as the cflags. aflags=$(AFLAGS) The flags to add when compiling the asm files. By default the standard AROS aflags (e.g. $(AFLAGS)) are taken. This also means that some flags can be added by assigning these to the SPECIAL_AFLAGS make variable before using this macro. ldflags=$(LDFLAGS) The flags to use when linking the executable. By default the standard AROS link flags will be used. uselibs= A list of static libraries to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the executable. 
usehostlibs= A list of static libraries of the host to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the executable. usestartup=yes Use the standard startup code for the executables. By default this is yes and this is also what one wants most of the time. Only disable this if you know what you are doing. detach=no Wether the executable will run detached. Defaults to no. %build_linklib. Building static linklibraries Building link libraries is straight forward. A list of files will be compiled or assembled and collected in a link library into a specified target directory. The definition of the macro is as follows: %define build_linklib mmake=/A libname=/A files="$(basename $(wildcard *.c)) \ asmfiles= cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) \ objdir=$(OBJDIR) libdir=$(LIBDIR) With the meaning of the arguments as follows: mmake=/A This is the name of the metatarget that will build the linklib. libname=/A The base name of the library to generate. The file that will be generated will be called lib%(libname).a files=$(basename $(wildcard *.c)) The C files to compile and include in the library. By default all the files ending in .c in the source directory will be used. asmfiles= The assembler files to assemble and include in the library. By default no asm files are included in the library. cflags=$(CFLAGS) The flags to use when compiling the .c files. By default the standard AROS cflags (e.g. $(CFLAGS)) are taken. This also means that some flags can be added by assigning these to the USER_CFLAGS and USER_INCLUDES make variables before using this macro. dflags=%(cflags) The flags to add when doing the dependency check. Default is the same as the cflags. aflags=$(AFLAGS) The flags to add when compiling the asm files. By default the standard AROS aflags (e.g. $(AFLAGS)) are taken. This also means that some flags can be added by assigning these to the SPECIAL_AFLAGS make variable before using this macro. objdir=$(OBJDIR) The directory where to generate all the intermediate files. The default value is $(OBJDIR) which in itself is by default equal to $(GENDIR)/$(CURDIR). libdir=$(LIBDIR) The directory to put the library in. By default the standard lib directory $(LIBDIR) will be used. %build_module. Building modules consists of two parts. First is a macro to use in mmakefile.src files. Another is a configuration file that describes the contents of the module. The mmakefile.src macro. This is the definition header of the build_module macro: %define build_module mmake=/A modname=/A modtype=/A \ conffile=%(modname).conf files="$(basename $(wildcard *.c))" \ cflags=$(CFLAGS) dflags=%(cflags) objdir=$(OBJDIR) \ linklibname=%(modname) uselibs= Here is a list of the arguments for this macro: mmake=/A This is the name of the metatarget that will build the module. Also a %(mmake)-quick and %(mmake)-clean metatarget will be defined. modname=/A This is the name of the module without the suffix. modtype=/A This is the type of the module and corresponds with the suffix of the module. At the moment only library, mcc, mui and mcp are supported. Support for other modules is planned in the future. conffile=%(modname).conf The name of the configuration file. Default is modname.conf. files="$(basename $(wildcard *.c))" A list of all the C source files without the .c suffix that contain the code for this module. 
By default all the .c files in the current directory will be taken. cflags=$(CFLAGS) The flags to use when compiling the .c files. By default the standard AROS cflags (e.g. $(CFLAGS)) are taken. This also means that some flags can be added by assigning these to the USER_CFLAGS and USER_INCLUDES make variables before using this macro. dflags=%(cflags) The flags to add when doing the dependency check. Default is the same as the cflags. objdir=$(OBJDIR) The directory where to generate all the intermediate files. The default value is $(OBJDIR) which in itself is by default equal to $(GENDIR)/$(CURDIR). linklibname=%(modname) The name to be used for the static link library that contains the library autoinit code and the stubs converting C stack calling convention to a call off the function from the library functable with the appropriate calling mechanism. These stubs are normally not needed when the AROS defines for module functions are not disabled. There will always be a file generated with the name $(LIBDIR)/lib%(linklibname).a .. and by default linklibname will be the same as modname. uselibs= A list of static libraries to add when linking the module. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the module. The module configuration file. The module configuration file is subdived in several sections. A section is defined with the following lines: ## begin sectionname ## end sectionname The interpretation of the lines between the ##begin and ##end statement is different for every section. The following sections are defined: * config The lines in this section have all the same format: optionname string with the string starting from the first non white space after optionname to the last non white space character on that line. A list of all the options available: basename Followed by the base name for this module. This will be used as a prefix for a lot of symbols. By default the modname specified in the makefile is taken with the first letter capitalized. libbase The name of the variable to the library base in. By default the basename will be taken with Base added to the end. libbasetype The type to use for the libbase for use internally for the library code. E.g. the sizeof operator applied to this type has to yield the real size of the object. Be aware that it may not be specified as a pointer. By default 'struct LibHeader' is taken. libbasetypeextern The type to use for the libbase for code using the library externally. By default 'struct Library' is taken. version The version to compile into the module. This has to be specified as major.minor. By default 0.0 will be used. date The date that this library was made. This has to have the format of DD.MM.YYYY. As a default 00.00.0000 is taken. libcall The argument passing mechanism used for the functions in this module. It can be either 'stack' or 'register'. By default 'stack' will be used. forcebase This will force the use of a certain base variable in the static link library for auto opening the module. Thus it is only valid for module that support auto opening. This option can be present more than once in the config section and then all these base will be in the link library. By default no base variable will be present in the link library. * cdef In this section all the C code has to be written that will declare all the type of the arguments of the function listed in the function. 
All valid C code is possible including the use of #include. * functionlist In this section all the functions externally accessible by programs. For stack based argument passing only a list of the functions has to be given. For register based argument passing the names of the register have to be given between rounded brackets. If you have function foo with the first argument in D0 and the second argument in A0 it gives the following line in in the list: foo(D0,A0) %build_module_macro. Building modules (the legacy way) Before the %build_module macro was developed already a lot of code was written. There a mixture of macro's was usedin the mmakefile and they were quite complicated. To clean up these mmakefiles without needing to rewrite too much of the code itself a second genmf macro was created to build modules that were written using the older methodology. This macro is called build_module_macro. For writing new modules people should consider this macro as deprecated and only use this macro when the %build_module doesn't support the module yet they want to create. The mmakefile.src macro. This is the definition header of the build_module_macro macro: %define build_module_macro mmake=/A modname=/A modtype=/A \ conffile=%(modname).conf initfile=%(modname)_init \ funcs= files= linklibfiles= cflags=$(CFLAGS) dflags=%(cflags) \ objdir=$(OBJDIR) linklibname=%(modname) uselibs= usehostlibs= \ genfunctable= genincludes= compiler=target Here is a list of the arguments for this macro: mmake=/A This is the name of the metatarget that will build the module. It will define that metatarget but won't include any metaprerequisites. If you need these you can add by yourself with an extra #MM metatargets : ... line. Also a %(mmake)-quick and %(mmake)-clean metatarget will be defined. modname=/A This is the name of the module without the suffix. modtype=/A This is the type of the module and corresponds with the suffix of the module. It can be one of the following : library gadget datatype handler device resource mui mcc hidd. conffile=%(modname).conf The name of the configuration file. Default is modname.conf. funcs= A list of all the source files with the .c suffix that contain the code for the function of a module. Only one function per C file is allowed and the function has to be defined using the AROS_LHA macro's. files= A list of all the extra files with the .c suffix that contain the extra code for this module. initfile=%(modname)_init The file with the init code of the module. cflags=$(CFLAGS) The flags to add when compiling the .c files. By default the standard AROS cflags (the $(CFLAGS) make variables are taken. This also means that some flags can be added by assigning these to the USER_CFLAGS and USER_INCLUDES make variables before using this macro. dflags=%(cflags) The flags to add when doing the dependency check. Default is the same as the cflags. objdir=$(OBJDIR) The directory where to generate all the intermediate files. The default value is $(OBJDIR) which in itself is by default equal to $(GENDIR)/$(CURDIR). linklibname=%(modname) The name to be used for the static link library that contains the library autoinit code and the stubs converting C stack calling convention to a call off the function from the library functable with the appropriate calling mechanism. These stubs are normally not needed when the AROS defines for module function are not disabled. There will always be a file generated with the name : $(LIBDIR)/lib%(linklibname).a ... 
and by default linklibname will be the same as modname. uselibs= A list of static libraries to add when linking the module. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the module. usehostlibs= A list of static libraries of the host to add when linking the module. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the module. genfunctable= Bool that has to have a value of yes or no or left empty. This indicates if the functable needs to be generated. If empty the functable will only be generated when funcs is not empty. genincludes= Bool that has to have a value of yes or no or left empty. This indicates if the includes needs to be generated. If empty the includes will only be generated for a library, a gadget or a device. compiler=target Indicates which compiler to use during compilation. Can be either target or host to use the target compiler or the host compiler. By default the target compiler is used. The module configuration file. For the build_module_macro two files are used. First is the module configuration file (modname.conf or lib.conf) and second is the headers.tmpl file. The modules config file is file with a number of lines with the following syntax: name <string> Init the various fields with reasonable defaults. If <string> is XXX, then this is the result: libname xxx basename Xxx libbase XxxBase libbasetype XxxBase libbasetypeptr XxxBase * Variables will only be changed if they have not yet been specified. libname <string> Set libname to <string>. This is the name of the library (i.e. you can open it with <string>.library). It will show up in the version string, too. basename <string> Set basename to <string>. The basename is used in the AROS-LHx macros in the location part (last parameter) and to specify defaults for libbase and libbasetype in case they have no value yet. If <string> is xXx, then libbase will become xXxBase and libbasetype will become xXxBase. libbase <string> Defines the name of the library base (i.e. SysBase, DOSBase, IconBase, etc.). If libbasetype is not set, then it is set to <string>, too. libbasetype <string> The type of libbase (with struct), i.e. struct ExecBase, struct DosLibrary, struct IconBase, etc.). libbasetypeptr <string> Type of a pointer to the libbase. (e.g. struct ExecBase *). version <version>.<revision> Specifies the version and revision of the library. 41.0103 means version 41 and revision 103. copyright <string> Copyright string. define <string> The define to use to protect the resulting file against double inclusion (i.e. #ifndef <string>...). The default is _LIBDEFS_H. type <string> What kind of library is this ? Valid values for <string> are: device, library, resource and hidd. option <string>... Specify an option. Valid values for <string> are: o noexpunge Once the lib/dev is loaded, it can't be removed from memory. Be careful with this option. o rom For ROM based libraries. Implies noexpunge and unique. o unique Generate unique names for all external symbols. o nolibheader We don't want to use the LibHeader prefixed functions in the function table. o hasrt This library has resource tracking. You can specify more than one option in a config file and more than one option per option line. Separate options by space. The header.tmpl file. 
Contrary to the %build_module macro for %build_module_macro the C header information is not included in the configuration file but an additional files is used with the name headers.tmpl. This file has different section where each of the sections will be copied in a certain include file that is generated when the module is build. A section has a structure as follows: ##begin sectionname ##end sectionname With sectionname one of the following choices: * defines * clib * proto %build_archspecific. Compiling arch and/or CPU specific files In the previous paragraph the method was explained how a module can be build with the AROS genmf macro's. Sometimes one wants to replace certain files in a module with an implementation only valid for a certain arch or a certain CPU. The macro definition Arch specific files are handled by the macro called %build_archspecific and it has the following header: %define build_archspecific mainmmake=/A maindir=/A arch=/A files= asmfiles= \ cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) compiler=target And the explanation of the argument to this macro: mainmmake=/A The mmake of the module from which one wants to replace files or to wich to add additional files. maindir=/A The directory where the object files of the main module are stored. The is only the path relative to $(GENDIR). Most of the time this is the directory where the source files of the module are stored. arch=/A The architecture for which these files needs to be build. It can have three different forms ARCH-CPU, ARCH or CPU. For example when linux-i386 is specified these files will only be build for the linux port on i386. With ppc it will be build for all ppc processors and with linux it will be build for all linux ports. files= The basenames of the C source files to replace add to the module. asmfiles= The basenames of the asm source files to replace or add to the module. cflags=$(CFLAGS) The flags to add when compiling the .c files. By default the standard AROS cflags (the $(CFLAGS) make variables are taken. This also means that some flags can be added by assigning these to the USER_CFLAGS and USER_INCLUDES make variables before using this macro. dflags=%(cflags) The flags to add when doing the dependency check. Default is the same as the cflags. aflags=$(AFLAGS) The flags to add when assembling the asm files. By default the standard AROS cflags (the $(AFLAGS) make variable) are taken. This also means that some flags can be added by assigning these to the SPECIAL_AFLAGS make variable before using this macro. compiler=target Indicates which compiler to use during compiling C source files. Can be either target or host to use the target compiler or the host compiler. By default the target compiler is used. %rule_archalias. Code shared by different ports A second macro called %rule_archalias allows to create a virtual architecture. And code for that virtual architecture is shared between several architectures. Most likely this is used for code that uses an API that is shared between several architecture but not all of them. The macro has the following header: With the following arguments Examples 1. This is an extract from the file config/linex/exec/mmakefile.src that replaces the main init.c file from exec with a linux specialized one: %build_archspecific \ mainmmake=kernel-exec maindir=rom/exec arch=linux \ files=init compiler=host 2. For the dos.library some arch specific files are grouped together in the unix arch. 
The following lines are present in the several mmakefiles to make this possible In config/linux/mmakefile.src: %rule_archalias mainmmake=kernel-dos arch=linux alias=unix In config/freebsd/mmakefile.src: %rule_archalias mainmmake=kernel-dos arch=freebsd alias=unix And finally in config/unix/dos/mmakefile.src: %build_archspecific \ mainmmake=kernel-dos maindir=rom/dos \ arch=unix \ files=boot \ compiler=host Libraries. A simple library that uses a custom suffix (.wxt), and returns TRUE in its init function, however the Open code never gets called – and openlibrary fails? (the init function does get called though..) With a conf file with no ##functionlist section I get the error: In readref: Could not open (null) Genmodule tries to read a ref file when no ##functionlist section is available. After adding a dummy function to the conf file it worked for me. Take care: haven't added any flags which avoids creating of header files and such. How to deal with library base pointers in plug-ins when you call library functions. use only one function -> called to make the "plugin" register all its hooks with wanderer. Iterate through the plugin directory, and for each file ending ".wxt", create an internal plugin structure in which i store the pointer to the libbase of the OpenLibrary'd plugin. After enumerating the plugins, iterate the list of plugin structs and call the single library function which causes them to all register with wanderer. had been using some of the struct library fields (lib_Node.ln_Name was the culprit). We should remove the dos.c, intuition.c, etc. files with hardcoded version numbers from autoinit and replace them with -ldos -lintuition inside gcc specs file. This would avoid starting programs on older versions of libraries. If an older version suffice some __xxx_version global can be defined in the program code to enable this. We could also provide based on the info you described below exec_v33 exec_v45 link libraries that would also make sure no function of a newer version is used. A very clean solution to get the desired effect. -noarosc mentions checking the spec file to find out about it but there is nothing in the specs file related. This was added to disabled automatic linking of arosc to all libraries. It was used in the build_library macro – check V0. Automatic linking of arosc.library which had per task context to other libraries which had global context was a very bad thing. "C standard library" objects belonging to global context library were allocated on opening task context. When the task exited and global context library not, global context library was using "freed" memory. A note to any of you wanting to upgrade to Ubuntu 12.10, or any distribution that uses gcc 4.7. There is an issue (bug? misfeature?) in gcc 4.7 where the '-specs /path/to/spec/override' is processed *after* gcc checks that it has been passed valid arguments. This causes gcc to fail with the error: "gcc-4.7: error: unrecognized command line option "-noarosc"" when it is used to link programs for the x86 and x86_64 targets if you are using the native host's compiler (for example, when compiling for linux-x86_64 hosted). Please use gcc-4.6 ("export CC=gcc-4.6") for hosted builds until further notice (still valid as of March 2013). Per task. There are other things for which arosc.library needs to be per task based: autoclosing of open files and autofreeing of malloced memory when a programs exits; a per task errno and environ variable that can be changed by calling library functions. 
regina.library does also do that by linking with arosc_rel. It needs some more documentation to make it usable by other people. You can grep aroscbase inside the regina source code to see where it is used. regina.library and arosc.library are per task libraries. Each time regina.library is opened it also opens arosc.library and it then gets the same libbase as the program that uses regina.library. By linking with arosc_rel and defining aroscbase_offset arosc.library functions called from regina.library will be called with the arosc libbase stored in it's own libbase, and the latter is different for each task that has opened regina.library. The AROS_IMPORT_ASM_SYM of aroscbase in the startup section of regina.conf assures that the arosc.library init functions are called even if the programs that uses regina.library does not use an arosc.library function itself and normally would not auto-open it. Problem is that both bz2 and z library use stdio functions. The arosc.library uses the POSIX file descriptors which are of type int to refer to files. The same file descriptor will point to different files in different tasks. That's why arosc.library is a pertask library. FILE * pointer internally have a file descriptor stored that then links to the file. Now bz2 and z are using also stdio functions and thus also they need a different view for the file descriptors depending in which program the functions are called from. That's why bz2 and z become also pertask libraries. it breaks POSIX compatibility to use a type other than int for file descriptors. Would a better solution be to assign a globally unique int to each file descriptor, and thus avoid the need to make arosc.library a per-task library? far simpler solution – all DOS FileHandles and FileLocks are allocated from MEMF_31BIT. Then, we can be assured that their BPTRs fit into an int. you will most likely kill the 64-bit darwin hosted target. AFAIR it has 0 (zero) bytes of memf_31bit memory available. Must modules which are using pertask libraries be implemented itself as pertask library? Is it a bug or feature that I get now the error about missing symbolsets handling. You will now see more verbose errors for missing symbol sets, for example: By linking with jpeg and arosc, instead of jpeg_rel and arosc_rel, it was pulling in the PROGRAM_ENTRIES symbolset for arosc initialization. Since jpeg.datatype is a library, not a program, the PROGRAM_ENTRIES was not being called, and some expected initialization was therefore missing. It is the ctype changes that is causing the problem. This code now uses ADD2INIT macro to add something to initialization of the library. As you don't handle these init set in your code it gives an error. You can for now use -larosc.static -larosc or implement init set handling yourself. The move to ctype handling is that in the future we may want to have locale handling in the C library so toupper/tolower may be different for different locales. This was not possible with the ctype stuff in the link lib. Ideally in the source code sqlite3-aros.c whould be replaced with sqlite3.conf and genmodule would be called from makefile-new Use %build_module, and add additional initialization with the ADD2*() family of macros. If you insist on %build_module_simple, you will need to link explicitly with libautoinit. To handle per-task stuff manually: You can get the task-specific data in one of your routines, using GetTaskStorageSlot(). if you're not using the stackcall API, that's the general gist of it. 
would recommend that you use the static libraries until the pertask/peropener features have stabilized a bit more. You can always go back to dynamic linking to pertask/peropen libs later. You should be able to use arosc.library without needing to be pertask. Things gets more complicated if code in library uses file handles, malloc, errno or similar things. Is the PROGRAM_ENTRIES symbolset correct for arosc initialization then, or should it be in the INIT set? If so move the arosc_startup.c to the INIT set. Think about datatypes. Zune (muimaster.library) caches datatype objects. Task A may be the one triggering NewDtObject(). Task B may be the one triggering DisposeDTObject(). NewDTObject() does OpenLibrary of say png.datatype. DisposeDTObjec() does CloseLibrary of say png.datatype. If png.datatype usees some pertask z.library that's a problem, isn't it? As png.datatype is not per opener and is linked with arosc there should only be a problem when png.datatype is expunged from memory not when opened or closed. It will also use the arosc.library context from the task that calls the Init function vector of png.datatype and it will only be closed when the Expunge vector is called. relbase. stackcall/peropener arosc_rel.a is meant to be used from shared libraries not from normal programs. Auto-opening of it is also not finished, manual work is needed ATM. z_au, png_au, bz2_au, jpeg_au, and expat_au now use the relbase subsytem. The manual init-aros.c stub is no longer needed. Currently, to use relative libraries in your module, you must: can't find a valid way to implement peropener libraries with 'stack' functions without a real ELF dynamic linker (ie ld-linux.so). The inherent problem is determining the where the 'current' library base is when a stack function is called. Example 1 – Other libraries doing weird things behind your back Ok, to fix that issue, let's suppose we use a stub wrapper that sets the taskslot to the (global) library base. This was no problem with old implementation. StackBase was passed in the scratch register StackFunc() each time it was called. This base was then used. Example 2 – Local vs Global bases Hmm. Ok, that behavior is going to be a little weird to explain to developers. I don't see the need to support local bases. Example 3 – Callback handlers Function pointers to functions in a peropener library may be a problem but is it needed ? All in all, until we have either or may sometimes want to do malloc in a library that is not freed when the Task that happened to call function is exiting. Let's say picture caching library that uses ported code which internally uses malloc. If you have a pertask library the malloc will allocate memory on the Task that currently is calling the library and this memory will disappear when this task quits (should do free() prior exit). Before your change the caching library could just link with libarosstdc.a (and not libarosstdc_rel.a) and it worked. idea could be to either link(*) or unlink(**) the malloc to a given task, depending on from where it is called (within library or not). No, the whole point is to have the malloced memory _not_ to be attached to a task so a cached image can be used from different tasks even if the first task already died. static. call the static link library for the static version for the few shared libraries that need it different, like libz_static.a Ideally all code should just work with the shared library version. 
usestartup=no and the '-noarosc' LDFLAG both imply arosc.static (it doesn't hurt to link it, and if you really want arosc.library, that will preempt arosc.static) Again this will make -lz not link with the shared library stubs. IMO uselibs=z should use shared library by default. 'uselibs="utility jpeg.datatype z.static arossupport"' method. if there's a dynamic version of a library, it should always be used: static linking of libraries should be discouraged for all the usual reasons, e.g. the danger of embedding old bugs (not just security holes), bloat etc. Don't see the need for a -static option (or any other way to choose between static and dynamic libraries). Makedepend. AROS build system generates for each .c file a .d file where the includes are listed. The .c is recompiled when any of the includes changes. Remember that AROS is an OS in development so we often do/did changes to the core header files. If this makedepend was not done programs would not be rebuilt if changes are made to AROS libraries or other core code. OK, so it's basically creating the dependencies of the .o mmakefile. We do get an error from it, so something is in fact going wrong. But what is? Probably a hacky mmakefile so that include file is not found during makedepend but is found during compilation or maybe a wrong dependency so it is not guaranteed that the include file is there during makedepend. And I do think it would be better if the build would stop when such an error occurs. configuration files. We are talking about configuration files for modules like this: rom/graphics/graphics.conf. I have been thinking about similar things, but first I would like to convert our proprietary .conf format to xml. Manually writing file parsings is so passe :) Uhh.. I have no objection to using a 'standard' parser, but I have to vote no on XML *in specific*. JSON or YAML (summaries of both are on Wikipedia) are available would be better choices, since they are much more human readable, but semantically equivalent to XML. I agree that xml is not the easiest format to edit in a text editor and is quite bloated. From the other side it has ubiquitous in scripting and programming language and in text editors and IDEs. I also like that the validity of a xml file can be checked through a schema file and that it also can be a guide for the editor. There are also tools to easily convert xml files based on this schema etc. It does not matter what format it is in but it should take as much coding away from the (genmodule) programmer. Another improvement over XML could be the inclusion of literal code. Currently some literal code snippets are included in .conf file and in XML they would need some character encoding. How is this for JSON or YAML ? YAML supports UniCode internally. I don't know how well that could be ported to AROS though since it seems AROS doesn't have UniCode support yet. JSON is based on JavaScript notation and YAML 1.2 can import JSON files as it implemented itself as a complete super-set of JSON. YAML's only 1.2 implementation is in C++ using CMake as a build script creator. If we use the C implementation of libYaml, it's only YAML 1.1 compliant and loses the backward compatibility to JSON. Any data languages can be checked against a scheme; it's mostly a matter of writing out the schemes to check against. You can but my questions if the tools exists. 
From the second link you provided: "There are a couple of downsides to YAML: there are not a lot of tools available for it and it’s also not very easy to validate (I am not aware of anything similar to a DTD or a schema)". I find validation/syntax checking as important as human readability. Syntax checking is in the parsing in all four cases. The validation the XML can do is whether it conforms to the parsing and whether it conforms to a specific scheme. YAML and JSON are specifically intended for structured data, en I guess my example is too, so the equivalent XML scheme would check whether the content was correctly structured for structured data. The other three don't need that as anything they parse is by definition structured data. All four have the same solution: They are all essentially tree builders, and you can walk the tree to see if each node conforms to your content scheme. The object is to use a defined schema/DTD for the files that are describing a library. Text editors that understand schemas can then let you only add fields that are valid by the schema. So this schema let everyone validate a XML if it is a valid XML library description file; they can use standard tools for that. AFAICS JSON and YAML parsers only validate if the input file is a valid JSON/YAML file, not that it is a valid JSON/YAML library description file. AFAICS no such tools exist for these file formats. ETask Task Storage. __GM_* functions Genmodule no longer has to have internal understanding of where the TaskStorage resides. All of that knowledge is now in exec.library and the arch/*-all/include/aros/cpu.h headers. Location of the TaskStorage slots It was important to me that the address of the ETask does not change. For example, it would be pretty bad if code like this broke: Also, I wanted to minimize the number of places that need to be modified if the TaskStorage location needed to be moved (again). et_TaskStorage is automatically resized by Exec/SetTaskStorageSlot() as needed, and a new ETask's et_TaskStorage is cloned from its parent, if the parent was also an ETask with et_TaskStorage. What I wanted to say here is that some overhead may be acceptable for SetTaskStorageSlot() if properly documented. E.g. to not call the function in time critical paths. You clone Parent TaskStorage when creating a subtask as before. This may be acceptable if it is documented that a slot allocated in the parent may not be valid in child if it is allocated in parent after the child has been created. For other use cases think it acceptable to have to be sure that in a Task first a SetStorageSlot() has to be done before getting the value. Auto generation of oop.library. updated genmodule to be capable of generating interface headers from the foo.conf of a root class, and have tested it by updating graphics.hidd to use the autogenerated headers. Hopefully this will encourage more people to use the oop.library subsystem, by making it easier to create the necessary headers and stubs for an oop.library class interface. Note that this is still *completely optional*, but is encouraged. Plans to extend this to generating Objective C interfaces in the future, as well as autoinit and relbase functionality. This allows a class interface to be defined, and will create a header file in $(AROS_INCLUDES)/interface/My_Foo.h, where 'My_Foo' is the interface's "interfacename". In the future, this could be extended to generate C++ pure virtual class headers, or Objective C protocol headers. 
The header comes complete with aMy_Foo_* attribute enums, pMy_Foo_* messages, moMy_Foo method offsets, and the full assortment of interface stubs. To define a class interface, add to the .conf file of your base class: Documentation. It would be nice if we could just upload the diff (maybe as a zip file) and then the patching is done automatically. If you have a local copy of the whole website, you can update only the file(s) that have changed with an rsync-type script (maybe rsync itself works for the purpose). Misc. In the AROS build system, arch/common is for drivers where it is difficult to say to which CPU and/or arch they belong: for example a graphics driver using the PCI API could run inside hosted Linux as well as on PPC native. Then it's arch-independent code and it should in fact be outside of arch. Currently they are in workbench/devs/drivers. Can be discussed, but it looks like it's just a matter of being used to a particular location. At least no one changed this. Even if it's not specific to a particular platform, the code in arch/common is hardware dependent, whereas the code in rom/ and workbench/ is supposed to be non-hardware-specific. This has been discussed before when you moved other components (e.g. ata.device) from arch/common to rom/devs. IIRC you accepted that that move was inappropriate in retrospect (but didn't undo it). Having said that, arch/all-pc might be a good place for components shared between i386-pc and x86_64-pc such as the timer HIDD. On further inspection it seems that most drivers are already in workbench/hidds. Introduction. The AROS build system is based around the GNU toolchain. This means we use gcc as our compiler, and the build system needs a POSIX environment to run. Currently AROS has been successfully built using the following environments: Of the two Windows environments, MinGW is the preferred one, because of significantly faster (compared to Cygwin) operation. There is, however, a known problem: if you want to build a native port, GRUB2 can't be built. Its own build system is currently incompatible with MinGW and will fail. You can work around this by using the --with-bootloader=none argument when configuring AROS. This will disable building the primary bootloader. You can perfectly live with that if you already have GRUB installed. Running on a host whose binary format is different from ELF (i.e. Darwin and Windows) requires you to use a native AROS-targeted crosstoolchain. It can be built together with AROS; however, using a standalone preinstalled toolchain significantly shortens the build time and saves drive space. A pretty good set of prebuilt toolchains for Darwin and Windows can be obtained from AROS Archives. Cross-compiling a hosted version of AROS additionally requires a second crosstoolchain, targeted at what will be your host. For example, if you're building Windows-hosted AROS under Linux, you'll need a Windows-targeted crosstoolchain. Because of this, building a hosted version is best done on the same system it will run on. In the past, configure found e.g. i386-elf-gcc etc. on the path during a cross-compile without passing special options. I'd like to retain that capability. That *should* still work if you pass in --disable-crosstools. Remember, --enable-crosstools is the default now – and it would be silly to use the external crosstools if AROS is just going to build its own anyway. For the kernel tools though, yes, I definitely agree.
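As a hedged illustration of the two modes discussed above (the target name here is only an example; substitute your own platform), configuring a build that uses the self-built crosstools versus an externally installed toolchain found on the path might look like:
./configure --target=pc-i386                       # default: crosstools are built and used automatically
./configure --target=pc-i386 --disable-crosstools  # use an already-installed external toolchain instead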
Let me know if you have a system where the kernel tool type isn't detected properly. Making use of threaded builds (make -j X)? If not it might be worth using. Please don't; vps is a virtual machine also running some web sites. I don't want to fully starve the rest that is running on that machine. I appreciate what you are saying – but without info on the virtualised hardware I can't really comment. How many "cores" does the VM have? If it has >2, I don't see why adding an additional thread (make -j 2) should cause any noticeable difference to the web services it also hosts. 26 February 2012: configure has been restructured to generate three sets of *_*_{cc,as,objdump...} definitions.
If we are building crosstools:
orig_target_* - AROS-built toolchain (in bin/{host}/tools/crosstools/...)
aros_kernel_* - External toolchain, if --with-kernel-tool-prefix is given, or if the architecture configures it as such (i.e. hosted archs). Otherwise, it points to the orig_target_* tools
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*)
If we are *not* building crosstools (--disable-crosstools, or --with-crosstools=...):
aros_kernel_* - External toolchain (required, and configure should be checking for it!)
orig_target_* - Points to aros_kernel_*
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*)
Modified collect-aros to mark ABIv1 ELF files with EI_OSABI of 15 (AROS) instead of 0 (generic Unix). For now, I'm going to hold off on the change to refuse to load ABIv0 files (with EI_OSABI of 0) until I can get some more testing done (since dos/internalloadseg_elf.c is reused in a few places). A separate change to have ABIv0 refuse to load ABIv1 applications will need to be made. The patch to have ABIv1 refuse to load ABIv0 applications will come in the near future. Custom tools. "../$srcdir/configure" --target=linux-i386 --enable-debug=all --with-portssources="$curdir/$portsdir" Always use the 'tools/crosstools' compiler to build contrib/gnu/gcc. AFAIK this was the previous solution, using a TARGET_CC override in the mmakefile... the host toolchain should only be used for compiling the tools (genmodule, elf2hunk, etc), and for the bootstrap (i.e. AROSBootstrap on linux-hosted and grub2 for pc-*). To be exact, the 'kernel' compiler is used for compiling GRUB2, and probably AROSBootstrap too. This is important when cross-compiling. How about we invert the sense of --enable-crosstools? We make it '--disable-crosstools', and crosstools=yes is on by default? That way we can support new arch bringup (if we don't have working crosstools yet), but 'most people' won't have to deal with the issues of, say, having C compiled with (host) gcc 4.6.1, but C++ compiled with (crosstools) gcc 4.2.
add-symbol-file boot/aros-bsp-linux 0xf7b14000
add-symbol-file boot/aros-base 0xf7b6a910
There's a "loadkick" gdb command now which does this automatically. Btw, don't use add-symbol-file. Use "loadseg <address>". You need to run this as you have a stale config: $ ./config.status --recheck && ./config.status
In the end I would like to get rid of the mmakefile parsing by mmake. What I would like to put in place is that mmake calls the command 'make -f mmakefile __MM__' and parses the output of that command. The mmakefile would then be full of statements like the hypothetical sketch below; these could be generated by genmf macros or gmake functions.
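The exact form of those generated statements was never pinned down; purely as a hypothetical sketch of the idea (the rule below is invented for illustration, it is not an existing MetaMake convention), a mmakefile could expose its metatarget definitions through such an __MM__ target:
__MM__:
	@echo "workbench-utilities : includes linklibs setup-clock-catalogs"
mmake would then run 'make -f mmakefile __MM__' and parse the echoed lines instead of scanning for #MM comments.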
I think this approach would give some advantages: Would like to express the following 'build all libraries I depend on' concept in Metamake: At the moment it not possible as mmake is a static scanner and does not support loops or function like $(addsuffix ...). Look in the AROS dev maillist for a thread called 'mmake RFC' (in Aug 2010) describing my idea. If you look at the svn log of tools/MetaMake there is r34165 'Started to write a function which calls the _MM_ target in a mmakefile. ...' Can see this breaking because it wont know which "parent" metatarget(s) to invoke to build the prerequisites based on the object files / binaries alone, unless you add a dependancy on the (relevant) metatarget for every binary produced. i.e it would be like doing "make <prerequisites-metatarget>-quick" for the prerequisite. Yes, each module target would get an extra linklib-modulename target. (not linklib-kernel-dos, just linklib-dos, for example). mmake at the moment only knows about metatargets and metadependencies. It does not handle real files or knows when something is old or new. Therefore it always has to try all metadependencies and make will find out if it is up to date or needs to be rebuilt. This can be changed to also let mmake dependencies on real files (e.g. the .c files for a shared library); remember when something was last build and check if files have changed. But this won't be a small change. IS there some way we can pass info about the file types in the "files=" parameter, so that the macros can automatically pass the files to the necessary utility macros? IMO uselibs= should only be needed when non-standard libraries are used. In my ABI V1 I even made a patch to remove all standard libs from the uselibs= statement. I do plan to submit this again sometime in the future. And there should not be a need to add these libs to uselibs=. linklibs that are standardly linked should be build by the linklibs-core metatarget. %build_module takes care of the linklinbs-core dependency. Currently a lot of linklibs are not dependent of this metatarger because a lot of the standard libs autoopened by libautoinit.a. TBH, I also find it a bit weird. Standard libraries don't need -lXXX, because they "link" via proto files, right? They are (currently) only used for the linklibs-<foo> dependency autogeneration. Was under the impression you wanted to move all the per-library autoinit code back to the specific libraries? Yes to avoid the current mismatch between versions in libautoinit and libxxx.a. for %build_prog and some others so it might seem logical to add another cppfiles But then we might need to add dfiles, modfiles or pfiles for the D language, Modula-2 and Pascal as well in the future so your idea about adding it all to the files parameter in one way or another seems to be more future proof to me. Personally, I'd prefer to let make.tmpl figure it all out from the extensions, even though it'd be a large changeset to fix all the FILES=lines. By the way: what are the 'standard libraries'? That is to be discussed. I would include almost all libs in our workbench/libs and rom/ directories unless there is a good reason not to use it as a standard linklib. mesa will always require -lGL to be passed because AROSMesaGetProcAddress is only present in linklib. Also nobody will write code #include <proto/mesa.h>. All code will have #include <GL/gl.h> working on minimal-version autoopening, to enhance binary compatibility with m68k and PPC AOS flavors. 
To be clear I like the feature you are implementing, I don't like it that programmers have to specify a long list of libs to uselibs= all the time. Does this give the programmer a way to specify that he'll need more than the minimum for a function? For example, one aspect of a function may have been buggy/unimplemented in the first version. If that aspect is used, a version is needed that supports it properly. Yes, in the library.conf file, you would use: foo.conf Then, if you use FooSet(), you'll get version 34 of the library, but if your code never calls FooSet(), you'll only OpenLibrary() version 33. OpenLibrary requiring version 34 in one case and 37 in the other, depending on whether I needed that specific NULL-handling aspect of FooSet(). How will this work with otherwise automatic determination of minimum versions? Uh... You'll have the handle library loading yourself, then: Syntax of the makefile. Where do I need to make the changes to add 'contrib' to the amiga-m68k build process? You need to study scripts in /AROS/scripts/nightly/pkg and get some knowledge from them. Neil can probably give you better explanation. Contrib
Aros/Developer/Zune. Introduction. Zune is an object-oriented GUI toolkit. It is nearly a clone (at both API and Look&Feel level) of MUI, a well-known Amiga shareware product by Stefan Stuntz. Therefore, MUI developers will feel at home here, while others will discover the concepts and qualities that Zune shares with MUI. The programmer has a much easier time to design its GUI: no need for hardcoded values, Zune is font-sensitive, and adapts to any window size due to its layout system. He/she mostly needs to only specify the semantic of its GUI to Zune, which will arrange the low-level details automatically. Zune is based on the BOOPSI system, the framework inherited from AmigaOS (TM) for object-oriented programming in C. Zune classes does not derive from existing BOOPSI gadget classes; instead, the Notify class (base class of the Zune hierarchy) derives from the BOOPSI root class. Creating Prefs from scratch with using our Zune Prefs classes. A good introduction to the BOOPSI system is the Chapter 12, "BOOPSI - Object-oriented Intuition" on-line here. Here BOOPSI NewObject() has been replaced by MUI_NewObject(), DisposeObject() becomes MUI_DisposeObject(), etc. Prerequisites. Some knowledge of OOP is more than welcome. Knowing AROS APIs and concepts like taglists and BOOPSI is essential. As Zune is a MUI clone, all the documentation pertaining to MUI is applicable to Zune. In particular, the latest available MUI developer kit and MUI autodocs can be found at here. In this LHA archive, 2 documents are warmly recommended: MUIdev.guide, the MUI programmer documentation and PSI.c source code, demonstrating Zune practices like OOP and dynamic object creation Zune basics and conventions. MUI (= Zune) sticks a prefix on the start to signify what is being access/changed: BOOPSI Primer/Concepts. Class. A class is defined by its name, its parent class and a dispatcher. name: either a string for the public classes, so that they may be used by any program in the system, or none if its a private class used only by a single application. parent class: all BOOPSI classes are forming a hierarchy rooted at the class aptly named rootclass. It allows each subclass to implement its own version of specific parent operation, or to fall back on the one provided by its parent. Also known as base class or super class. dispatcher: it gives access to all operations (called methods) provided by this class, ensuring that each operation is handled by the proper code or passed to its super class. BOOPSI type for a class is Class * also known as IClass. Object. An object is an instance of class: each object has its specific data, but all objects of the same class share the same behavior. An object has several classes if we count the parents of its true class (the most derived one) up to the rootclass. BOOPSI type for an object is Object *. It has no field you can directly access. When a Zune object is created and destroyed several methods are called: If the documentation says that something is only valid between setup and cleanup it means that the first place where you can use it is the setup method and you aren't allowed to use it after cleanup. Attribute. An attribute is related to the instance data of each object: you can not access these data directly, you can only set or get the attributes provided by an object to modify its internal state. An attribute is implemented as a Tag (ULONG value or'ed with TAG_USER). GetAttr() and SetAttrs() are used to modify an object's attributes. 
Attributes can be one or more of the following: Initialization-settable (I): the attribute can be given as a parameter at object creation. Settable (S): you can set this attribute at any time (or at least, not only at creation). Gettable (G): you can get the value of this attribute. Method. A BOOPSI method is a function which receives as parameters an object, a class and a message: object: the object you act on. class: the considered class for this object. message: contains a method ID which determines the function to call within a dispatcher, and is followed by its parameters. To send a message to an object, use DoMethod(). It will use the true class first. If the class implements this method, it will handle it. Otherwise it will try its parent class, until the message is handled or the rootclass is reached (in this case, the unknown message is silently discarded). In summary, before a method can act on an object, it needs a BOOPSI message. BOOPSI Examples. Let's see basic examples of this OOP framework: Getting an attribute We'll query a MUI String object for its content:
void f(Object *string)
{
    IPTR result;
    GetAttr(string, MUIA_String_Contents, &result);
    printf("String content is: %s\n", (STRPTR)result);
}
Object * is the type of BOOPSI objects. IPTR must be used for the type of the result, which can be an integer or a pointer. An IPTR is always written in memory, so using a smaller type would lead to memory corruption! Here we query a MUI String object for its content; MUIA_String_Contents, like any other attribute, is a ULONG (it's a Tag). Zune applications more often use the get() and XGET() macros instead:
get(string, MUIA_String_Contents, &result);
result = XGET(string, MUIA_String_Contents);
Setting an attribute Let's change the content of our string:
SetAttrs(string, MUIA_String_Contents, (IPTR)"hello", TAG_DONE);
Pointer parameters must be cast to IPTR to avoid warnings. After the object parameter, a taglist is passed to SetAttrs() and thus must end with TAG_DONE. You'll find the set() macro useful:
set(string, MUIA_String_Contents, (IPTR)"hello");
But it's only with SetAttrs() that you can set several attributes at once:
SetAttrs(string, MUIA_Disabled, TRUE, MUIA_String_Contents, (IPTR)"hmmm...", TAG_DONE);
Calling a method Let's look at the most frequently called method in a Zune program, the event-processing method called in your main loop:
result = DoMethod(obj, MUIM_Application_NewInput, (IPTR)&sigs);
Parameters are not a taglist, and thus don't end with TAG_DONE. You have to cast pointers to IPTR to avoid warnings. "Hello world" example sourcecode.
[Screenshot: 'Hello World']
// gcc hello.c -lmui
// (The #include lines below are not part of the original listing; they are
//  the usual headers such a Zune program needs.)
#include <exec/types.h>
#include <dos/dos.h>
#include <libraries/mui.h>
#include <proto/exec.h>
#include <proto/intuition.h>
#include <proto/muimaster.h>
#include <clib/alib_protos.h>

int main(void)
{
    Object *wnd, *app, *but;

    // GUI creation
    app = ApplicationObject,
        SubWindow, wnd = WindowObject,
            MUIA_Window_Title, "Hello world!",
            WindowContents, VGroup,
                Child, TextObject,
                    MUIA_Text_Contents, "\33cHello world!\nHow are you?",
                End,
                Child, but = SimpleButton("_Ok"),
            End,
        End,
    End;

    if (app != NULL)
    {
        ULONG sigs = 0;

        // Click Close gadget or hit Escape to quit
        DoMethod(wnd, MUIM_Notify, MUIA_Window_CloseRequest, TRUE,
                 (IPTR)app, 2,
                 MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit);

        // Click the button to quit
        DoMethod(but, MUIM_Notify, MUIA_Pressed, FALSE,
                 (IPTR)app, 2,
                 MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit);

        // Open the window
        set(wnd, MUIA_Window_Open, TRUE);

        // Check that the window opened
        if (XGET(wnd, MUIA_Window_Open))
        {
            // Main loop
            while ((LONG)DoMethod(app, MUIM_Application_NewInput, (IPTR)&sigs)
                   != MUIV_Application_ReturnID_Quit)
            {
                if (sigs)
                {
                    sigs = Wait(sigs | SIGBREAKF_CTRL_C);
                    if (sigs & SIGBREAKF_CTRL_C)
                        break;
                }
            }
        }

        // Destroy our application and all its objects
        MUI_DisposeObject(app);
    }
    return 0;
}
Remarks/General. We don't manually open libraries; it's done automatically for us. GUI creation. We use a macro-based language to easily build our GUI. A Zune application always has one and only one Application object: An application can have 0, 1 or more Window objects. Most often a single one: Be nice, give a title to the window: A window must have one and only one child, usually a group. This one is vertical, which means that its children will be arranged vertically: A group must have at least one child, here it's just a text: Zune accepts various escape codes (here, to center the text) and newlines: An End macro must match every xxxObject macro (here, TextObject): Let's add a second child to our group, a button! With a keyboard shortcut (o), indicated by an underscore: Finish the group: Finish the window: Finish the application: So, who still needs a GUI builder? :-) Error handling. If any of the objects in the application tree can't be created, Zune destroys all the objects already created and application creation fails. If not, you have a fully working application: if (app != NULL) ... When you're done, just call MUI_DisposeObject() on your application object to destroy all the objects currently in the application, and free all the resources: MUI_DisposeObject(app); Notifications. Notifications are the simplest way to react to events. The principle? We want to be notified when a certain attribute of a certain object is set to a certain value: Here we'll listen to the MUIA_Window_CloseRequest attribute of our Window object and be notified whenever this attribute is set to TRUE. So what happens when a notification is triggered? A message is sent to an object; here we tell our Application to return MUIV_Application_ReturnID_Quit on the next event loop iteration: As we can specify anything we want here, we have to tell MUIM_Notify the number of extra parameters we are supplying: here, 2 parameters. For the button, we listen to its MUIA_Pressed attribute: it's set to FALSE whenever the button is being released (reacting when it's pressed is bad practice, you may want to release the mouse outside of the button to cancel your action - plus we want to see how it looks when it's pressed). The action is the same as before: send a message to the application: Opening the window. Windows aren't open until you ask them to be: If all goes well, your window should be displayed at this point. But it can fail!
So don't forget to check by querying the attribute, which should be TRUE: Main loop. Let me introduce you to my lil' friend, the ideal Zune event loop: Don't forget to initialize the signals to 0 ... The test of the loop is the MUIM_Application_NewInput method: It takes as input the signals of the events it has to process (the result from Wait(), or 0), modifies this value to place the signals Zune is waiting for (for the next Wait()) and returns a value. This return value mechanism was historically the only way to react to events, but it was ugly and has been deprecated in favor of custom classes and object-oriented design. The body of the loop is quite empty; we only wait for signals and handle Ctrl-C to break out of the loop:
{
    if (sigs)
    {
        sigs = Wait(sigs | SIGBREAKF_CTRL_C);
        if (sigs & SIGBREAKF_CTRL_C)
            break;
    }
}
Conclusion. This program gets you started with Zune, and allows you to toy with GUI design, but not more. Notification actions. Notifications allow you to respond to events that your application/GUI or any other object might cause. Due to the attribute- and method-based nature of Zune, and a few special attributes, most applications can be almost completely automated through the use of notifications. As seen in hello.c, you use MUIM_Notify to call a method if a certain condition happens. If you want your application to react in a specific way to events, you can use one of these schemes: Zune Examples/Tutorials. Some other good examples are here: Tools. MUIBuilder. A native build and a SourceForge SVN repository have been started, and further documentation can be read at Aros/Developer/Zune. v3 is W.I.P. so it is not recommended for now. This thread discussed using the m68k build and a diff file showing which modifications were necessary to make it buildable under AROS. Summary: just go ahead and create your GUIs under any version of AROS or under the Amiga 68k version and make some modifications (which would be necessary for MorphOS/AmigaOS3/AmigaOS4, too). ChocolateCastle. A tool which makes creating MUI classes and applications easier. Programmers using this system often avoid writing custom classes, being discouraged by a lot of boring and schematic typing. A significant part of a typical custom class may be generated automatically. This is usually 2 to 5 kB of source code. Automating this work speeds programming up and helps avoid simple typing errors.
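Returning to the notification scheme described above: the action attached to MUIM_Notify does not have to be an application ReturnID; it can be any method call on any object. A minimal hedged sketch (using the wnd and but objects from hello.c; MUIM_Notify, MUIM_Set and the attributes used are standard MUI/Zune names):

// When the button is released, close the window by setting one of its
// attributes. The "3" is the number of parameters that follow
// (MUIM_Set, MUIA_Window_Open, FALSE).
DoMethod(but, MUIM_Notify, MUIA_Pressed, FALSE,
         (IPTR)wnd, 3,
         MUIM_Set, MUIA_Window_Open, FALSE);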
Spanish/Partes del cuerpo. El cuerpo ("The Body"). In Spanish, possessives are used much less frequently than in English when relating to body parts and clothes. Le duele la cabeza = "his/her head aches" <br> Mírame a los ojos = "look at my eyes" <br> Tienes rota la camisa = "your shirt is torn" A sentence like “le duele su cabeza” sounds artificial. However, probably due to badly translated television movies, you can hear “dame tu mano” instead of “dame la mano” ("give me your hand").
High School Mathematics Extensions/Counting and Generating Functions/Solutions. Counting and Generating Functions. These solutions were not written by the author of the rest of the book. They are simply the answers I thought were correct while doing the exercises. I hope these answers are useful for someone and that people will correct my work if I have made any mistakes. Generating functions exercises. 1. 2. 2c only contains the exercise and not the answer for the moment. Linear Recurrence Relations exercises. This section only contains incomplete answers because I couldn't figure out where to go from here. 1. Let G(z) be the generating function of the sequence described above. 2. Let G(z) be the generating function of the sequence described above. 3. Let G(z) be the generating function of the sequence described above. Further Counting exercises. 1. We know that therefore 2. formula_66 Differentiate from first principles exercises. 1.
Statistics/Summary/Range. Range of Data. The range of a sample (set of data) is simply the maximum possible difference in the data, i.e. the difference between the maximum and the minimum values. A more exact term for it is "range width" and it is usually denoted by the letter R or w. The two individual values (the max. and min.) are called the "range limits". Often these terms are confused and students should be careful to use the correct terminology. For example, in a sample with values 2 3 5 7 8 11 12, the range is 10 (12 - 2 = 10) and the range limits are 2 and 12. The range is the simplest and most easily understood measure of the dispersion (spread) of a set of data, and though it is very widely used in everyday life, it is too rough for serious statistical work. It is not a "robust" measure, because clearly the chance of finding the maximum and minimum values in a population depends greatly on the size of the sample we choose to take from it, and so its value is likely to vary widely from one sample to another. Furthermore, it is not a satisfactory descriptor of the data because it depends on only two items in the sample and overlooks all the rest. A far better measure of dispersion is the standard deviation ("s"), which takes into account all the data. It is not only more robust and "efficient" than the range, but is also amenable to far greater statistical manipulation. Nevertheless the range is still much used in simple descriptions of data and also in quality control charts. The mean range of a set of data is however a quite efficient measure (statistic) and can be used as an easy way to calculate "s". What we do in such cases is to subdivide the data into groups of a few members, calculate their average range, formula_1, and divide it by a factor (from tables), which depends on n. In chemical laboratories for example, it is very common to analyse samples in duplicate, and so they have a large source of ready data to calculate "s". For example: If we have a sample of size 40, we can divide it into 10 sub-samples of n=4 each. If we then find their mean range to be, say, 3.1, the standard deviation of the parent sample of 40 items is approximately 3.1/2.059 = 1.506. With simple electronic calculators now available, which can calculate "s" directly at the touch of a key, there is no longer much need for such expedients, though students of statistics should be familiar with them.
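As a worked restatement of the laboratory example above (a sketch; d_2 denotes the conventional tabulated conversion factor, whose value for sub-samples of size n = 4 is the 2.059 quoted in the text, and \bar{w} is the mean range of the sub-samples):

\hat{s} \approx \frac{\bar{w}}{d_2} = \frac{3.1}{2.059} \approx 1.506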
Conlang/Advanced/Grammar/Alignment/Trigger. Trigger systems are apparently found only in conlangs, inspired by the applicativization systems of languages like the Filipino language Tagalog. In the case systems of English and other Indo-European languages, there is typically a subject (agent of the action), a verb (action) and an object or several objects (patient(s) of the action). Trigger languages, on the other hand, divide things into agent, patient and other objects. The differences between "trigger languages" and "nominative/accusative languages" are: Triggering works as follows: Every argument of the sentence is marked for its role (agent, patient, other objects). The triggered argument is marked on the verb. In Carsten Becker's conlang Ayeri, perhaps the best known trigger conlang, the trigger and case marker change places.
Introduction to Paleoanthropology/Definition. To effectively study paleoanthropology, one must understand that it is a subdiscipline of anthropology and have a basic understanding of archaeology dating techniques, evolution of cultures, Darwinian thought, genetics, and primate behaviours. __toc__ There is a long academic tradition in modern anthropology which is divided into four fields, as defined by Franz Boas (1858-1942), who is generally considered the father of the Anthropology in the United States of America. The study of anthropology falls into four main fields: Although these disciplines are separate, they share common goals. All forms of anthropology focus on the following: Sociocultural anthropology/ethnology. This field can trace its roots to processes of European colonization and globalization, when European trade with other parts of the world and eventual political control of overseas territories offered scholars access to different cultures. Anthropology was the scientific discipline that searches to understand human diversity, both culturally and biologically. Originally anthropology focused on understanding groups of people then considered "primitive" or "simple" whereas sociology focused on modern urban societies in Europe and North America although more recently cultural anthropology looks at all cultures around the world, including those in developed countries. Over the years, sociocultural anthropology has influenced other disciplines like urban studies, gender studies, ethnic studies and has developed a number of sub-disciplines like medical anthropology, political anthropology, environmental anthropology, applied anthropology, psychological anthropology, economic anthropology and others have developed. Linguistic anthropology. This study of human speech and languages includes their structure, origins and diversity. It focuses on comparison between contemporary languages, identification of language families and past relationships between human groups. It looks at: Archaeology. Is the study of past cultures through an analysis of artifacts, or materials left behind gathered through excavation. This is in contrast to history, which studies past cultures though an analysis of written records left behind. Archaeology can thus examine the past of cultures or social classes that had no written history. Historical archaeology can be informed by historical information although the different methods of gathering information mean that historians and archaeologists are often asking and answering very different kinds of questions. It should be noted that recovery and analysis of material remains is only one window to view the reconstruction of past human societies, including their economic systems, religious beliefs, and social and political organization. Archaeological studies are based on: Physical anthropology. Is the study of human biological variation within the framework of evolution, with a strong emphasis on the interaction between biology and culture. Physical anthropology has several subfields: Paleoanthropology. As a subdiscipline of physical anthropology that focuses on the fossil record of humans and non-human primates. This field relies on the following: Evolution of hominids from other primates starting around 8 million to 6 million years ago. This information is gained from fossil record of primates, genetics analysis of humans and other surviving primate species, and the history of changing climate and environments in which these species evolved. 
Evidence of hominid activity between 8 and 2.5 million years ago usually only consists of bone remains available for study. Because of this very incomplete picture of the time period from the fossil record, various aspects of physical anthropology (osteometry, functional anatomy, evolutionary framework) are essential to explain evolution during these first millions of years. Evolution during this time is considered as the result of natural forces only. Paleoanthropologists need to be well-versed in other scientific disciplines and methods, including ecology, biology, anatomy, genetics, and primatology. Through several million years of evolution, humans eventually became a unique species. This process is similar to the evolution of other animals that are adapted to specific environments or "ecological niches". Animals adapted to niches usually play a specialized part in their ecosystem and rely on a specialized diet. Humans are different in many ways from other animals. Since 2.5 million years ago, several breakthroughs have occurred in human evolution, including dietary habits, technological aptitude, and economic revolutions. Humans also showed signs of early migration to new ecological niches and developed new subsistence activities based on new stone tool technologies and the use of fire. Because of this, the concept of an ecological niche does not always apply to humans any more. Summary. The following topics were covered: Further modules in this series will focus on physical anthropology and be oriented toward understanding of the natural and cultural factors involved in the evolution of the first hominids.
Outdoor Survival/Shelter. Helpful Hints. Shelter will keep you out of the elements and help prevent hypothermia. It will provide shade in the heat. It also has an important positive psychological effect. Shelter for Survival. In the average consideration of survival, it often seems shelter is overlooked, or at least taken for granted and misplaced on the scale of priorities. It is always good to remember the "rule of threes":<br> "A person can survive for three minutes without air,<br> three hours without shelter,<br> three days without water,<br> three weeks without food."<br> While not absolute, these guidelines are reasonable and appropriately stress the need for shelter. Protection from the elements comes second only to breathing. Quick thoughts about the conditions under which three hours of exposure is a generous life expectancy should immediately clarify that when shelter is most needed it may be most difficult to find or create. Levels of shelter:<br> Clothing and basic tools<br> Minimal short-term protection<br> Long-term protection<br> An engineer by the name of Howard "Chubby" Fultz has recently developed an idea that could revolutionize outdoor survival. It is commonly referred to as "The Chubby Dome." The official name is "CD320." Although still in the development stage, the Chubby Dome offers a fresh water supply, food storage, power, shelter, and a small bathroom. Clothing: Dress in Layers! The first level of shelter to consider is that of clothing. Simply put, you have to be able to "walk home" in the clothing you have on or at hand. Take a hard look at the difference between what you want to wear and what you should have for all possible conditions. Waypoints where additional resources are available (camp, basecamp, your car) can certainly be the initial target of any self-extraction. Such caches then cover situations of different scope: what's on your person will get you back to camp, what's at camp will get you to your car, etc. The concept of walking home may sound out of place, but there is a distinction between being lost and/or injured, and simply being in a difficult spot for an indefinite period of time. As a rule, if you are lost or injured, do not wander; get yourself found. A minimal set of equipment should be incorporated into one's wardrobe. Without the ability to make fire and cut things right now, you're not even trying to be prepared and will be hard-pressed to spend anything but a summer night outdoors safely.
Geographic Information Systems. Introduction. A GIS is an organised collection of computer programs, computers, geographic data, and people. This definition gives you the components that make up a GIS. People who know how to use computers (hardware) and programs (software) to provide information from geographic data are able to solve a problem or answer a specific question. What is GIS? Geographic Information Systems provide a method for integrating and analyzing spatial (digital map based) information such as "where is the nearest movie theater?" alongside related non-spatial information (what movies are playing there?). GIS have three major capabilities (computer mapping, spatial analysis and spatial databases) and can operate on a range of platforms (desktop/laptop computer, Internet, PDA, etc.). Many people are becoming far more familiar with seeing the results both textually - for example when their phone shows them the nearest pub - and on open map systems such as Google Maps. Where in the past people had to literally use pencils and string on a paper map to find their nearest school, a computer can now do this extremely quickly and accurately, as long as all the information has been entered correctly in the first place. In a broader context, GIS involves people and often brings a philosophy of change. For example, in 1994, the New York Police Department introduced GIS to locate crime 'hot-spots', analyze underlying problems and devise strategies and solutions to deal with the problems. Since 1993, violent crime has dropped by two-thirds in New York City. This strategy, known as COMPSTAT, has expanded to cities and jurisdictions across the United States and around the world. GIS Software. One leading GIS software vendor is ESRI, based in Redlands, California, which offers ArcGIS for the desktop, ArcGIS Server for Internet mapping, ArcPad for PDAs and a range of other products and services for developers. Other popular GIS software packages are available from Cadcorp, Intergraph, MapInfo, Manifold and Autodesk. ERDAS Imagine, ENVI, Idrisi, and PCI Geomatica are geared towards remote sensing, i.e. analysis of satellite/aircraft images. There are many third-party extensions and utilities for ArcGIS and other GIS and raster software platforms. Currently, open-source GIS software options include the first open-source GIS package, GRASS; more recent open-source options are DIVA GIS, QGIS, and uDig. There are efforts underway, through the Open GIS Consortium, to provide interoperability among spatial data formats and software. The leading contender for spatial data storage is another open-source package called PostGIS, which is a spatial extension to the open-source database PostgreSQL. What Is Topology? Topology is the branch of mathematics concerned with spatial properties, in particular the continuous deformations of objects (such as deformations that involve stretching, but no tearing or gluing) and how they connect together. In geography this is an indispensable tool for map-making and for expressing relevant geographic information, especially in topological maps. Why Is Geospatial Topology Important In A GIS? Geospatial analysis provides a unique perspective on the world. It is a tool with which to examine events, patterns, and processes that operate on or near the surface of our planet.
Having this tool interact with the relevant topological information is a must; it provides a greater understanding of how rivers flow, how weather is affected by terrain, or even how humans and other animals move or choose a specific habitat. This kind of information can only be gathered and made useful if topological information is present in the GIS. Spatial Data. Spatial data comes in two major formats, as vector or raster data. The main difference is that a raster is usually a static background picture used to illustrate, whereas a vector is an intelligent layer of information that can be selected and searched. Vector (points, lines, polygons) Vector data often represents anthropogenic (human) features such as roads, buildings, political boundaries (counties, congressional districts, etc.), and other features such as lakes and rivers. Vector data is scalable without loss of resolution and is generally represented by XYZ points in a Cartesian reference frame. Raster (grids/images) Raster data is pixellated data, and the more pixels that map the data, the better the resolution. However, if raster data is enlarged, it simply enlarges the pixels, which then leads to a loss of resolution. There are many efforts being made to make raster data more usable/searchable, as it is much faster to collect, unlike vector data, where each piece of data usually has a manual input origin. Raster data is usually derived from satellite imagery or aerial photography (known as remote sensing). Ordinary cameras are only sensitive to visible light. Satellite sensors can capture not only visible light, but also the thermal, microwave, infrared or other types of energy emanating from the Earth's surface. This extra data provides information about sea surface temperatures, vegetation, ozone, etc. Remote sensing is also used to study other planets and extraterrestrial bodies, such as Mars. Data sources. The U.S. Federal Government offers a wealth of spatial data for free or at cost, with major offerings from the United States Geological Survey (USGS), NASA and the U.S. Census Bureau. State governments also serve spatial data through GIS portals such as MassGIS. As well, many local governments have detailed GIS data of tax parcels, roads, buildings, etc. Other governments around the world also offer spatial data, though not necessarily for free. Private industry also offers GIS data, with TeleAtlas/GDT offering a wealth of vector data, while Ikonos and DigitalGlobe (QuickBird) provide high-resolution satellite imagery. Some public-domain data sources are listed at http://visual.wiki.taoriver.net/moin.cgi/EarthMapTool .
High School Mathematics Extensions/Matrices/Solutions. Matrices. Multiplication of non-vector matrices exercises. 1. 2. 3. The important thing to notice here is that the 2×2 matrix remains the same when multiplied with the other matrix. The matrix with only 1s on the diagonal and 0s elsewhere is known as the "identity" matrix, called "I", and any matrix multiplied on either side of it stays the same. That is formula_19 NB: The remaining exercises in this section are leftovers from previous exercises in the 'Multiplication of non-vector matrices' section. 3. The important thing to notice here is that the 1 to 9 matrix remains the same when multiplied with the other matrix. The matrix with only 1s on the diagonal and 0s elsewhere is known as the "identity" matrix, called "I", and any matrix multiplied on either side of it stays the same. That is formula_19 4. a) b) c) formula_29 d) e) As an example I will first calculate A^2. Now let's do the same simplifications I have done above with A^5. f) Determinant and Inverses exercises. 1. The simultaneous equations will be translated into the following matrices: formula_53 Because we already know that We can say that there is no unique solution to these simultaneous equations. 2. First calculate the value when you multiply the determinants. Now let's calculate C by doing the matrix multiplication first. This is equal to the value we calculated when we multiplied the determinants, thus proving the result for the 2×2 case. 3. Thus det(A) = -det(A') is true. 4. a) thus det(A) = det(B) b) if formula_74 for some k it means that formula_75. But we can write formula_76, thus formula_77. This means that formula_78. 5. a) b) c) d) We see that P and its inverse disappear when you raise the matrix to the fifth power. Thus you can see that we can calculate A^n very easily because you only have to raise the diagonal matrix to the n-th power. Raising diagonal matrices to a certain power is very easy because you only have to raise the numbers on the diagonal to that power. e) We use the method derived in the exercise above.
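As a small worked illustration of the identity-matrix property referred to twice above (this is just the general 2×2 case, not the specific exercise matrices):

\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad \text{i.e. } IA = AI = A.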
Esperanto/Appendix/Web resources. Chatrooms. Other. IRC: ##esperanto on irc.freenode.net Jabber: [email protected] (For more information on this, see Kiel uzi Jabber) MSN: Add "[email protected]" to your user list (interfaces with the chatroom at Lernu!)
Introduction to Paleoanthropology/Dating Techniques. Having an accurate time scale is a crucial aspect of reconstructing how anatomical and behavioral characteristics of early hominids evolved. __toc__ Researchers who are interested in knowing the age of particular hominid fossils and/or artifacts have options that fall into two basic categories: Relative dating methods. Relative dating methods allow one to determine if an object is earlier than, later than, or contemporary with some other object. They do not, however, allow one to independently assign an accurate estimation of the age of an object as expressed in years. The most common relative dating method is stratigraphy. Other methods include fluorine dating, nitrogen dating, association with bones of extinct fauna, association with certain pollen profiles, association with geological features such as beaches, terraces and river meanders, and the establishment of cultural seriations. Cultural seriations are based on typologies of artifacts that are numerous across a wide variety of sites and over time, like pottery or stone tools. If archaeologists know how pottery styles, glazes, and techniques have changed over time they can date sites based on the ratio of different kinds of pottery. This also works with stone tools, which are found abundantly at different sites and across long periods of time. Principle of stratigraphy. Stratigraphic dating is based on the principle of depositional superposition of layers of sediments called strata. This principle presumes that the oldest layer of a stratigraphic sequence will be on the bottom and the most recent, or youngest, will be on the top. The earliest-known hominids in East Africa are often found in very specific stratigraphic contexts that have implications for their relative dating. These strata are often most visible in canyons or gorges, which are good sites to find and identify fossils. Understanding the geologic history of an area and the different strata is important to interpreting and understanding archaeological findings. Chronometric dating methods. The majority of chronometric dating methods are radiometric, which means they involve measuring the radioactive decay of a certain chemical isotope. They are called chronometric because they allow one to make a very accurate scientific estimate of the date of an object as expressed in years. They do not, however, give "absolute" dates because they merely provide a statistical probability that a given date falls within a certain range of age expressed in years. Chronometric methods include radiocarbon, potassium-argon, fission-track, and thermoluminescence. The most commonly used chronometric method is radiocarbon analysis. It measures the decay of radioactive carbon (14C) that has been absorbed from the atmosphere by a plant or animal prior to its death. Once the organism dies, the carbon-14 begins to decay at an extremely predictable rate. Radioactive carbon has a half-life of approximately 5,730 years, which means that every 5,730 years, half of the carbon-14 will have decayed. This number is usually written as a range, with plus or minus 40 years (1 standard deviation of error); the theoretical absolute limit of this method is 80,000 years ago, although the practical limit is close to 50,000 years ago.
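As an illustrative sketch of the arithmetic behind radiocarbon ages (using only the half-life quoted above; N_0 is the original amount of 14C in the sample and N the amount remaining):

N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/5730},
\qquad
t = 5730 \cdot \log_2\!\left(\frac{N_0}{N}\right)

So, for example, a sample retaining one quarter of its original 14C would be roughly 2 × 5,730 ≈ 11,460 radiocarbon years old.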
Because the pool of radioactive carbon in the atmosphere (a result of bombardment of nitrogen by neutrons from cosmic radiation) has not been constant through time, calibration curves based on dendrochronology (tree ring dating) and glacial ice cores are now used to adjust radiocarbon years to calendrical years. The development of Accelerator Mass Spectrometry (AMS) in recent years, a technique that allows one to count the individual atoms of 14C remaining in a sample instead of measuring the radioactive decay of the 14C, has considerably broadened the applicability of radiocarbon dating because it is now possible to date much smaller samples, as small as a grain of rice, for example. Dendrochronology is another archaeological dating technique in which tree rings are used to date pieces of wood to the exact year in which they were cut down. In areas in which scientists have tree ring sequences that reach back thousands of years, they can examine the patterns of rings in the wood and determine when the wood was cut down. This works better in temperate areas that have more distinct growing seasons (and thus rings) and relatively long-lived tree species to provide a baseline. Methods of dating in archaeology. Techniques of recovery include: Types of archaeological remains include: Data collection and analysis are oriented toward answering questions of subsistence, mobility or settlement patterns, and economy. Methods in physical anthropology. Data collection is based on the study of hard tissues (bones and teeth), usually the only remains left of earlier populations, which include:
Arabic/Arabic sounds. Sounds in both English and Arabic. Most of the sounds in Arabic are also in English and vice versa. For example, the Arabic ba (ب) sounds exactly like the b in English, the Arabic zay (ز), sounds just like the z in English and the Arabic versions of k (ك), m (م), n (ن), f (ف), and j (ج) are all just the same. The counterpart of l (ل) in Arabic, known as laam, is not exactly the same sound as the English one (the Arabic one is pronounced with the tip of the tongue touching the roof of the mouth a bit farther back). The counterpart of r (ر) in Arabic, known as raa, is very different from English r. The Arabic one is trilled (it is a "rolled r"). In addition to the above, the following Arabic sounds also exist in English: -Thâ (ث) makes the sound "th" (voiceless) as in thin or thick or through. -Dhâ (ذ) makes the sound "th" ("dh") (voiced) as in them or there or the. -Shîn (ش) makes the sound "sh" as in shoot or shin. -Tâ marbûta (ة) is usually silent in modern Arabic. In Classical Arabic, it is pronounced t, the same as the letter tâ. -Hamza (ء) represents the glottal stop. It is pronounced by stopping the flow of breath at the back of the mouth cavity (the glottis). We make this sound when speaking English; we just have no symbol for it in our alphabet. Think of the dash in uh-oh, or the Cockney way of saying British as Bri-ish. However, hamza is also written as a diacritic. -Yâ (ي) acts just like a y. It can be a vowel (always a long vowel) at the end of a word sounding like î (the "ee" in beet), or it can be a consonant ("y"). It can be a consonant (as in the y in yes) or a vowel (like the "ee" in beet) in the middle of a word. Sounds in Arabic only. There are sounds in Arabic hard for English speakers to tell apart. Look at what makes them different. Like "k" (ك vs ق). In Arabic, the corresponding letter to q (ق) makes a different sound than the corresponding letter to k (ك), whereas in English they are redundant. The q is further back in the throat while the k is not as in English, k in English is voiceless, while its Arabic counterpart is voiced. So is the Arabic counterpart of q. Kuwait starts with a k. Qatar with a q. Listen to the difference. Like "h". The most significant sound that English speakers hear in Arabic are the three corresponding letters to h. The first (ه) is equivalent to the h and is thus very light, almost not heard at all. The noise comes from friction in the upper throat. The second (ح) comes from deep down in the throat, from actual friction from the vocal cords themselves. It sounds a little like blowing warm air on your cold hands or very fine sandpaper. The third (خ) is very rough, almost like collecting phlegm, or exactly like the French r, only if you pronounce it voiceless. It is very similar to the last sound in "Bach." None of these three sounds have any humming or vowel sound (known as voicing). They are all like a whisper; your vocal cords do not vibrate. Like "h" but voiced. The ayn (ع) may be difficult to hear and produce because though it is a consonant in Arabic, it sounds like the English a, as in water. It is produced like the y in you, but the constriction is made down in the throat instead of the mouth. It is a little like the sound a doctor asks to hear when looking down your throat. While saying "aaah", pull the back of your tongue back into your throat a bit, with a little squeeze. Similar is the ghayn (غ), which is a rougher version with more of the ch from "Bach", only with vocal cords vibrating. 
The difference between the last sound in the previous paragraph and the last sound in the first paragraph should be very clear. They are both rough, but one has no vibrating vocal cords while the other has them. Letters that come in hard and soft varieties. Arabic has hard and soft versions of s, t, d, and th (as there, not thin). Arabic-speakers often refer to their language as the language of Daad (hard d) because it is so hard for foreigners to say it right. However, this refers to the classical pronunciation of Daad (ض) which probably resembled a sound closer to an emphatic z sound (voiced lateral fricative). It can be easier for English-speakers to think of the hard sounds as lower-pitched and the soft sounds as higher-pitched. What complicates matters is that for English-speakers the hard s and the soft s sound the same, but the vowels before and after them are affected by the consonant. To the English speaker, the vowels are different, and the consonant is the same. To the Arabic-speaker the vowels are the same, and the consonant is different. Partly, it is because Arabic has very few recognized vowel sounds. An a and an e to an Arabic-speaker are usually the same, depending on the dialect. It is one letter pronounced differently depending on what letter comes before or after it. Also, the hard letters can be hard for even native speakers to say and so are often changed in local colloquial Arabic to something else. For example, the hard d in many areas is pronounced exactly like the z, but it is not formal, proper Arabic. The soft s, "seen" (س), is pronounced just like the English s (with mouth open, small and weak). The hard s, "sod" (ص), is with the mouth more closed, with a lower pitch, as if you were a big, stalwart man. The easiest way to say it is to make the vowels before and after it lower-pitched and deeper. The soft t, ta (ت) is just like the English t, soft and weak, and the hard t "taw" (ط) is deep and strong. The soft d, del (د) is even softer than the English d, and the hard d, Dod (ض) is very deep and hard. The soft dh, dhel (ذ), is just like the th in "the", and the hard dh, DHa (ظ) is deep and strong. Note on transliteration. Since Arabic does not use capital letters, transliterating soft letters usually uses in lowercase, hard letters in uppercase. Transliterating is not an exact science and is never entirely consistent. You are highly recommended to learn the Arabic alphabet because studying Arabic from English letters is terrifically frustrating. The same sound can be written totally differently in English letters. `Ayn is usually written as ' (with a 6 shape) in English and hamza as an opposite ' (with a 9 shape). `ayn and hamza are often left out when transliterating, especially `ayn at the beginning of a word (as in `iraaq). Ghayn(غ) is written as gh, soft ha as h, khaa (the one in Bach) as kh, and hard ha as "H" (capital h), its sound is exactly like the French r, only that ghayn is not rhotic. Sounds in English only. In case you mistakenly think that Arabic has more sounds than English, the following sounds "do not" exist in Arabic, with the sounds that are usually substituted for them in borrowed words. v (f) p (b in new loanwords, such as “computer” (كمبيوتر) and f in old loanwords, such as Palaestina (فلسطين)) g (j, gh, q, or k) r is very close but is always rolled (similar to Spanish r) As mentioned above, in Arabic the corresponding letter to q (ق) makes a different sound than the corresponding letter to k (ك). Conclusion. 
Most sounds in English and Arabic correspond perfectly. Give yourself time to recognize the different sounds of h and the hard and soft letters. And learn the alphabet. The book "Alif Baa" is one of many introductions to the Arabic alphabet. Much better, take a short course, but make sure it is one that teaches the Arabic alphabet. Those 50 hours it takes you to get comfortable with the alphabet will be endlessly worth it. Plus, if you ever want to learn Persian, Ottoman Turkish, Kurdish, Berber, or Urdu, these languages are also written with the same letters. Some of these languages also have a few extra or modified letters in order to make up for the sounds not available in Arabic but that are found in those languages. Insha'Allah (God willing) you will learn these sounds with ease. Gaining proficiency in the sounds. Learning and differentiating those sounds is crucial to having a good Arabic accent. First expose yourself to the sounds, as many hours of the day as possible. If you have audio in Arabic, play it again and again, in the background of your life. You can occasionally also listen to it attentively, but you need to listen to how Arabic is spoken to be able to speak it.
High School Mathematics Extensions/Logic/Solutions. Logic. Compound truth tables exercises. 1. NAND: x NAND y = NOT (x AND y) 2. NOR: x NOR y = NOT (x OR y) 3. XOR: x XOR y is true if and only if exactly one of x and y is true. Produce truth tables for: 1. xyz 2. x'y'z' 3. xyz + xy'z 4. xz 5. (x + y)' 6. x'y' 7. (xy)' 8. x' + y' Laws of Boolean algebra exercises. 1. 2. Show that x + yz is equivalent to (x + y)(x + z) Logic Puzzles exercises. Please go to Logic puzzles.
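As a quick cross-check of the NAND, NOR and XOR definitions given in the first set of answers above, here is a small C sketch (not part of the original solutions) that prints their truth tables:

#include <stdio.h>

int main(void)
{
    puts("x y | NAND NOR XOR");
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            printf("%d %d |  %d    %d   %d\n",
                   x, y, !(x && y), !(x || y), x ^ y);
    return 0;
}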
Introduction to Philosophy/What is Political Philosophy. < Introduction to Philosophy A good definition for Political Philosophy is found only after determining what is politics, which is a sticky question to begin with. Politics could be defined as "the question of how to distribute a scarce amount of resources 'justly.'" Which is, essentially, the way in which people obtain, keep, and exercise power. Political philosophy, then, is the study of the theories behind politics. These theories may be used to gain power or to justify its existence. Mostly, however, they have been used to justify or legitimate the existence of contemporary political structures by appealing to "rationality," "reason," or, among others, "natural law." Plato's "Republic" is a good starting point for political philosophy, however, it's really a treatise on education. It starts out by trying to define Justice (one of Kenneth Burke's "God Terms"). In it, he makes an argument for a sort of acetic life-style by, through a standard Platonic dialogue, laying out a minimally functional society. He then, somewhat parodically, responds to the question of luxury by outlining how to 'justly' lay out a state that will accommodate luxuries for the entitled (a state that looks very similar to Sparta). It's a good starting place, because it lays out his conception of Justice, which, inevitably, is based on his theory of the forms, which is a similar basis of conceptions of natural law. Skipping a few thousand years and many important texts, we get to Nicolo Machiavelli's "The Prince, " which was written in 1513 and published, after his death, in 1532. Machiavelli lived in Florence under the Medici family's rule. During a brief period of reform, the Medicis were chased from power, and Machiavelli became a diplomat. When the Medicis returned, Machiavelli was basically exiled. One reason he might have written "The Prince" was to try to return to public life in Florence. This book is often criticized for its moral relativism, and, in an inadequate summarization, that power defines moral action. Moving along, we can get to the social contract theorists, namely Jean Jacques Rousseau, Thomas Hobbes, John Locke, Charles Montesquieu, and Baruch Spinoza. This, of course, will be a brief and incomplete treatment of them, but it's a starting place. Hobbes' theory is mostly found in his book "Leviathan." In it, he defines the state of nature (the prepolitical society) as a place where life is "nasty, brutish, and short." It's important to understand that Hobbes was writing after the Thirty Years War (a religious conflict between, primarily, the English Protestants and the Spanish Catholics), so he had a very pessimistic view of Human Nature. He basically thought that man, left to its own devices, would war against itself, hence the above quotation. From that, Hobbes had a view that society with a state, at its worst, was better than having no state at all, so he concluded that any state action would be justified if for no other reason than that it is a "modus vivendi," a lesser evil. The crux of this, and all social contract theory, is that the citizen has a sort of contract with the state in which people give up some autonomy to make their lives better. For Hobbes, this autonomy was given up to protect life at its most fundamental level. Locke, however, has an entirely different notion. His view of the state of nature (the prepolitical society) is much nicer. 
Basically people will respect each other and not infringe on another's person or property. If someone does, then the agressee has the natural right to rectify the situation, and anyone else who witnesses an aggressor has the duty to help the agressee. Locke concedes that property disputes will eventually be numerous enough that this would be time consuming, and in the general interest, people will form a state in order to have someone else protect their property and persons--basically to settle disputes--out of convenience. Much of this is laid out in "The Second Treatise on Government." It's important to note that much of the U.S. constitution is based on Locke's political philosophy. A question, of course, is why these social contract theorists are doing what they're doing. I mean, they're creating these ridiculous constructs of a prepolitical reality through a strange process of abstraction. One answer is that they're justifying the existence of the state and the state's actions. Under Hobbes' view, the state is thus legitimate in anything it does, in any violation of 'human rights' because, well, things would be worse off without it. Locke goes about it in order to have a sort of neutral procedure to protect property rights. More contemporary social contract theorists include Robert Nozick (his main book on this, "Anarchy, the State, and Utopia," is a defense for the libertarian state), John Rawls (whose book "A Theory of Justice" outlines the philosophy of deontological liberalism, which is a system of redistributive justice), and Bruce Ackerman (who wrote , arguing for a sort of dialogic justice.) Another important figure is John Stuart Mill who wrote "Utilitarianism." Expanding on Jeremy Bentham's notion of Utilitarianism, Mill's short book influenced political calculations for over a hundred years. The main thrust of Utilitarianism is to increase the overall utility for a society. Utility is similar to happiness, so a decision or distributive scheme that would increase the societal level of happiness is better than a scheme that wouldn't--simple enough. The main critique of Bentham's Utilitarianism is the case in which there are three people and a certain distributions would give person A and B 151 utilitons each (the measurement for utility) and give Person C 0 utilitons. In other configurations, each might be able to have 100 utilitons each, however, the extra two utilitons are created by this distribution at the expense of Person C. This would create, to invoke the catchphrase, a "tyranny of the majority." Mill's Utilitarianism responded to this by adding in protections for the minority. Some more contemporary political philosophers are Hannah Arendt, Jurgen Habermas, Ernesto Laclau, Judith Butler, Richard Rorty, and Slavoj Zizek.
Introduction to Paleoanthropology/Evolution Culture. The concept of progress. Progress is defined as a gradual but predictable bettering of the human condition over time, that is, things are always getting better over time. Characteristics. Progressivism flourished mainly in optimistic times, that is, times of scientific advances and expanding imaginations: Progressivism has been the doctrine that legitimizes all scientific discoveries and labels them as "advances". Us vs. Them mentality. The problem with categorizing "progressive" judgments must be viewed in long-term perspective as a struggle between two basically incompatible cultural systems: Historical background of Western nations. Hunting and gathering was predominant as a way of life for about 7 million years, with agriculture and plant domestication beginning around 10,000 years ago, and life in cities or states has been around for only the past 5,000 years or so. Changes, or progress, since the first appearance of urban life and state organization (5,000 yrs ago): This situation shifted 500 years ago: The Industrial Revolution. This period marks a major explosion at the scale of humankind: Very quickly, industrial nations could no longer supply from within their own boundaries the resources needed to support further growth or even to maintain current consumption levels. As a consequence: Increased rates of resource consumption, accompanying industrialization, have been even more critical than mere population increase: Industrial ideological systems and prejudices: Ethnocentrism. Ethnocentrism is the belief in the superiority of one's own culture. It is vital to the integrity of any culture, but it can be a threat to the well-being of other peoples when it becomes the basis for forcing Western standards upon non-Western tribal cultures. The impact of modern civilization on tribal peoples is a dominant research theme in anthropology and social sciences. Among economic development writers, the consensus is the clearly ethnocentric view that any contact with superior industrial culture causes non-Western tribal peoples to voluntarily reject their own cultures in order to obtain a better life. In the past, anthropologists also often viewed this contact from the same ethnocentric premises accepted by government officials, developers, missionaries, and the general public. But in recent years, there has been considerable confusion in the enormous culture change literature regarding the basic question of why tribal cultures seem inevitably to be acculturated or modernized by industrial civilization. Arguing for efficiency and survival of the fittest, old-fashioned colonialists elevated this "right" to the level of an ethical and legal principle that could be invoked to justify the elimination of any cultures that were not making "effective" use of their resources. This viewpoint has found its way into modern theories of cultural evolution, expressed as the "Law of Cultural Dominance": "any cultural system which exploits more effectively the energy resources of a given environment will tend to spread in that environment at the expense of other less effective (indigenous) systems." While resource exploitation is clearly the basic cause of the destruction of tribal peoples, it is important to identify the underlying ethnocentric attitudes that are often used to justify what are actually exploitative policies. 
Apart from the obvious ethical implications involved here, upon close inspection all of these theories expounding the greater adaptability, efficiency, and survival value of the dominant industrial culture prove to be quite misleading. "Of course, as a culture of consumption, industrial <br> civilization is uniquely capable of consuming resources at tremendous <br> rates, but this certainly does not make it a more effective culture than <br> low-energy tribal cultures, if stability or long-run ecological success is <br> taken as the criterion for "effectiveness."" Likewise, we should expect, almost by definition, that members of the culture of consumption would probably consider another culture's resources to be underexploited and to use this as a justification for appropriating them. Among some writers, it is assumed that all people share our desire for what we define as material wealth, prosperity, and progress and that others have different cultures only because they have not yet been exposed to the superior technological alternatives offered by industrial civilization. Supporters of this view seem to minimize the difficulties of creating new wants in a culture and at the same time make the following highly questionable and clearly ethnocentric assumptions: Assumption 1 - Unquestionably, tribal cultures represent a clear rejection of the materialistic values of industrial civilization, yet tribal individuals can indeed be made to reject their traditional values if outside interests create the necessary conditions for this rejection. The point is that far more is involved here than a mere demonstration of the superiority of industrial civilization. Assumption 2 - The ethnocentrism of the second assumption is obvious. Clearly, tribal cultures could not have survived for millions of years if they did not do a reasonable job of satisfying basic human needs. Assumption 3 - Regarding the third assumption, there is abundant evidence that many of the material accoutrements of industrial civilization may well not be worth their real costs regardless of how appealing they may seem in the short term.
Dutch/Lesson 5. Gesprek 5-1. Notice that John uses the polite forms "alstublieft" (please) and "hartelijk dank" (thank you very much) to a total stranger rather than the more informal "alsjeblieft and "dank je". Grammatica 5-1 ~ Conjugation of verbs; the four moods. Dutch has a relatively simple system of verbs with four moods and eight tenses. The Dutch verb has a few more endings than the English one. We will focus on three forms: Imperative mood. The simplest form is the imperative mood. As in English it is simply the "stem" of the verb: There is a (rather archaic) plural of the imperative, that takes an extra -t: Often imperatives are 'softened' to a kind request or encouragement with modal adverbs like "maar" or "even": In polite address with 'u' often a -t is added, although grammarians don't always consider that an imperative: Indicative mood in the present tense. By far the most important mood is the indicative one and its tenses. We will look at the present tense only here. The first person singular has the same form as the imperative: The third person ("he/she") singular acquires a final -t in the present. In English it gets a -s instead: In contrast to English this also applies to the second person singular: However, the -t ending is lost for the informal jij form, when the word order is reversed, e.g. when asking a question: The Dutch verb has a 'plural' form that generally ends in "-en", which is used for all plural persons and for the infinitive as well: Notice that the vowel usually does not change and therefore we are doubling either consonants or vowels when we go from one syllable to two: Brief exercise. Choose the correct form of the verb, then hover you mouse over the verb to see the right answer. Fill in the correct verb form in the blank. Hover to check the answer. Infinitive mood. The plural form is also the infinitive of the verb: It occasionally takes 'te' as in English 'to' but that is more exceptional in Dutch. The form with "te" is known as the "extended infinitive" and it has its own uses. Some of them are quite comparable to what happens in English: The infinitive can be used as a "noun" where English uses the gerund in -ing. It is always neuter in gender: There is a present participle, it ends in -end(e) rather than -ing. It is used mostly as an adjective: There are forms ending in -ing in Dutch but they are (feminine) "nouns of action" only loosely associated with the verb they derive from, e.g. We will revisit verbal nouns much more extensively in one of the later lessons. Some verbs are monosyllabic, e.g. Subjunctive mood. The subjunctive mood is even rarer in Dutch than it is in English. It only exists in third person singular and (with few exceptions) present tense. It looks like the infinitive minus -n: It is only mentioned here for the sake of completeness. It is only used in a few wishes and recipes. Some irregular verbs. Of course, there are a number of irregular verbs in Dutch, but often they are the same ones as in English. In English "can" and "may" do not take an -s in the third person. In Dutch a similar thing happens: We will revisit irregulars later. Exercise 5.1. Read conversation 5.1 again and underline all verbs. Mark all endings as 0) - none 1) - t and 2) -en and identify in each case why this ending is used. Exercise 5.2. Translate into Dutch: Grammatica 5-2. Clitics revisited. 
As shown before, many personal pronouns have a strong and a weak form: The weak forms me, je, we and ze are used when the emphasis lies on some other part of the sentence. The strong form expresses mild emphasis. Some pronouns do not have clitics, like "u" and "jullie". In the spoken language there are more weak forms than in the written one, e.g. for "he" (ie), "him" ('m) and for "her" (d'r or 'r). In the written language they are often written in full as "hij", "haar" and "hem". The same holds for possessive pronouns. Compare: Again the spoken language has a clearer distinction than the written one. The forms m'n, z'n, and especially d'r are often written as mijn, zijn and haar in formal writing. The form "je" is pretty much the only clitic possessive generally accepted in writing. Woordenschat 5. Quizlet. The vocabulary of this lesson can be trained at Quizlet (27 terms). Progress made. If you have studied this lesson well, you should Cumulative term count
Introduction to Paleoanthropology/Darwinian Thought. Pre-Darwinian Thoughts on Evolution. Throughout the Middle Ages, there was one predominant component of the European world view: stasis. The social and political context of the Middle Ages helps explain this world view: This social and political context, and its world view, provided a formidable obstacle to the development of evolutionary theory. In order to formulate new evolutionary principles, scientists needed to: From the 16th to the 18th century, along with renewed interest in scientific knowledge, scholars focused on listing and describing all kinds of forms of organic life. As attempts in this direction were made, they became increasingly impressed with the amount of biological diversity that confronted them. These scholars included: Therefore, the principle of "fixity of species" that ruled during the Middle Ages was no longer considered valid. In the mid-19th century, Charles Darwin offered a new theory which pushed the debate on evolutionary processes further and marked a fundamental step in their explanation by suggesting that evolution works through natural selection. Charles Darwin (1809-1882). Charles Darwin's life as a scientist began when he took a position as naturalist aboard "HMS Beagle", a ship charting the coastal waters of South America. As the ship circled the globe over a five-year period (1831-1836), Darwin puzzled over the diversity and distribution of life he observed. Observations and collections of materials made during these travels laid the foundation for his life's work studying the natural world. As an example, the "Beagle" stopped for five weeks in the Galapagos archipelago. There Darwin observed an unusual combination of species and wondered how they ended up on these islands. Darwin's observations on the diversity of plants and animals and their particular geographical distribution around the globe led him to question the assumption that species were immutable, established by a single act of creation. He reasoned that species, like the Earth itself, were constantly changing. Life forms colonized new habitats and had to survive in new conditions. Over generations, they underwent transmutation into new forms. Many became extinct. The idea of evolution slowly began to take shape in his mind. In his 1859 publication "On the Origin of Species", Darwin presented some of the main principles that explained the diversity of plants and animals around the globe: adaptation and natural selection. According to him, species were mutable, not fixed; and they evolved from other species through the mechanism of natural selection. Darwin's theory of natural selection. In 1838, Darwin, at 28, had been back from his voyage on the "Beagle" for two years. He read Thomas Malthus's "Essay on Population", which stated that human populations invariably grow until they are limited by starvation, poverty, and death, and realized that Malthus's logic could also apply to the natural world. This realization led Darwin to develop the principle of evolution by natural selection, which revolutionized our understanding of the living world. His theory was published for the first time in 1859 in "On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life". Darwin's Postulates. The theory of adaptation and how species change through time follows three postulates: Examples of adaptation by natural selection.
During his voyage on the "HMS Beagle", Darwin observed a curious pattern of adaptations among several species of finches (now called Darwin's finches) that live on the Galapagos Islands. Several traits of finches went through drastic changes in response to changes in their environment. One example is beak depth: Through natural selection, average morphology (an organism's size, shape and composition) of the bird population changed so that birds became better adapted to their environment. Benefits and disadvantages of evolution. Individual Selection. Adaptation results from competition among individuals, not between entire populations or species. Selection produces adaptations that benefit individuals. Such adaptation may or may not benefit the population or species. In the case of finches' beak depth, selection probably does allow the population of finches to compete more effectively with other populations of seed predators. However, this need not be the case. Selection often leads to changes in behavior or morphology that increase the reproductive success of individuals but decrease the average reproductive success and competitive ability of the group, population, and species. The idea that natural selection operates at the level of the individual is a key element in understanding adaptation. Directional Selection. Instead of a completely random selection of individuals whose traits will be passed on to the next generation, there is selection by forces of nature. In this process, the frequency of genetic variants for harmful or maladaptive traits within the population is reduced while the frequency of genetic variants for adaptive traits is increased. Natural selection, as it acts to promote change in gene frequencies, is referred to as directional selection. Stabilizing Selection. Finches' beaks (example): Large beaks have benefits as well as disadvantages. Birds with large beaks are less likely to survive their juvenile period than birds with small beaks, probably because they require more food to grow. Evolutionary theory prediction: At this point, the population reaches equilibrium with regard to beak size. The process that produces this equilibrium state is called stabilizing selection (a simple numerical sketch of directional and stabilizing selection is given at the end of this chapter). Even though average characteristics of the beak in the population will not change in this situation, selection is still going on. The point to remember here is that when a population does remain static over the long run, it is because it is consistently favored by stabilizing selection, not because selection has stopped. Rate of Evolutionary Change. In Darwin's day, the idea that natural selection could change a chimpanzee into a human, much less that it might do so in just a few million years (which is a brief moment in evolutionary time), was unthinkable. Today, most scientists believe that humans evolved from an apelike creature in only 5 to 10 million years. In fact, some of the rates of selective change observed in contemporary populations are far faster than necessary for natural selection to produce the adaptations that we observe. Therefore the real puzzle is why change in the fossil record seems to have been quite slow. The fossil record is still very incomplete. It is quite likely that some evolutionary changes in the past were rapid, but the sparseness of the fossil record prevents us from detecting them. Darwin's Difficulties. In "On the Origin of Species", Darwin proposed that new species and other major evolutionary changes arise by the accumulation of small variations through natural selection.
This idea was not widely embraced by his contemporaries. Darwin's critics raised a major objection to his theory: the action of selection would inevitably deplete variation in populations and make it impossible for natural selection to continue. Darwin could not convince his contemporaries that evolution occurred through the accumulation of small variations because he could not explain how variation is maintained; neither he nor his contemporaries yet understood the mechanics of inheritance. For most people at the time, including Darwin, many of the characteristics of offspring were thought to be an average of the characteristics of their parents. This phenomenon was believed to be caused by the action of blending inheritance, a model of inheritance that assumes the mother and father each contribute a hereditary substance that mixes, or "blends", to determine the characteristics of the offspring. The solution to these problems required an understanding of genetics, which was not available for another half century. It was not until well into the 20th century that geneticists came to understand how variation is maintained, and Darwin's theory of evolution was generally accepted.
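The contrast between directional and stabilizing selection described in this chapter can be made concrete with a small numerical sketch. The short Python program below is purely illustrative and is not part of the original material: the population size, the hypothetical optimum beak depth, and the amount of heritable variation are invented numbers chosen only for demonstration. It shows a population whose mean beak depth first shifts toward an optimum (directional selection) and then stays close to it (stabilizing selection), even though selection keeps removing extreme individuals each generation.

# Illustrative sketch only: a toy model of selection on a continuous trait
# (beak depth). All values are hypothetical and chosen for demonstration.
import math
import random
import statistics

random.seed(1)

POP_SIZE = 500          # hypothetical population size
GENERATIONS = 30
OPTIMUM = 11.0          # hypothetical "best" beak depth under a new seed supply
SELECTION_WIDTH = 1.5   # how quickly fitness falls off away from the optimum


def fitness(depth: float) -> float:
    """Stabilizing fitness curve: highest at the optimum, falling off smoothly."""
    return math.exp(-((depth - OPTIMUM) ** 2) / (2 * SELECTION_WIDTH ** 2))


# Start the population centred well below the optimum, so early generations
# show directional selection (the mean rises) and later generations show
# stabilizing selection (the mean levels off near the optimum).
population = [random.gauss(9.0, 0.8) for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Parents are drawn in proportion to fitness (differential survival
    # and reproduction), and offspring resemble their parents.
    weights = [fitness(d) for d in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [random.gauss(parent, 0.3) for parent in parents]
    if generation % 5 == 0 or generation == GENERATIONS - 1:
        mean_depth = statistics.mean(population)
        print(f"generation {generation:2d}: mean beak depth = {mean_depth:.2f}")

Because offspring are drawn around their parents' values, the population mean stops changing once it is centred on the optimum, which is exactly the equilibrium described above for stabilizing selection.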
Introduction to Paleoanthropology/Genetics/Introduction. Although Charles Darwin is credited with first describing natural selection, he never explained how or why the process happens. Other scholars tackled these problems. Gregor Mendel (1822-1884). Darwin recognized the importance of individual variation in the process of natural selection, but could not explain how individual differences were transmitted from one generation to another. Although none of the main scientists in the 19th-century debate about evolution knew it, the key experiments necessary to understand how genetic inheritance really worked had already been performed by an obscure monk, Gregor Mendel, who lived near Brno, in what is now the Czech Republic. Between 1856 and 1863, Mendel performed many breeding experiments using the common edible garden pea plant. He meticulously recorded his observations and isolated a number of traits in order to confirm his results. In 1866, Mendel published a report in which he described many features of the mode of inheritance which Darwin was seeking. He proposed the existence of three fundamental principles of inheritance: Segregation; Independent Assortment; Dominance and Recessiveness. Because the basic rules of inheritance Mendel discovered apply to humans as well as to peas, his work is of prime relevance for paleoanthropology and human evolution. Nevertheless, Mendel's work was beyond the thinking of the time; its significance was overlooked and unrecognized until the beginning of the 20th century. Mendelian Genetics. Mendel's research. Mendel observed that his peas had seven easily observable characteristics, with only two forms, or variants, for each trait: After crossing plants, Mendel noted and carefully recorded the number of plants in each generation with a given trait. He believed that the ratio of plant varieties in a generation of offspring would yield clues about inheritance, and he continually tested his ideas by performing more experiments. From his controlled experiments and his large sample of breeding results, Mendel proposed the existence of three fundamental principles of inheritance: Segregation. Mendel began crossing different varieties of purebred plants that differed with regard to a specific trait. For example, pea color. In the experiment: These results suggested an important fact: This is Mendel's first principle of inheritance: the principle of segregation. Independent Assortment. Mendel also made crosses in which two traits were considered simultaneously to determine whether there was a relationship between them. For example: Plant height and seed color. Results of experiments: No relationship between the two traits was found; there was nothing to dictate that a tall plant must have yellow (or green) seeds; therefore, the expression of one trait is not influenced by the expression of the other trait. Based on these results, Mendel stated his second principle of inheritance: the principle of independent assortment. This principle says that the genes that code for different traits assort independently of each other. Dominance and Recessiveness. Mendel also recognized that the trait that was absent in the first generation of offspring plants had not actually disappeared at all - it had remained, but was masked and could not be expressed. To describe the trait that seemed to be lost, Mendel used the term recessive; the trait that was expressed was said to be dominant.
Thus the important principle of dominance and recessiveness was formulated, and it remains today an essential concept in the field of genetics. Implications of Mendel's research. Mendel thought his findings were important, so he published them in 1866. Scientists in the late 19th century, especially botanists studying inheritance, should have understood the importance of Mendel's experiments. But instead, they dismissed Mendel's work, perhaps because it contradicted their own results or because he was an obscure monk. Soon after the publication of his work, Mendel was elected abbot of his monastery and was forced to give up his experiments. His ideas did not resurface until the turn of the 20th century, when several botanists independently replicated Mendel's experiments and rediscovered the laws of inheritance. The role of cell division in inheritance. Mitosis and Meiosis. By the time Mendel's experiments were rediscovered in 1900, some facts were well known: In order for plants and animals to grow and maintain good health, body cells of an organism must divide and produce new cells. Cell division is the process that results in the production of new cells. Two types of cell division have been identified: Mendel and chromosomes. Mendel stated in 1866 that an organism's observed traits are determined by "particles" (later termed genes) acquired from each of the parents. This statement was only fully understood through later research. Between the time of Mendel's initial discovery of the nature of inheritance and its rediscovery at the turn of the century, a crucial feature of cellular anatomy was discovered: the chromosome. In 1902, Walter Sutton, a graduate student at Columbia University, made the connection between chromosomes and the principles of inheritance discovered by Mendel. Molecular genetics. In the first half of the 20th century, geneticists made substantial progress in: By the middle of the 20th century it was known that chromosomes contain two structurally complex molecules: protein and DNA (deoxyribonucleic acid). It was also determined that the particle of heredity postulated by Mendel was DNA, not protein, though exactly how DNA might contain and convey the information essential to life was still a mystery. In the early 1950s, several biologists at Cambridge University, led by Francis Crick and James Watson, made a discovery that revolutionized biology: they deduced the structure of DNA. Through this discovery, we now know how DNA stores information and how this information controls the chemistry of life, and this knowledge explains why heredity leads to the patterns Mendel described in pea plants, and why there are sometimes new variations. Molecular Components. Cells. Cells are the basic units of life in all living organisms. Complex multicellular forms (plants, insects, birds, humans, ...) are composed of billions of cells, all functioning in complex ways to promote the survival of the individual. DNA Molecules. DNA is a complex molecule with an unusual shape: like two strands of a rope twisted around one another (a double helix), each strand being a chain of nucleotides with a backbone of alternating phosphate and sugar molecules. The chemical bases that connect the two strands constitute a code that contains the information to direct the production of proteins. It is at this level that the development of certain traits occurs. Since the DNA in a single chromosome is millions of bases long, there is room for a nearly infinite variety of messages.
DNA molecules have the unique property of being able to produce exact copies of themselves: as long as no errors are made in the replication process, new organisms will contain genetic material exactly like that in ancestral organisms. Genes. A gene is a short segment of the DNA molecule that directs the development of observable or identifiable traits. Thus genetics is the study of how traits are transmitted from one generation to the next. Chromosomes. Each chromosome contains a single, very long DNA molecule that is folded up to fit in the nucleus. Chromosomes are nothing more than long strands of DNA combined with protein to produce structures that can actually be seen under a conventional light microscope. Each kind of organism has a characteristic number of chromosomes, which are usually found in pairs. For example, human cells contain 23 pairs. Cellular processes. DNA Replication. In addition to preserving a message faithfully, hereditary material must be replicable. Without the ability to make copies of itself, the genetic message that directs the activities of living cells could not be spread to offspring, and natural selection would be impossible. Cells multiply by dividing in such a way that each new cell receives a full complement of genetic material. For new cells to receive the essential amount of DNA, it is first necessary for the DNA to replicate. Protein Synthesis. One of the most important functions of DNA is that it directs protein synthesis within the cell. Proteins are complex, three-dimensional molecules that function through their ability to bind to other molecules. Proteins function in myriad ways: they are not only major constituents of all body tissues, but also direct and perform physiological and cellular functions. It is therefore critical that protein synthesis occurs accurately, for, if it does not, physiological development and metabolic activities can be disrupted or even prevented. Evolutionary significance of cellular processes. Meiosis is a highly important evolutionary innovation, since it increases variation in populations at a faster rate than mutation alone can do in asexually reproducing species. Individual members of sexually reproducing species are not genetically identical clones of other individuals. Therefore each individual represents a unique combination of genes that has never occurred before and will never occur again. Genetic diversity is therefore considerably enhanced by meiosis. If all individuals in a population were genetically identical, natural selection and evolution could not occur. Therefore, sexual reproduction and meiosis are of major evolutionary importance because they contribute to the role of natural selection in populations. Synthesizing the knowledge. Darwin believed that evolution proceeded by the gradual accumulation of small changes. But Mendel and the biologists who elucidated the structure of the genetic system around the turn of the century proved that inheritance was fundamentally discontinuous. Yet turn-of-the-century geneticists argued that this fact could not be reconciled with Darwin's idea that adaptation occurs through the accumulation of small variations. These arguments convinced most biologists of the time, and consequently Darwinism was in decline during the early part of the 20th century. In the early 1930s, a team of British and American biologists showed how Mendelian genetics could be used to explain continuous variation.
Their insights led to the resolution of two main objections to Darwin's theory. When their theory was combined with Darwin's theory of natural selection and with modern biological studies, a powerful explanation of organic evolution emerged. This body of theory and the supporting empirical evidence is now called the modern synthesis. Variation maintained. Darwin knew nothing about genetics, and his theory of adaptation by natural selection was framed as a "struggle for existence": there is variation of observed traits that affects survival and reproduction, and this variation is heritable. Also, the blending model of inheritance appealed to 19th-century thinkers, because it explained the fact that for most continuously varying characters, offspring are intermediate between their parents. According to Mendelian genetics, however, the effects of genes may be blended in their expression (in a hypothetical case, a "blue" variant and a "yellow" variant might together produce a green phenotype), but the genes themselves remain unchanged. Thus, when two such green parents mate, they can produce blue, yellow and green offspring. Sexual reproduction produces no blending in the genes themselves, despite the fact that offspring may appear to be intermediate between their parents. This is because genetic transmission involves faithful copying of the genes themselves and reassembling them in different combinations in zygotes. The only blending that occurs takes place at the level of the expression of genes in phenotypes (e.g., beak depth, pea color). The genes themselves remain distinct physical entities (a small worked illustration of this point is given at the end of this chapter). Yet, these facts do not completely solve the problem of the maintenance of variation. Indeed, even if selection tends to deplete genetic variation, there would still be variation of traits due to environmental effects; but without genetic variation there can be no further adaptation. Mutation. Genes are copied with amazing fidelity, and their messages are protected from random degradation by a number of molecular repair mechanisms. However, every once in a while, a mistake in copying is made that goes unrepaired. These mistakes damage the DNA and alter the message that it carries. These changes are called mutations, and they add variation to a population by continuously introducing new genes, some of which may produce novel traits that selection can assemble into adaptations. Although rates of mutation are very slow, this process plays an important role in generating variation. More importantly, this process provides the solution to one of Darwin's dilemmas: the problem of accounting for how variation is maintained in populations. Twentieth-century research has shown that there are two pools of genetic variation: hidden and expressed. Mutation adds new genetic variation, and selection removes it from the pool of expressed variation. Segregation and recombination shuffle variation back and forth between the two pools with each generation. In other words: if individuals with a variety of genotypes are equally likely to survive and reproduce, a considerable amount of variation is protected (or hidden) from selection; and because of this process, a very low mutation rate can maintain variation despite the depleting action of selection. Human evolution and adaptation are intimately linked to life processes that involve cells, replication and decoding of genetic information, and transmission of this information between generations.
Because physical anthropologists are concerned with human evolution, adaptation, and variation, they must have a thorough understanding of the factors that lie at the very root of these phenomena, for it is genetics that ultimately links or influences many of the various subdisciplines of biological anthropology.
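The chapter's argument that particulate (Mendelian) inheritance maintains variation, where blending inheritance would destroy it, can also be illustrated with a small sketch. The following Python program is not part of the original text; the allele labels "A" and "a" and the number of offspring are arbitrary choices made for the example. It simulates a cross between two heterozygous (Aa) parents and shows two things: the dominant and recessive phenotypes appear in roughly a 3:1 ratio, and the allele frequencies themselves remain unchanged, so genetic variation is preserved rather than blended away.

# Illustrative sketch only: a toy Mendelian monohybrid cross (Aa x Aa).
# "A" is treated as dominant over "a"; all numbers are arbitrary.
import random

random.seed(1)

OFFSPRING = 10_000
phenotypes = {"dominant": 0, "recessive": 0}
allele_counts = {"A": 0, "a": 0}

for _ in range(OFFSPRING):
    # Segregation: each heterozygous parent passes on one allele at random.
    genotype = (random.choice("Aa"), random.choice("Aa"))
    for allele in genotype:
        allele_counts[allele] += 1
    # Dominance: a single copy of "A" is enough for the dominant phenotype.
    if "A" in genotype:
        phenotypes["dominant"] += 1
    else:
        phenotypes["recessive"] += 1

print("phenotype counts:", phenotypes)            # roughly 3 : 1
total = sum(allele_counts.values())
frequencies = {a: round(n / total, 3) for a, n in allele_counts.items()}
print("allele frequencies:", frequencies)         # both stay near 0.5

Although about three quarters of the offspring show the dominant phenotype, half of the alleles in the offspring generation are still "a"; the recessive variant is hidden in heterozygotes rather than lost, which is how a low mutation rate can maintain variation despite the depleting action of selection.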
Introduction to Paleoanthropology/Primates/Modern. The Classification System. In order to understand the exact place of humanity among the animals, it is helpful to describe the system used by biologists to classify living things. The basic system was devised by the 18th-century Swedish naturalist Carl von Linné. The purpose of the Linnean system was simply to create order in the great mass of confusing biological data that had accumulated by that time. Von Linné classified living things on the basis of overall similarities into small groups or species. On the basis of homologies, groups of like species are organized into larger, more inclusive groups, called genera. Through careful comparison and analysis, von Linné and those who have come after him have been able to classify specific animals into a series of larger and more inclusive groups up to the largest and most inclusive of all, the animal kingdom. The Primate Order. Primates are only one of several mammalian orders; others include rodents, carnivores, and ungulates. As such, primates share a number of features with other mammals: Species. In modern evolutionary biology, the term species is usually defined as a population or group of organisms whose members look more or less alike and are potentially capable of interbreeding to produce fertile offspring. Practically speaking, individuals are usually assigned to a species based on their appearance, but it is their ability to interbreed that ultimately validates (or invalidates) the assignment. Thus, no matter how similar two populations may look, if they are incapable of interbreeding, they must be assigned to different species. Populations within a species that are quite capable of interbreeding but may not regularly do so are called subspecies. Evolutionary theory suggests that species evolve from these populations or subspecies through the accumulation of differences in the gene pools of the separated groups. Primate Characteristics. Although living primates are a varied group of animals, they do have a number of features in common. These features are displayed in varying degrees by the different kinds of primates: in some they are barely detectable, while in others they are greatly elaborated. All are useful in one way or another to arboreal (or tree-dwelling) animals, although they are not essential to life in trees. Primate Sense Organs. The primates' adaptation to their way of life in the trees coincided with changes in the form and function of their sensory apparatus: the senses of sight and touch became highly developed, and the sense of smell declined. Catching insects in trees, as the early primates did and as many still do, demands quickness of movement and the ability to land in the right place without falling. Thus, they had to be adept at judging depth, direction, distance and the relationship of objects in space. Primates' sense of touch also became highly developed as a result of arboreal living. An effective feeling and grasping mechanism was useful to them in grabbing their insect prey, and in preventing them from falling and tumbling while moving through the trees. The Primate Brain. By far the most outstanding characteristic in primate evolution has been the enlargement of the brain among members of the order. Primate brains tend to be large, heavy in proportion to body weight, and very complex. Reasons for this important change in brain size are many: Primate Teeth.
Although they have added foods other than insects to their diets, primates have retained less specialized teeth than other mammals. The evolutionary trend for primate dentition has generally been toward economy, with fewer, smaller, more efficient teeth doing more work. Primate Skeleton. A number of factors are responsible for the shape of the primate skull as compared with that of most other mammals: changes in dentition, changes in the sensory organs of sight and smell, and increase in brain size. As a result, primates have a more humanlike face than other mammals. The upper body is shaped so as to allow greater maneuverability of the arms, permitting them to swing sideways and outward from the trunk of the body. The structural characteristics of the primate foot and hand make grasping possible; the digits are extremely flexible, the big toe is fully opposable to the other digits in most species, and the thumb is opposable to the other digits to varying degrees. The flexible, unspecialized primate hand was to prove a valuable asset for future evolution of this group. It allowed early hominines to manufacture and utilize tools and thus embark on the new and unique evolutionary pathway that led to the revolutionary ability to adapt through culture. Types of Living Primates. Prosimians. The most primitive of the primates are represented by the various prosimians, including the lemurs and the lorises, which are more similar anatomically to earlier mammalian ancestors than are other primates (monkeys, apes, humans). They tend to exhibit certain more ancestral features, such as a more pronounced reliance on olfaction (sense of smell). Their greater olfactory capabilities are reflected in the presence of a moist, fleshy pad at the end of the nose and in a relatively long snout. Lemurs and lorises represent the same general adaptive level. Both groups exhibit good grasping and climbing abilities and a fairly well developed visual apparatus, although their vision is not completely stereoscopic, and color vision may not be as well developed as in anthropoids. Lemurs. At present, lemurs are found only on the island of Madagascar and adjacent islands off the east coast of Africa. As the only nonhuman primates naturally found there, they diversified into numerous and varied ecological niches without competition from monkeys and apes. Thus, the 52 surviving species on Madagascar represent an evolutionary pattern that has vanished elsewhere. Lemurs range in size from 5 inches to a little over two feet. While the larger lemurs are diurnal and exploit a wide variety of dietary items (leaves, fruits, buds, bark), the smaller forms (mouse and dwarf lemurs) are nocturnal and insectivorous. Lemurs display considerable variation regarding numerous other aspects of behavior. While many are primarily arboreal, others (e.g. ring-tailed lemur) are more terrestrial. Some arboreal species are quadrupeds, and others are vertical clingers and leapers. Lorises. Lorises are similar in appearance to lemurs, but were able to survive in mainland areas by adopting a nocturnal activity pattern at a time when most other prosimians became extinct. Thus, they were (and are still) able to avoid competition with more recently evolved primates (diurnal monkeys). There are five loris species, all of which are found in tropical forest and woodland habitats of India, Sri Lanka, Southeast Asia and Africa.
Locomotion in lorises is a slow, cautious climbing form of quadrupedalism, and flexible hip joints permit suspension by the hind limbs while the hands are used in feeding. Some lorises are almost entirely insectivorous; others supplement their diet with various combinations of fruits, leaves, gums, etc. Tarsiers. There are seven recognized species, all restricted to island areas in Southeast Asia. They inhabit a wide range of forest types, from tropical forest to backyard gardens. They are nocturnal insectivores, leaping onto prey from lower branches and shrubs. They appear to form stable pair bonds, and the basic tarsier social unit is a mated pair and their young offspring. Tarsiers present a complex blend of characteristics not seen in other primates. They are unique in that their enormous eyes, which dominate much of the face, are immobile within their sockets. To compensate for this inability to move the eyes, tarsiers are able to rotate their heads 180°, like owls. Simians. Although there is much variation among simians (also called anthropoids), there are certain features that, when taken together, distinguish them as a group from prosimians (and other mammals). Monkeys. Approximately 70 percent of all primates (about 240 species) are monkeys, although it is frequently impossible to give precise numbers of species because the taxonomic status of some primates remains in doubt and there are constantly new discoveries. Monkeys are divided into two groups (New World and Old World) separated by geographical area as well as by several million years of separate evolutionary history. New World monkeys exhibit a wide range of size, diet, and ecological adaptation. In size, they vary from tiny marmosets and tamarins to the howler monkey. Almost all are exclusively arboreal; most are diurnal. Although confined to trees, New World monkeys can be found in a wide range of arboreal environments throughout most forested areas in Southern Mexico and Central and South America. One of the characteristics distinguishing New World monkeys from Old World monkeys is the shape of the nose: they have broad noses with outward-facing nostrils. Old World monkeys display much more morphological and behavioral diversity than New World monkeys. Except for humans, they are the most widely distributed of all living primates. They are found throughout sub-Saharan Africa and Southern Asia, ranging from tropical jungle habitats to semiarid desert and even to seasonally snow-covered areas in northern Japan. Most are quadrupedal and primarily arboreal. Apes and humans. This group is made up of several families. They differ from monkeys in numerous ways: Orangutans. Found today only in heavily forested areas on the Indonesian islands of Borneo and Sumatra, orangutans are slow, cautious climbers whose locomotor behavior can best be described as "four-handed", a tendency to use all four limbs for grasping and support. Although they are almost completely arboreal, they do sometimes travel quadrupedally on the ground. They are very large animals with pronounced sexual dimorphism: males weigh considerably more than females. Gorillas. The largest of all living primates, gorillas are today confined to forested areas of western and equatorial Africa. There are four generally recognized subspecies: Western Lowland Gorilla, Cross River Gorilla, Eastern Lowland Gorilla, and Mountain Gorilla. Gorillas exhibit strong sexual dimorphism.
Because of their weight, adult gorillas, especially males, are primarily terrestrial and adopt a semiquadrupedal (knuckle-walking) posture on the ground. All gorillas are almost exclusively vegetarian. Common Chimpanzees. The best-known of all nonhuman primates, Common Chimpanzees are found in equatorial Africa. In many ways, they are structurally similar to gorillas, with corresponding limb proportions and upper body shape, because of their similar locomotion when on the ground (quadrupedal knuckle-walking). However, chimps spend more time in trees; when on the ground, they frequently walk bipedally for short distances when carrying food or other objects. They are highly excitable, active and noisy. Common Chimpanzee social behavior is complex, and individuals form lifelong attachments with friends and relatives. They live in large, fluid communities of as many as 50 individuals or more. At the core of a community is a group of bonded males. They act as a group to defend their territory and are highly intolerant of unfamiliar chimps, especially nongroup males. Bonobos. Found only in an area south of the Zaire River in the Democratic Republic of Congo, Bonobos (also called Pygmy Chimpanzees) have a strong resemblance to Common Chimpanzees, but are somewhat smaller. Yet they exhibit several anatomical and behavioral differences. Physically, they have a more linear body build, longer legs relative to the arms, a relatively smaller head, and a dark face from birth. Bonobos are more arboreal than Common Chimpanzees, and they appear to be less excitable and aggressive. Like Common Chimpanzees, Bonobos live in geographically based, fluid communities, and they exploit many of the same foods, including occasional meat derived from killing small mammals. But they are not centered around a group of closely bonded males. Instead, male-female bonding is more important than in Common Chimpanzees.
Introduction to Paleoanthropology/Primates/Humans. Information about primate behavior and ecology plays an integral role in the story of human evolution. Primate social behavior. Over the past four decades, primatologists have made prolonged close-range observations of monkeys and apes in their natural habitats, and we are discovering much about social organization, learning ability, and communication among our closest relatives (chimpanzees and gorillas) in the animal kingdom. In particular, we are finding that a number of behavioral traits that we used to think of as distinctively human are found to one degree or another among other primates, reminding us that many of the differences between us and them are differences of degree, rather than kind. The Group. Primates are social animals, living and travelling in groups that vary in size from species to species. In most species, females and their offspring constitute the core of the social system. Among chimps, the largest organizational unit is the community, composed of 50 or more individuals. Rarely, however, are all of these animals together at a single time. Instead they are usually ranging singly or in small subgroups consisting of adult males together, females with their young, or males and females together with their young. In the course of their travels, subgroups may join forces and forage together, but sooner or later these will break up into smaller units. Dominance. Many primate societies are organized into dominance hierarchies that impose some degree of order within groups by establishing parameters of individual behavior. Although aggression is frequently a means of increasing one's status, dominance usually serves to reduce the amount of actual physical violence. Not only are lower-ranking animals unlikely to attack or even threaten a higher-ranking one, but dominant animals are also frequently able to exert control simply by making a threatening gesture. Individual rank or status may be measured by access to resources, including food items and mating partners. An individual's rank is not permanent and changes throughout life. It is influenced by many factors, including sex, age, level of aggression, amount of time spent in the group, intelligence, etc. In species organized into groups containing a number of females associated with one or several adult males, the males are generally dominant to females. Within such groups, males and females have separate hierarchies, although very high-ranking females can dominate the lowest-ranking males (particularly young ones). Yet there are many exceptions to this pattern of male dominance. Aggression. Within primate societies, there is an interplay between affiliative behaviors that promote group cohesion and aggressive behaviors that can lead to group disruption. Conflict within a group frequently develops out of competition for resources, including mating partners and food items. Instead of actual attacks or fighting, most intragroup aggression occurs in the form of various signals and displays, frequently within the context of the dominance hierarchy. The majority of such situations are resolved through various submissive and appeasement behaviors. But conflict is not always resolved peacefully. Individual interaction. To minimize actual violence and to defuse potentially dangerous situations, there is an array of affiliative, or friendly, behaviors that serve to reinforce bonds between individuals and enhance group stability.
Common affiliative behaviors include reconciliation, consolation, and simple interactions between friends and relatives. Most such behaviors involve various forms of physical contact including touching, hand holding, hugging, and, among chimpanzees, kissing. In fact, physical contact is one of the most important factors in primate development and is crucial in promoting peaceful relationships in many primate social groups. One of the most notable primate activities is grooming, the ritual cleaning of another animal's coat to remove parasites, shreds of grass or other matter. Among bonobos and chimps, grooming is a gesture of friendliness, submission, appeasement or closeness. The mother-infant bond is the strongest and most long-lasting in the group. It may last for many years, commonly for the lifetime of the mother. Play. Frequent play activity among primate infants and juveniles is a means of learning about the environment, testing strength, and generally learning how to behave as adults. For example, chimpanzee infants mimic the food-getting activities of their mothers, "attack" dozing adults, and "harass" adolescents. Communication. Primates, like many animals, vocalize. They have a great range of calls that are often used together with movements of the face or body to convey a message. Observers have not yet established the meaning of all the sounds, but a good number have been distinguished, such as warning calls, threat calls, defense calls, and gathering calls. Much of the communication takes place by the use of specific gestures and postures. Home range. Primates usually move about within circumscribed areas, or home ranges, which are of varying sizes, depending on the size of the group and on ecological factors, such as availability of food. Home ranges often shift seasonally. The distance traveled by a group in a day varies, but may include many miles. Within this home range is a portion known as the core area, which contains the highest concentration of predictable resources (water, food) and where the group is most frequently found (with resting places and sleeping trees). The core area can also be said to be a group's territory, and it is this portion of the home range that is usually defended against intrusion by others. Among primates in general, the clearest territoriality appears in forest species, rather than in those that are terrestrial in their habits. Tool use. A tool may be defined as an object used to facilitate some task or activity. A distinction must be made between simple tool use and tool making, which involves deliberate modification of some material for its intended use. In the wild, gorillas do not make or use tools in any significant way, but chimpanzees do. Chimps modify objects to make them suitable for particular purposes. They can also pick up and even prepare objects for future use at some other location, and they can use objects as tools to solve new and novel problems. Primates and human evolution. Studies of monkeys and apes living today (especially those most closely related to humans: gorillas, bonobos and chimpanzees) provide essential clues in the reconstruction of adaptations and behavior patterns involved in the emergence of our earliest ancestors. These practices have several implications: To produce a tool, even a simple tool, based on a concept is an extremely complex behavior. Scientists previously believed that such behavior was the exclusive domain of humans, but now we must question this very basic assumption.
At the same time, we must be careful about how we reconstruct this development. Primates have changed in various ways from earlier times, and undoubtedly certain forms of behavior that they now exhibit were not found among their ancestors. Also, it is important to remember that present-day primate behavior shows considerable variation, not just from one species to another, but also from one population to another within a single species. Primate fossils. The study of early primate fossils tells us something we can use to interpret the evolution of the entire primate line, including ourselves. It gives us a better understanding of the physical forces that caused these primitive creatures to evolve into today's primates. Ultimately, the study of these ancient ancestors gives us a fuller knowledge of the processes through which insect-eating, small-brained animals evolved into the toolmaker and thinker that is recognizably human. Rise of the primates. For animals that have often lived where conditions for fossilization are generally poor, we do have a surprisingly large number of primate fossils. Some are relatively complete skeletons, while most are teeth and jaw fragments. Primates arose as part of a great adaptive radiation that began more than 100 million years after the appearance of the first mammals. The reason for this late diversification of mammals was that most ecological niches that they have since occupied were either preempted by reptiles or were nonexistent until the flowering plants became widespread beginning about 65 million years ago. By 65 million years ago, primates were probably beginning to diverge from other mammalian lineages (such as those which later led to rodents, bats and carnivores). For the period between 65 and 55 million years ago (the Paleocene), it is extremely difficult to identify the earliest members of the primate order. Eocene primates. The first fossil forms that are clearly identifiable as primates appeared during the Eocene (55-34 million years ago). From this period have been recovered a wide variety of primates, which can all be called prosimians. Lemur-like adapids were common in the Eocene, as were species of tarsier-like primates. These first primates were insect eaters and their characteristics developed as an adaptation to the initial tree-dwelling environment. This time period exhibited the widest geographical distribution and broadest adaptive radiation ever displayed by prosimians. In recent years, numerous finds from the Late Eocene (36-34 million years ago) suggest that members of the adapid family were the most likely candidates as ancestors of early anthropoids. Oligocene primates. The center of action for primate evolution after the Eocene is confined largely to the Old World. Only on the continents of Africa and Eurasia can we trace the evolutionary development of apes and hominids due to crucial geological events, particularly continental drift. During the Oligocene (34-23 million years ago), a great deal of diversification among primates occurred. The vast majority of Old World primate fossils for this period comes from just one locality: the Fayum area of Egypt. From Fayum, 21 different species have been identified. Miocene Primates. A great abundance of hominoid fossil material has been found in the Old World from the Miocene (23-7 million years ago). Based on size, these fossils can be divided into two major subgroupings: small-bodied and large-bodied hominoids.
The remarkable evolutionary success represented by the adaptive radiation of large-bodied hominoids is shown in its geographical range from Africa to Eurasia. Large-bodied hominoids first evolved in Africa around 23 million years ago. Then they migrated into Eurasia, dispersed rapidly, and diversified into a variety of species. After 14 million years ago, we have evidence of widely distributed hominoids in many parts of Asia and Europe. The separation of the Asian large-bodied hominoid line from the African stock (leading ultimately to gorillas, chimps and humans) thus would have occurred at about that time. Miocene apes and Human Origin. The large-bodied African hominoids appeared by 16 million years ago and were widespread even as recently as 8 million years ago. Based on fossils of teeth and jaws, it was easy to postulate some sort of relationship between them and humans. A number of features, such as the position of the incisors, reduced canines, the thick enamel of the molars, and the shape of the tooth row, seemed to point in a somewhat human direction. Although the African hominoids display a number of features from which hominine characteristics may be derived, and some may occasionally have walked bipedally, they were much too apelike to be considered hominines. Nevertheless, existing evidence allows the hypothesis that apes and humans separated from a common evolutionary line sometime during the Late Miocene, and some fossils, particularly the African hominoids, do possess traits associated with humans. Not all African apes evolved into hominines. Those that remained in the forests and woodlands continued to develop as arboreal apes, although ultimately some of them took up a more terrestrial life. These are the bonobos, chimpanzees and gorillas, who have changed far more from the ancestral condition than have the still arboreal orangutans.
European History/Renaissance Europe. The Italian Renaissance of the 14th and 15th centuries spread through the rest of Europe, representing a time when Europe sought knowledge from the ancient world and moved out of the Dark Ages. It brought a renewed interest in science and experimentation, and a focus on the importance of living well in the present as opposed to the afterlife promoted by the Church. The Renaissance brought on an explosion in art, poetry, and architecture. New techniques and styles developed as these art forms moved away from the colder and darker styles of the Middle Ages. In this view, the period represents Europe emerging from a long era of backwardness and the rise of trade and exploration. The Italian Renaissance is often labeled as the beginning of the "modern" epoch. However, it is important to recognize the countless modern institutions that did have their roots in the Middle Ages, such as nation-states, parliaments, limited government, bureaucracies, and regulation of goods and services. Origins. In the wake of the Black Death, which decreased in incidence in 1351, faith in the power and significance of the church declined. The multitude of deaths (approximately 25-30 million between 1347 and 1351) signaled the need for a revival in art, education, and society in general. A large decrease in the number of workers led to demands for higher wages, and thus uprisings were staged in several countries throughout Europe, particularly Germany, France, and Italy. The Renaissance began in northern Italy in the early 1300s. Although it was not inevitable, several factors—nationalism (due to an increased pride in the days of early Rome), the Crusades, revival of trade—helped to bring about reform. Throughout Italy (Florence, Genoa, Rome, Naples, and Milan in particular) scholars revived their studies of early Greek and Latin literature, derived from archived manuscripts. Upon examining these early works, they realized that culture was essential to living a meaningful life, and education (especially history) was important in understanding both the world of yesterday and contemporary times, as well as gaining insight into the future. Thus, Italian scholars called for a 'Renaissance' (French for rebirth) in European education and culture. Social Order and Cultural Change. Florentine Social Divisions. The denizens of Florence fell into one of four main social classes. These included the "grandi" (the great), the rulers of the city; the "popolo grosso" (big people), the capitalist merchants (these challenged the "grandi" for power); the smaller businesspeople; and the "popolo minuto" (little people), the lower economic classes, such as the paupers, who, despite constituting a third of the Florentine population, had no wealth at all. This division of society was prone to conflict, and eventually resulted in the successful Ciompi Revolt of 1378. The Ciompi Revolt was an uprising of the poor, who revolted because of the constant feuds between the "grandi" and the "popolo grosso", the anarchy from the Black Death, and the collapse of banks, which made the "popolo minuto" even poorer. The Ciompi Revolt led to a four-year reign by the "popolo minuto"; stability returned to Florence when Cosimo de' Medici came to power in 1434. The Household. The plague resulted in more favorable working positions for women, although the overall participation of women in public life varied with class as well as region.
Married couples worked together frequently, and most men and women remarried quickly if their spouse died. Women's Education. Prior to the Protestant Reformation, Catholic convents were the primary mode of education for unmarried women. The focus of studies at these institutions was religion and spirituality. As Catholicism declined, women had to find new venues of learning, especially in Protestant countries. Many wrote and studied from their homes. When women were able to receive formal education, it was often not for the sake of their own intellectual development; rather, it was so that they were better able to educate their children. The Querelle de Femmes, or the Woman Question, was the debate surrounding the intellectual equality of women and men. Some believed that women's secondary status was a direct result of their lack of access to education and opportunity. However, others believed that women were inherently inferior to their male counterparts. They justified these beliefs with the claims that men were physically stronger than women, God created Adam first, and Eve ate the apple that led to humanity's banishment from the Garden of Eden. Also, during the Scientific Revolution, inaccurate portrayals of female anatomy were used to justify claims of male intellectual superiority. This topic would continue to be debated into the eighteenth century. Even as Enlightenment thinkers preached the virtue of equality, their views on the rights of women often diverged. One such philosopher, François Poullain de la Barre, wrote, "It would be a pleasant thing indeed to see a lady serve as a professor… or playing the part of an attorney… or leading an army… or speaking before states and princes as the head of an embassy." He also believed that "the mind has no sex," meaning that men and women shared the same intellectual capabilities. However, thinkers such as Rousseau disagreed, insisting that women were, by nature, less rational than men. This is striking because many of the people who conceived of and developed modern ideas of equality and freedom were, in fact, not advocates of equality for all, but only for a select group of the population. Witch Trials. Though the European witch trials began in the early sixteenth century and gained traction throughout the Protestant Reformation, the religious ideas utilized during the trials had been seen much earlier. In 1374, Pope Gregory XI declared that demons aided the practice of magic; roughly a century later, Pope Innocent VIII issued Desiring with the Greatest Ardour, which condemned witchcraft. By 1517, the beginning of the Reformation, witchcraft was widely condemned by the Catholic Church. Starting in 1560, the trials spread throughout Europe. Economists Peter Leeson and Jacob Russ argue that the Protestants used the witch trials as a form of coercion because Catholics and Protestants both wanted to convert Europeans to their religions. The regions with the highest concentrations of the trials were also populated by Protestants. Germany, the birthplace of Protestantism, had 40% of the witch trials. Protestantism also spread to Switzerland, France, England, and the Netherlands, and these regions contained 35% of the trials. Meanwhile, Catholic nations—Spain, Italy, Portugal, and Ireland—contained only six percent. In addition to the religious dimension, the European witch hunts of the early modern era were also gendered. The witch hunts in Europe targeted independent women.
Overall, eighty percent of all accused were women, and, in Russia specifically, ninety to ninety-five percent of the accused were women. In contrast, witch hunters were disproportionately male. Most accusations were against independent women, often older widows, who lacked the support of a husband or family. Witch hunts also targeted gender non-conforming women and others who lived outside the social norms. Women who gathered in groups were viewed as a threat to authorities. The process of finding and executing witches has been described as gender cleansing. According to Heinrich Kramer's Malleus Maleficarum (Hammer of Witches), published in 1487, women were inclined towards Satan by nature. This sentiment was widely used as justification for the witch hunts and executions. The religious scriptures followed by witch-hunting groups also deemed the female body impure and vulnerable to evil. Many of the accused were tortured into false confessions. In Europe, hangmen would carry out the torture and inspect the accused for physical signs of witchcraft. In her book Caliban and the Witch, Silvia Federici writes, "It is generally agreed that the witch-hunt aimed at destroying the control that women had exercised over their reproductive function and served to pave the way for the development of a more oppressive patriarchal regime" (Federici 14). Underclass. At the beginning of the Renaissance, the boundary between the poor and criminals was very thin. Larger cities frequently had problems with organized gangs. The so-called "decent society" treated the marginal elements of society with great suspicion and hatred. Women were featured prominently in the underclass, and many poor women found prostitution their only option. Hard Times for Business. The Hundred Years' War resulted in the various governments in Europe borrowing a great deal of money that they could not pay back. Thus, merchants were less likely to take risks, and instead invested in government bonds. The result of this was an overall decrease in trade. The Birth of Humanism. At the time, Italy was the center of culture in Europe. Middle-class writers were supported by noble patronage, and as a result, at the beginning of the Renaissance literature blossomed alongside classical revival. This resulted in the rise of humanism, an intellectual movement that advocated the study of history and literature as the chief means of identifying with the glories of the ancient world. Humanism advocated classical learning and active participation of the individual in civic affairs. Renaissance scholars advocated the concept of "returning to the sources," attempting to reconcile the disciplines of the Christian faith with ancient learning. In addition, the concept of civic humanism arose, which advocated participation in government. Civilization was inspired by the writings of Roman emperors, and by the end of the 1400s intellectuals had a command of the Latin language. In the 1440s, Johannes Gutenberg created a printing press with movable type. This revolution in communication greatly assisted in the spread of Renaissance ideals throughout Europe, allowing the ideas to be printed for mass circulation, for the first time in history. Gutenberg was the first European to create a printing press with movable type, though the Chinese had developed this technology long before. The Renaissance conception of life and man's role on earth was more secular than in the past, but in no way was it nonreligious.
It was now believed that God holds people above everything else, and that the greatest thing about being human is the human's free will to choose. People were celebrated, as Renaissance scholars argued that men are made in God's image, and that we should celebrate our God-given talents and abilities. People believed that life on Earth was intrinsically valuable, and that citizens should strive to be the best that they can. The emphasis of the Renaissance was on the individual rather than the collective. Italian Humanists. Francesco Petrarch (1304–1374) was an Italian scholar, poet, and early humanist. In his sonnets, he created the image of real people with personality, debunking the typical Medieval conceptions and stereotypes of people. Giovanni Boccaccio (1313–1375) wrote "The Decameron", a short story about the lives of people living during the Black Death. The book focused on people's responses to the plague rather than God's wrath. In this sense, the book was not about religion, but rather about people, a relatively new concept at the time. Pico della Mirandola (1463–1494) was an Italian Renaissance humanist philosopher and scholar. He authored the "Oration on the Dignity of Man," which has become known as the "Manifesto of the Renaissance." In this, he explained that man has unlimited potential, and with his free will can be anything he wants to be. He argued that man should make use of his abilities and not waste them. Finally, he explained that people should live their life with virtue, or the quality of being a man - shaping their own destiny, using all of their opportunities, and working aggressively through life. Northern Humanists. Sir Thomas More was an English lawyer, writer, and politician. He was a devout Catholic who wrote "Utopia", a novel that depicted Christian Humanist ideals producing an ideal fictional society. In his utopia, there was no crime, poverty, nor war. Much of the novel is a conversation that criticizes European practices, especially capital punishment. Desiderius Erasmus was a Dutch humanist and theologian. He was also a Catholic. In his "Handbook of a Christian Knight", he argued that through education, society can be reformed in the pious Christian model. He believed faith, good works, and charity were the core Christian values, and that elaborate ceremonies were superfluous. In his "The Praise of Folly", Erasmus claimed that the true Christian table of virtues, namely modesty, humility, and simplicity, had been replaced by a different, perverted value system of opulence, power, wealth, and so on. Arts. Renaissance art tended to focus on the human body with accurate proportions, and the most common subjects of art were religion, mythology, portraits, and the use of classical (Greco-Roman) subjects. Artisans of the Renaissance used oil paint to add shadow and light, and the use of the vanishing point in art became prominent during this time. Artists of the Renaissance depended on patronage, or financial support from the wealthy. Leonardo da Vinci (1452–1519). Leonardo da Vinci of Florence was known as one of the great masters of the High Renaissance, as a result of his innovations in both art and science. Leonardo is often viewed as the archetype of the "Renaissance Man" or "polymath" because of his expertise and interest in many different areas, including art, science, music, mechanics, architecture and the arts of war and philosophy. Leonardo is best known for his paintings, the most famous of which are The Last Supper and The Mona Lisa. 
There remain about fifteen paintings attributed reliably to Leonardo, and many others to his pupils and imitators. He also left several important drawings of which the "Vitruvian Man" is the most famous and probably the most reproduced drawing in the world. Leonardo left many note books of studies of many subjects, often profusely illustrated. Much of his work was intended for publication but only a small percentage of his work was published, relating to art and mathematics. In science, Leonardo practised meticulous observation and documentation. He dissected thirty corpses in order to study human anatomy and produced detailed drawings of the human skeletal, muscular, and internal organ systems including human fetuses. He made discoveries in anatomy, meteorology and geology, hydraulics, and aerodynamics. This led to his devising of many ingenious plans, including an underwater diving suit and non-functioning flying machines. He also sketched plans for elaborate killing machines. Many of these projects have proved impossible to create. On the other hand, he successfully built a mechanical lion that walked, roared and opened its chest to produce a bunch of flowers. Michelangelo (1475–1564). Michelangelo was one of the most prominent and important artists of the Renaissance, supported by the Medici family of Florence. Michelangelo's monumental sculpture of David preparing to kill the giant Goliath with his rock and sling is the perfect confirmation of the return to a humanistic appreciation of physical beauty from the austere medieval conception of emaciated, self-flagellated saints. Michelangelo also adorned the ceiling of the Vatican's Sistine Chapel with his "Creation of Adam" and other scenes and painted the "Last Judgment" on one wall of the Sistine Chapel in present day Vatican City. Raphael (1483-1520). Raphael was a famous painter and architect during the Renaissance. Some of Raphael's famous paintings include The School Of Athens, The Nymph Galatea, and Portrait of Pope Leo X with two Cardinals. "The Prince". "The Prince", a political treatise by the Florentine writer Niccolò Machiavelli (1464–1527), was an essential work of the Renaissance. For the first time, politics was presented as an objective science. Machiavelli recorded successful rulers and then drew conclusions without judgments. In other words, Machiavelli's politics were divorced from morality and religion. Machiavelli's research showed that a successful leader of a nation acted in a number of ways: Northern Renaissance vs. Italian Renaissance. The Southern Renaissance in Italy occurred earlier, from about 1300 to 1600, while the Northern Renaissance occurred later, ending in about 1630. The Southern Renaissance emphasized pagan and Greco-Roman ideals, and as a result was considerably more secular, while the Northern Renaissance advocated "Christian" humanism, or humility, tolerance, focus on the individual, and the importance of earnest life on earth. While the Southern Renaissance emphasized art and culture, the Northern Renaissance emphasized the sciences and new technology. This failed to occur in the south primarily because the Roman Catholic Church stunted learning and the sciences. While the Northern Renaissance was religiously diverse, with the rise of Protestantism and a great deal of religious division, the Southern Renaissance was entirely Roman Catholic. The Southern Renaissance saw far fewer universities, while the Northern Renaissance saw more universities and education. 
Also, Northern Renaissance humanists pushed for social reform based on Christian ideals. The New Monarchies. With the Renaissance came the rise of new monarchs. These new monarchs were kings who took responsibility for the welfare of all of society. They centralized power and consolidated authority - the kings now controlled tariffs, taxes, the army, many aspects of religion, and the laws and judiciary. In the way of the rise of new monarchs stood the church and nobles, who feared losing their power to the king. In addition, these new monarchs needed money, and they needed to establish a competent military rather than mercenaries. The middle class allied themselves with the new monarchs. The monarchs desired their support because their money came from trade, and this trade provided a great source of taxable revenue. The middle class supported the monarchs because they received the elimination of local tariffs, as well as peace and stability. France. From the tenth century onwards France had been governed by the Capetian dynasty. Although the family ruled over what might be, in theory, considered the most powerful country in Europe, the French monarchs had little control over their vassals, and many parts of France functioned as though they were independent states. The most powerful vassals of the French kings were the Plantagenet dynasty of England, who, through their Angevin ancestry, ruled large parts of western France. The ensuing conflicts, known as the Hundred Years War, helped to solidify the power of the French monarchs over their country. In France, the Valois dynasty came to the throne in 1328. Charles VII expelled the English and lowered the church's power under the state in 1422. Louis XI expanded the French state, laying the foundations for absolutism, in 1461. England. Edward IV began the restoration of royal authority, but the strengthening of the crown gained momentum only after the Tudor family came to power. Henry VII manipulated the Parliament to make it a tool of the king. He created the Royal court and the Star Chamber, in which royalty had the power to torture while questioning; this legal system allowed the king to limit the power of the aristocracy. He also promoted trade in order to gain the support of the middle class. His son, Henry VIII, took this process still further when, as a result of his desire to have a male heir, he founded the Anglican Church in England and broke away from the Catholic Church (this was also known as the reformation) After Henry married Catherine of Aragon, she failed to produce a male heir, and Henry desired to divorce her in order to marry a new lady, Anne Boleyn, who he hoped would be able to produce a son. The Catholic Church strictly prohibited divorce, however, and Henry found that the only way to sever his marriage was to separate from the Pope's jurisdiction. As a result, he withdrew England from the Catholic Church, establishing the Church of England, and in the First Act of Supremacy he established the monarch of England as the head of the Church. Spain. With the success of the Spanish "reconquista", Spain expelled the Jews and Muslims. The marriage of Queen Isabella I of Castile and King Ferdinand II of Aragon, the Catholic monarchs (Spanish:"Los Reyes Católicos"), was the final element which unified Spain. They revived the Spanish Inquisition to remove the last of the Jews. Holy Roman Empire. In the Holy Roman Empire, which occupied Austria and Bohemia, the Hapsburg Dynasty began. 
The Empire expanded its territory, acquiring Burgundy and attempting to unite Germany. The emperor's son married the daughter of the Spanish monarchs Ferdinand and Isabella; their son, Charles V, became heir to Spain, Austria, and Burgundy. Ottoman Turks. The Ottoman Turks were Muslims from Asia Minor who gradually conquered the old Byzantine Empire, completed in 1453 with the fall of Constantinople (renamed to Istanbul). The Rise of Court Life. New monarchs began to utilize the court during this time to increase their power by setting themselves apart from the lay people and by allowing the king to control the nobles. The court became the center of politics and religion in their respective nations. Nobles were required to live in the courts, and were as a result under constant watch by the king. Thus, the most important function of the court was that it allowed the king to control the nobles and prevent coup. In addition, however, the courts served an important social function, as the king's social circle was the nobles, and the court allowed him to interact with them. Court life was incredibly lavish, with events including hunting, mock battles, balls, parties, dances, celebratory dinners, gambling, and general socialization. The middle class loved this, and frequently copied the behaviors of the nobility from the courts.
Puzzles/Analogies. The word analogy itself comes from the Greek words ana- "upon, according to" and logos, "word" or "speech". Hence, it is a type of linguistic skill of grouping words into categories. Analogy puzzles are used to identify the relationship between a pair of words. The format of analogies (in puzzles) is usually as follows: <br> 1. [A] is to [B] as [C] is to [D] <br> 2. [A]:[B] :: [C]:[D] Usually, only the first pair of words, [A] and [B], is provided (at times [C] is also provided), and the subject must apply the same relationship to find [D]. For example: <br>(1) carrot:vegetable :: banana: _____________ As we can see, in the first pair of words, the word carrot is grouped into the vegetable category; hence the word banana should be categorized into the fruit category. <br>(2) angry is to happy as fast is to ____________ The word angry is the opposite of the word happy; in the second pair of words, the opposite of fast is slow.
Introduction to Paleoanthropology/Origin. Beginning of the 20th Century. In 1891, Eugene Dubois discovers remains of hominid fossils (which he will call "Pithecanthropus") on the island of Java, in southeast Asia. The two main consequences of this discovery: Yet, in South Africa, 1924, Raymond Dart accidentally discovered the remains of child (at Taung) during exploitation of a quarry and publishes them in 1925 as a new species - "Australopithecus africanus" (which means "African southern ape"). Dart, a British-trained anatomist, was appointed in 1922 professor of anatomy at the University of the Witwatersrand in Johannesburg, South Africa. Through this discovery, Dart: Nevertheless, Raymond Dart's ideas were not accepted by the scientific community at the time because: It took almost 20 years before Dart's ideas could be accepted, due to notable new discoveries: 1950s - 1970s. During the first half of the 20th century, most discoveries essential for paleoanthropology and human evolution were done in South Africa. After World War II, research centered in East Africa. The couple Mary and Louis Leakey discovered a major site at Olduvai Gorge, in Tanzania: Another major discovery of paleoanthropological interest comes from the Omo Valley in Ethiopia: Also in 1967, Richard Leakey starts survey and excavation on the east shore of Lake Turkana (Kenya), at a location called Koobi Fora: In 1972, a French-American expedition led by Donald Johanson and Yves Coppens focuses on a new locality (Hadar region) in the Awash Valley (Ethiopia): From 1976 to 1979, Mary Leakey carries out research at site of Laetoli, in Tanzania: 1980 - The Present. South Africa. Four australopithecine foot bones dated at around 3.5 million years were found at Sterkfontein in 1994 by Ronald Clarke: Since then, eight more foot and leg bones have been found from the same individual, who has been nicknamed "Little Foot". Eastern Africa. Recent discovery of new "A. boisei" skull is: Recent research suggests that the some australopithecines were capable of a precision grip, like that of humans but unlike apes, which would have meant they were capable of making stone tools. The oldest known stone tools have been found in Ethiopia in sediments dated at between 2.5 million and 2.6 million years old. The makers are unknown, but may be either early "Homo" or "A. garhi" A main question is, how have these species come to exist in the geographical areas so far apart from one another? Chad. A partial jaw found in Chad (Central Africa) greatly extends the geographical range in which australopithecines are known to have lived. The specimen (nicknamed Abel) has been attributed to a new species - "Australopithecus bahrelghazali". In June 2002, publication of major discovery of earliest hominid known: "Sahelanthropus tchadensis" (nickname: "Toumai").
Introduction to Paleoanthropology/Bones. Bone Identification and Terminology. Skull. Cranium: The skull minus the lower jaw bone. Brow, Supraorbital Ridges: Bony protrusions above eye sockets. Endocranial Volume: The volume of a skull's brain cavity. Foramen Magnum: The hole in the skull through which the spinal cord passes. Sagittal Crest: A bony ridge that runs along the upper, outer center line of the skull to which chewing muscles attach. Subnasal Prognathism: Occurs when front of the face below the nose is pushed out. Temporalis Muscles: The muscles that close the jaw. Teeth. Canines, Molars: Teeth size can help define species. Dental Arcade: The rows of teeth in the upper and lower jaws. Diastema: Functional gaps between teeth. Using Bones to Define Humans. Bipedalism. Fossil pelvic and leg bones, body proportions, and footprints all read "bipeds." The fossil bones are not identical to modern humans, but were likely functionally equivalent and a marked departure from those of quadrupedal chimpanzees. Australopithecine fossils possess various components of the bipedal complex which can be compared to those of chimpanzees and humans: Thus human bodies were redesigned by natural selection for walking in an upright position for longer distances over uneven terrain. This is potentially in response to a changing African landscape with fewer trees and more open savannas. Brain Size. Bipedal locomotion became established in the earliest stages of the hominid lineage, about 7 million years ago, whereas brain expansion came later. Early hominids had brains slightly larger than those of apes, but fossil hominids with significantly increased cranial capacities did not appear until about 2 million years ago. Brain size remains near 450 cubic centimetres (cc) for robust australopithecines until almost 1.5 million years ago. At the same time, fossils assigned to Homo exceed 500 cc and reach almost 900 cc. What might account for this later and rapid expansion of hominid brain size? One explanation is called the "radiator theory": a new means for cooling this vital heat-generating organ, namely a new pattern of cerebral blood circulation, would be responsible for brain expansion in hominids. Gravitational forces on blood draining from the brain differ in quadrupedal animals versus bipedal animals: when humans stand bipedally, most blood drains into veins at the back of the neck, a network of small veins that form a complex system around the spinal column. The two different drainage patterns might reflect two systems of cooling brains in early hominids. Active brains and bodies generate a lot of metabolic heat. The brain is a hot organ, but must maintain a fairly rigid temperature range to keep it functioning properly and to prevent permanent damage. Savanna-dwelling hominids with this network of veins had a way to cool a bigger brain, allowing the "engine" to expand, contributing to hominid flexibility in moving into new habitats and in being active under a wide range of climatic conditions. Free Hands. Unlike other primates, hominids no longer use their hands in locomotion or bearing weight or swinging through the trees. The chimpanzee's hand and foot are similar in size and length, reflecting the hand's use for bearing weight in knuckle walking. The human hand is shorter than the foot, with straighter phalanges. Fossil hand bones two million to three million years old reveal this shift in specialization of the hand from locomotion to manipulation. Chimpanzee hands are a compromise. 
They must be relatively immobile in bearing weight during knuckle walking, but dexterous for using tools. Human hands are capable of power and precision grips but more importantly are uniquely suited for fine manipulation and coordination. Tool Use. Fossil hand bones show greater potential for evidence of tool use. Although no stone tools are recognizable in an archaeological context until 2.5 million years ago, we can infer nevertheless their existence for the earliest stage of human evolution. The tradition of making and using tools almost certainly goes back much earlier to a period of utilizing unmodified stones and tools mainly of organic, perishable materials (wood or leaves) that would not be preserved in the fossil record. How can we tell a hominid-made artifact from a stone generated by natural processes? First, the manufacturing process of hitting one stone with another to form a sharp cutting edge leaves a characteristic mark where the flake has been removed. Second, the raw material for the tools often comes from some distance away and indicates transport to the site by hominids. Modification of rocks into predetermined shapes was a technological breakthrough. Possession of such tools opened up new possibilities in foraging: for example, the ability to crack open long bones and get at the marrow, to dig, and to sharpen or shape wooden implements. Even before the fossil record of tools around 2.5 Myrs, australopithecine brains were larger than chimpanzee brains, suggesting increased motor skills and problem solving. All lines of evidence point to the importance of skilled making and using of tools in human evolution. Summary. In this chapter, we learned the following: 1. Humans clearly depart from apes in several significant areas of anatomy, which stem from adaptation: 2. For most of human evolution, cultural evolution played a fairly minor role. If we look back at the time of most australopithecines, it is obvious that culture had little or no influence on the lives of these creatures, who were constrained and directed by the same evolutionary pressures as the other organisms with which they shared their ecosystem. So, for most of the time during which hominids have existed, human evolution was no different from that of other organisms. 3. Nevertheless once our ancestors began to develop a dependence on culture for survival, then a new layer was added to human evolution. Sherwood Washburn suggested that the unique interplay of cultural change and biological change could account for why humans have become so different. According to him, as culture became more advantageous for the survival of our ancestors, natural selection favoured the genes responsible for such behaviour. These genes that improved our capacity for culture would have had an adaptive advantage. We can add that not only the genes but also anatomical changes made the transformations more advantageous. The ultimate result of the interplay between genes and culture was a significant acceleration of human evolution around 2.6 million to 2.5 million years ago.
High School Mathematics Extensions/Logic/Problem Set/Solutions. Logic Problem Set Exercises. 1. 2. 3. 4. (This solution is due to Tom Lam). Let (x+y)w+z = a NAND b, where a and b can either be one of x,y,w,z or another NAND operator. Since a NAND b = (ab)', we need ab = ((x+y)w+z)' = ((x+y)w)'z'. Therefore a = ((x+y)w)' and b = z', and both need further NAND operators. Let a = c NAND d, and let b = e NAND f. Then (cd)' = ((x+y)w)' gives cd = (x+y)w, and (ef)' = z' gives ef = z. Therefore d=w, e=f=z, c=x+y. Let c = g NAND h. Then (gh)' = x+y, so gh = (x+y)' = x'y'. Now g=x' and h=y', and we still need more NAND operators. Let g = i NAND j and let h = k NAND l. Then (ij)' = x' gives ij = x, and (kl)' = y' gives kl = y. Therefore i=j=x and k=l=y. Now substitute all the variables back; you should get: (x+y)w+z = (((x NAND x) NAND (y NAND y)) NAND w) NAND (z NAND z). Alternatively: Each of AND, OR and NOT can be expressed in terms of NAND only, and therefore any boolean expression can be written down entirely in terms of NAND. This property is called the universality of NAND. Remember x NAND y = (xy)'. Firstly x' = x NAND x, also xy = (x NAND y) NAND (x NAND y), and x+y = (x NAND x) NAND (y NAND y). Now (x+y), then (x+y)w, and finally (x+y)w+z can each be rewritten using these identities, and so an equivalent NAND-only expression is obtained.
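To sanity-check the final expression, the whole truth table can be enumerated by brute force. The following short C program is only an illustrative aid (it is not part of the original problem set): it compares the NAND-only form against (x+y)w+z for all sixteen input combinations and reports whether they agree.

#include <stdio.h>

/* NAND of two bits */
static int nand(int a, int b) { return !(a && b); }

int main(void)
{
    int x, y, w, z, ok = 1;
    for (x = 0; x <= 1; x++)
        for (y = 0; y <= 1; y++)
            for (w = 0; w <= 1; w++)
                for (z = 0; z <= 1; z++) {
                    int direct   = ((x || y) && w) || z;   /* (x+y)w + z */
                    int nandform = nand(nand(nand(nand(x, x), nand(y, y)), w),
                                        nand(z, z));
                    if (direct != nandform)
                        ok = 0;
                }
    printf(ok ? "expressions agree\n" : "mismatch!\n");
    return 0;
}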
Introduction to Philosophy/Deontology. Deontology is a set of moral theories which place themselves in opposition to consequentialism. While consequentialism determines right actions from good ends, deontology asserts that the end and the means by which it is arrived upon are intrinsically linked. A good end will come about as a result of good or right means. The most famous of deontologists is Immanuel Kant (1724 - 1804). His categorical imperative (divided into three formulations) determines a set of universal principles by which right action can be judged. The name "deontology" comes from the Greek "deon", which means "duty." Kant's goal in formulating deontology was to establish an ethical system that does not depend on anyone's subjective experience; rather, the good or evil of an action can be entirely determined by irrefutable logic. Thus, ethically correct behavior would be a duty whose truth no one could deny, just as no one can reasonably deny that two plus two makes four. Kant's categorical imperative tells us how logic can determine whether an action is right or wrong. It revolves around the idea of the maxim that cannot be made universal without defeating itself. A classic example is the statement, "I choose to lie about my car so that I will not be punished for arriving late at work." This may work for one individual, but if everyone adopted this principle (if lying were universalized), they would cancel out the concept of promising. In a world where lying was normal behavior, no one would believe anything they were told. This is the first formulation of the categorical imperative: any action whose maxim logically defeats itself when universalized cannot be a moral action. John Rawls is a deontologist as well. His book "A Theory of Justice" establishes that a system of wealth redistribution ought to be created such that it abides by a specific set of moral rules. Another way of looking at deontology is that it is opposed to teleological theories such as consequentialism. In this sense deontology is concerned with finding the right actions that one should perform to produce the 'good', while teleology is concerned with isolating the meaning of the 'good' and then stating that a right action is one which will produce said good. Thus, as well as formulating his categorical imperative, Kant also states that one is acting morally when one performs one's duty to fulfill the categorical imperative.
Organic Chemistry/Carboxylic acids. A carboxylic acid is characterized by the presence of the "carboxyl group" -COOH. The chemical reactivity of carboxylic acids is dominated by the very positive carbon, and the resonance stabilization that is possible should the group lose a proton. These two factors contribute both to acidity and to the group's dominant chemical reaction: nucleophilic substitution. =Preparation= 1) from alkenes R-CH=CHR + KMnO4 + OH- + Heat----> 2RCOOH 2) from ROH RCH2OH + OXIDIZING AGENT ----> RCOOH Aliphatic carboxylic acids are formed from primary alcohols or aldehydes by reflux with potassium dichromate (VI) acidified with sulphuric acid. 3) from toluene etc. toluene + KMnO4 ----> benzoic acid Alkyl benzenes (methyl benzene, ethyl benzene, etc) react with potassium manganate (VII) to form benzoic acid. All alkyl benzenes give the same product, because all but one alkyl carbon is lost. No acidification is needed. The reaction is refluxed and generates KOH. The benzoic acid is worked up by adding a proton source (such as HCl). 4) from methyl ketones RCOCH3 + NaOH + I-I ----> RCOO- + CHI3 5) from Grignard reagents RMgX + O=C=O ----> RCOOMgX RCOOMgX + HOH ----> RCOOH + MgX(OH) =Properties= Nomenclature. The systematic IUPAC nomenclature for carboxylic acids requires the longest carbon chain of the molecule to be identified and the -e of alkane name to be replaced with -oic acid. The traditional names of many carboxylic acids are still in common use. The systematic approach for naming dicarboxylic acids (alkanes with carboxylic acids on either end) is the same as for carboxylic acids, except that the suffix is -dioic acid. Common name Nomenclature of dicarboxylic acids is aided by the acronym OMSGAP (Om's Gap), where each letter stands for the first letter of the first seven names for each dicarboxylic acid, starting from the simplest. Acidity. Most carboxylic acids are weak acids. To quantify the acidities we need to know the pKa values: The pH above which the acids start showing mostly acidic behaviour: Ethanoic acid: 4.8 Phenol: 10.0 Ethanol: 15.9 Water: 15.7 Data from CRC Handbook of Chemistry & Physics, 64th edition, 1984 D-167-8 Except http://en.wikipedia.org/wiki/Trifluoroacetic_acid Clearly, the carboxylic acids are remarkably acidic for organic molecules. Somehow, the release of the H+ ion is favoured by the structure. Two arguments: The O-H bond is polarised by the removal of electrons to the carbonyl oxygen. The ion is stabilised by resonance: the carbonyl oxygen can accept the charge from the other oxygen. The acid strength of carboxylic acid are strongly modulated by the moiety attached to the carboxyl. Electron-donor moiety decrease the acid strength, whereas strong electron-withdrawing groups increase it. =Reactions= Acid Chloride Formation. Carboxylic acids are converted to acid chlorides by a range of reagents: SOCl2, PCl5 or PCl3 are the usual reagents. Other products are HCl & SO2, HCl & POCl3 and H3PO3 respectively. The conditions must be dry, as water will hydrolyse the acid chloride in a vigorous reaction. Hydrolysis forms the original carboxylic acid. CH3COOH + SOCl2 → CH3COCl + HCl + SO2 C6H5COOH + PCl5 → C6H5COCl + HCl + POCl3 3 CH3CH2COOH + PCl3 → 3 CH3CH2COCl + H3PO3 Esterification. Alcohols will react with acid chlorides or carboxylic acids to form esters. This reaction is catalyzed by acidic or basic conditions. See alcohol notes. 
C6H5COCl + CH3CH2OH → C6H5COOCH2CH3 + HCl With carboxylic acids, the condensation reaction is an unfavourable equilibrium, promoted by using non-aqueous solvent (if any) and a dehydrating agent such as sulfuric acid (non-nucleophilic), catalyzing the reaction). CH3COOH + CH3CH2CH2OH = CH3COOCH2CH2CH3 + H2O Reversing the reaction is simply a matter of refluxing the ester with plenty of aqueous acid. This hydrolysis produces the carboxylic acid and the alcohol. C6H5COOCH3 + H2O → C6H5COOH + CH3OH Alternatively, the reflux is done with aqueous alkali. The salt of the carboxylic acid is produced. This latter process is called 'saponification' because when fats are hydrolysed in this way, their salts are useful as soap. Anhydrides. See acid anhydride. Amides. Conceptually, an amide is formed by reacting an acid (an electrophile) with an amine compound (a nucleophile), releasing water. RCOOH + H2NR' → RCONHR' + H2O However, the acid-base reaction is much faster, which yields the non-electrophilic carboxylate and the non-nucleophilic ammonium, and no further reaction takes place. RCOOH + H2NR' → RCOO- + H3NR'+ To get around this, a variety of coupling reagents have been developed that first react with the acid or carboxylate to form an active acyl compound, which is basic enough to deprotonate an ammonium and electrophilic enough to react with the free base of the amine. A common coupling agent is dicyclohexylcarbodiimide, or DCC, which is very toxic. Acid Decarboxylation. On heating with sodalime (NaOH/CaO solid mix) carboxylic acids lose their –COOH group and produce a small alkane plus sodium carbonate: CH3CH2COOH + 2 NaOH →CH3CH3 + Na2CO3 + H2O Note how a carbon is lost from the main chain. The product of the reaction may be easier to identify than the original acid, helping us to find the structure. Ethanoic anhydride. Industrially, ethanoic anhydride is used as a less costly and reactive alternative to ethanoyl chloride. It forms esters and can be hydrolysed in very similar ways, but yields a second ethanoic acid molecule, not HCl The structure is formed from two ethanoic acid molecules… Polyester. Polyester can be made by reacting a diol (ethane-1,2-diol) with a dicarboxylic acids (benzene-1,4-dicarboxylic acid). n HO-CH2CH2-OH + n HOOC-C6H4-COOH → (-O-CH2CH2-O-OC-C6H4-CO-)n + n H2O Polyester makes reasonable fibres, it is quite inflexible so it does not crease easily; but for clothing it is usually combined with cotton for comfort. The plastic is not light-sensitive, so it is often used for net curtains. Film, bottles and other moulded products are made from polyester. Distinguishing carboxylic acids from phenols. Although carboxylic acids are acidic, they can be distinguished from phenol because: Only carboxylic acids will react with carbonates and hydrogencarbonates to form CO2 2 CH3COOH + Na2CO3 → 2 CH3COONa + H2O + CO2 C6H5COOH + NaHCO3 → C6H5COONa + H2O + CO2 Some phenols react with FeCl3 solution, giving a characteristic purple colour. Note: Click on the following icon to go back to the contents page.
Internet Technologies/VNC. Virtual Network Computing (VNC) is a remote desktop protocol used to remotely control another computer. VNC is used to transport the desktop environment of a graphical user interface from one computer to a viewer application on another computer on the network. There are clients and servers for many platforms including Linux, Microsoft Windows, Berkeley Software Distribution variants and MacOS X. In fact you would be hard pressed to not find a viewer available for any GUI operating system. The VNC protocol allows for complete platform independence. A VNC viewer on any operating system can connect to a VNC server on any other operating system. It is also possible for multiple clients to connect to a VNC server at the same time. Popular uses of the technology include remote tech support, and accessing your files on your work PC while at home or even on the road. There is even a Java viewer for VNC, so you can connect to a VNC server from your web browser without installing any software. The original VNC code is open source, as are many of the flavors of VNC available today. How it works. VNC is actually two parts, a client and a server. A server is the machine that is sharing its screen, and the client, or viewer, is the program that is doing the watching and perhaps interacting with the server. VNC is actually a VERY simple protocol and is based on one and only one graphic primitive, "Put a rectangle of pixel data at a given x,y position". What this means is VNC takes small rectangles of the screen (actually the framebuffer) and transports them from the server to the client. This in its simplest form would cause lots of bandwidth to be used, and hence various methods have been invented to make this process go faster. There are now many different 'encodings', or methods to determine the most efficient way to transfer these rectangles. The VNC protocol allows the client and server to negotiate which encoding it will use. The simplest and lowest common denominator is the "raw encoding" method, where the pixel data is sent in left-to-right scanline order; after initial setup, only the rectangles that have changed are transferred. How to copy and paste. How do I copy-and-paste from applications running on a server (visible inside a local VNC window) to applications running locally (outside the VNC window) and back? Some people suggest using xcutsel or autocutsel as a work-around: On the VNC server side (inside the VNC window) run "xcutsel". Leave it up and running. Others recommend autocutsel (or is it autcutsel?), pointing at the VNC FAQ. For more about the subtleties of cutting and pasting in the X Window System, see "X Selections, Cut Buffers, and Kill Rings." by Jamie Zawinski 2002 (especially helpful if you are writing X11 applications).
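To make the rectangle idea concrete, the sketch below shows roughly what a viewer does with one raw-encoded rectangle. This is a simplified illustration in C, not the actual RFB wire format or the code of any particular VNC implementation: the real protocol adds message headers, encoding negotiation and several pixel formats, but the core operation is just copying a block of pixels into the local copy of the framebuffer.

#include <stdint.h>
#include <string.h>

/* One raw-encoded update: a position, a size and width*height pixels
   sent in left-to-right, top-to-bottom scanline order. */
struct rect_update {
    uint16_t x, y;            /* top-left corner on the remote screen */
    uint16_t width, height;   /* size of the changed area             */
    const uint32_t *pixels;   /* raw pixel data for the rectangle     */
};

/* Copy the rectangle into the viewer's local framebuffer
   (fb_width is the width of the whole remote screen in pixels). */
void apply_update(uint32_t *framebuffer, int fb_width,
                  const struct rect_update *r)
{
    for (int row = 0; row < r->height; row++)
        memcpy(&framebuffer[(r->y + row) * fb_width + r->x],
               &r->pixels[row * r->width],
               r->width * sizeof(uint32_t));
}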
Numerical Methods/Equation Solving. An equation of the type f(x) = 0 is either algebraic or transcendental. Algebraic equations are built from polynomial expressions in the unknown, while transcendental equations involve functions such as exponentials, logarithms or trigonometric functions. While roots can be found directly for algebraic equations of fourth order or lower, and for a few special transcendental equations, in practice we need to solve equations of higher order and also arbitrary transcendental equations. As analytic solutions are often either too cumbersome or simply do not exist, we need to find an approximate method of solution. This is where numerical analysis comes into the picture. Background. Initial Approximation. The last point about the interval is one of the most useful properties numerical methods use to find the roots. All of them have in common the requirement that we need to make an initial guess for the root. Practically, this is easy to do graphically. Simply plot the equation and make a rough estimate of the solution. Analytically, we can usually choose any point in an interval where a change of sign takes place. However, this is subject to certain conditions that vary from method to method. Convergence. A numerical method to solve equations can be a long process. We would like to know whether the method will lead to a solution (close to the exact solution) or will lead us away from the solution. If the method leads to the solution, then we say that the method is convergent. Otherwise, the method is said to be divergent; i.e., in the case of linear and non-linear interpolation, convergence means that the error tends to 0. Rate of Convergence. Various methods converge to the root at different rates. That is, some methods are slow to converge and it takes a long time to arrive at the root, while other methods can lead us to the root faster. This is in general a compromise between ease of calculation and time. For a computer program, however, it is generally better to look at methods which converge quickly. The rate of convergence could be linear or of some higher order. The higher the order, the faster the method converges. If e_i is the magnitude of the error in the i-th iteration, ignoring sign, then the order is p if e_(i+1)/(e_i)^p is approximately constant. It is also important to note that the chosen method will converge only if e_(i+1) < e_i. Methods. Bisection Method. This is one of the simplest methods and is strongly based on the property of intervals. To find a root using this method, the first thing to do is to find an interval [a, b] such that f(a) and f(b) have opposite signs, i.e. f(a)f(b) < 0. Bisect this interval to get a point c = (a + b)/2. Choose whichever of the two halves [a, c] or [c, b] has endpoints at which f takes opposite signs. Use this as the new interval and proceed until you get the root within the desired accuracy. Example. Solve formula_26 correct up to 2 decimal places. Error Analysis. The maximum error after the i-th iteration using this process will be given as e_i = (b - a)/2^i. As the interval at each iteration is halved, we have e_(i+1) = e_i/2. Thus this method converges linearly. If we are interested in the number of iterations the "Bisection Method" needs to converge to a root within a certain tolerance, then we can use the formula for the maximum error. Example. How many iterations do you need to get the root if you start with "a" = 1 and "b" = 2 and the tolerance is 10^-4? The error (b - a)/2^i needs to be smaller than 10^-4.
Use the formula for the maximum error: Solve for "i" using log rules Hence 14 iterations will ensure an approximation to the root accurate to formula_45. Note: the error analysis only gives a bound approximation to the error; the actual error may be much smaller. False Position Method. The false position method (sometimes called the regula falsi method) is essentially same as the bisection method -- except that instead of bisecting the interval, we find where the chord joining the two points meets the X axis. The roots are calculated using the equation of the chord, i.e. putting formula_46 in The rate of convergence is still linear but faster than that of the bisection method. Both these methods will fail if "f" has a double root. Example. Consider "f"("x")="x"2-1. We already know the roots of this equation, so we can easily check how fast the regula falsi method converges. For our initial guess, we'll use the interval [0,2]. Since "f" is concave upwards and increasing, a quick sketch of the geometry shows that the chord will always intersect the "x"-axis to the left of the solution. This can be confirmed by a little algebra. We'll call our "n"th iteration of the interval ["a""n", 2] The chord intersects the "x"-axis when Rearranging and simplifying gives Since this is always less than the root, it is also "a""n"+1 The difference between "a""n" and the root is "e""n"="a""n"-1, but This is always smaller than "e""n" when "a""n" is positive. When "a""n" approaches 1, each extra iteration reduces the error by two-thirds, rather than one-half as the bisection method would. The order of convergence of this method is 2/3 and is linear. In this case, the lower end of the interval tends to the root, and the minimum error tends to zero, but the upper limit and maximum error remain fixed. This is not uncommon. Fixed Point Iteration (or Staircase method or x = g(x) method or Iterative method). If we can write "f"("x")=0 in the form "x"="g"("x"), then the point "x" would be a fixed point of the function "g" (that is, the input of "g" is also the output). Then an obvious sequence to consider is If we look at this on a graph we can see how this could converge to the intersection. Any function can be written in this form if we define "x"="g"("x"), though in some cases other rearrangements may prove more useful(must satisfy the condition |"g"("x")|<1). This method is useful for finding a positive root to the infinite series also. Fixed Point Theorem ("Statement") If "g" is continuous on["a,b"] and "a ≤ g(x) ≤ b" for all "x" in ["a,b"], then "g" has at least a fixed point in ["a,b"]. Further, suppose "g'(x)" is continuous on ("a,b") and that a positive constant c exists with |"g'(x)| ≤ c" <1, for all "x" in ("a,b"). Then there is a unique fixed point "α" of "g" in ["a,b"]. Also, the iterates "xn+1 = g(xn) n≥0" will converge to α for any choice of "x0" in ["a,b"]. Error analysis. We define the error at the "n"th step to be Then we have So, when |"g"′("x")|<l, this sequence converges to a root, and the error will be approximately proportional to "n". Because the relationship between "e""n"+1 and "e""n" is "linear", we say that this method "converges linearly", if it converges at all. When "g"("x")="f"("x")+"x" this means that if converges to a root, "x", of "f" then Note that this convergence will only happen for a certain range of "x". If the first estimate is outside that range then no solution will be found. 
Also note that although this is a necessary condition for convergence, it does not guarantee convergence. In the error analysis we neglected higher powers of "e""n", but we can only do this if "e""n" is small. If our initial error is large, the higher powers may prevent convergence, even when the condition is satisfied. If |"g"′("x")|<1 is true at the root, the iteration sequence will converge in some interval around the root, which may be smaller than the interval where |"g"′("x")|<1. If |"g"′("x")| isn't smaller than one at the root, the iteration will not converge to that root. Example. Lets consider formula_56, which we can see has a single root at "x"=1. There are several ways "f"("x")=0 can be written in the desired form, "x"="g"("x"). The simplest is 1):formula_57 In this case, formula_58, and the convergence condition is Since this is never true, this doesn't converge to the root. 2)An alternate rearrangement is This converges when Since this range does not include the root, this method won't converge either. 3)Another obvious rearrangement is In this case the convergence condition becomes Again, this region excludes the root. 4)Another possibility is obtained by dividing by "x"2+1 In this case the convergence condition becomes Consideration of this inequality shows it is satisfied if "x">1, so if we start with such an "x", this will converge to the root. Clearly, finding a method of this type which converges is not always straightforwards. Newton-Raphson. In numerical analysis, Newton's method (also known as the Newton–Raphson method or the Newton–Fourier method) is an efficient algorithm for finding approximations to the zeros (or roots) of a real-valued function. As such, it is an example of a root-finding algorithm. Any zero-finding method (Bisection Method, False Position Method, Newton-Raphson, etc.) can also be used to find a minimum or maximum of such a function, by finding a zero in the function's first derivative, see Newton's method as an optimization algorithm. Description of the method. The idea of the Newton-Raphson method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated. Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1 by referring to the diagram on the right. We know from the definition of the derivative at a given point that it is the slope of a tangent at that point. We can get better convergence if we know about the function's derivatives. Consider the tangent to the function: Near any point, the tangent at that point is approximately the same as "f"('x") itself, so we can use the tangent to approximate the function. The tangent through the point ("x""n", "f"("x""n")) is The next approximation, "x""n"+1, is where the tangent line intersects the axis, so where "y"=0. Rearranging, we find Error analysis. 
Again, we define the root to be "x", and the error at the "n"th step to be Then the error at the next step is where we've written "f" as a Taylor series round its root, "x". Rearranging this, and using "f"("x")=0, we get where we've neglected cubic and higher powers of the error, since they will be much smaller than the squared term, when the error itself is small. Notice that the error is squared at each step. This means that the number of correct decimal places "doubles" with each step, much faster than linear convergence. This sequence will converge if If "f"′ isn't zero at the root, then there will always be a range round the root where this method converges. If "f"′ is zero at the root, then on looking again at (1) we see that we get and the convergence becomes merely linear. Overall, this method works well, provided "f" does not have a minimum near its root, but it can only be used if the derivative is known. Example. Let's consider "f"("x")="x"2-"a". Here, we know the roots exactly, so we can see better just how well the method converges. We have This method is easily implemented, even with just pen and paper, and has been used to rapidly estimate square roots since long before Newton. The "n"th error is "e""n"="x""n"-√"a", so we have If "a"=0, this simplifies to "e""n"/2, as expected. If "a">0, "e""n"+1 will be positive, provided "e""n" is greater than -√"a", i.e provided "x""n" is positive. Thus, starting from any positive number, all the errors, except perhaps the first will be positive. The method converges when, so, assuming "e""n" is positive, it converges when which is always true. This method converges to the square root, "starting from any positive number", and it does so quadratically. Higher order methods. There are methods that converge even faster than Newton-Raphson. e.g
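As a concrete illustration of the two methods discussed at greatest length above, here is a small, self-contained C sketch. The function f and its derivative are example choices only (f(x) = x*x - 2, whose positive root is the square root of 2); they do not come from the text, and real code would substitute the equation actually being solved.

#include <stdio.h>
#include <math.h>

/* Example function and its derivative: f(x) = x*x - 2, root at sqrt(2). */
static double f(double x)      { return x * x - 2.0; }
static double fprime(double x) { return 2.0 * x; }

/* Bisection: assumes f(a) and f(b) have opposite signs. */
static double bisect(double a, double b, double tol)
{
    while ((b - a) / 2.0 > tol) {
        double c = (a + b) / 2.0;
        if (f(a) * f(c) <= 0.0)
            b = c;          /* the root lies in [a, c] */
        else
            a = c;          /* the root lies in [c, b] */
    }
    return (a + b) / 2.0;
}

/* Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n). */
static double newton(double x0, double tol, int max_iter)
{
    double x = x0;
    for (int i = 0; i < max_iter; i++) {
        double step = f(x) / fprime(x);
        x -= step;
        if (fabs(step) < tol)
            break;
    }
    return x;
}

int main(void)
{
    printf("bisection      : %.6f\n", bisect(1.0, 2.0, 1e-6));
    printf("Newton-Raphson : %.6f\n", newton(1.5, 1e-6, 50));
    return 0;
}

Note how few iterations Newton-Raphson needs compared with bisection, which matches the linear versus quadratic convergence discussed above.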
Korean/Lesson I3. "And" and "And?" "Or" or "Or"? One thing that varies in Korean is that there is a difference between an “and” for a verb and an “and” for a noun. In this lesson, we will learn these ands, ors, and buts. It just so happens that today 찬호 is introducing his friends to Joseph, so this is a perfect opportunity to use these forms! (Don't feel overwhelmed, there's only 3 ways to say each!) The above example has several new forms in it because of the differentiation between noun "and/or" & verb "and/or". We'll look at the examples and pick out new vocabulary, and then discuss new grammar separately.<br> 소개하다 means "to introduce." It's used really often when talking about friends and people you know, but it can also be used to refer to something like "introducing information." Following that, 기대하다 means "to await expectedly or excitedly." This can also be said 기대되다, which sometimes sounds more natural. Here we meet the noun connective particle 와 (“and”) and its alternative 과, used after consonants. More information can be learned about this in the following section, but its use is fairly straight forward. Nothing new here. <br> 오다 means “to come” (the form you see, "왔-" in this sentence is past tense.) but the connective verb suffix -고 (“and”) is connected to it (왔고). 에서 in this case means “from”. (So keep track! You now know it means “from” or “at”.) Finally, Joseph responds with 그래요 (“that’s right”). 가영 uses a phrase that is often heard in Korea: "고생 많다." This means "you have lots of struggles," but is used sort of like "must be difficult," a sort of compliment for the listener who might be going through hard times. The ending on this is "VS+군요" Which is a sort of exclamatory form. This will also be discussed in the next section. "한식" means "Korean food," a sort of contraction of "한국 음식," and "양식" is "Western food." Can you guess the contraction for this one? Joseph links the two with "N+(이)나" which is "or" for nouns. The verb form is "VS+거나" (discussed later, of course). "둘 다" means "both" Afterwards, Joseph uses the stand-alone word "하지만," meaning "however" or "but." The verb form of this is "VS+지만." It's simplicity doesn't merit any further discussion. «Lesson 2 | Lesson 3 | Lesson 4»
Statistics/Practise Problems/Summary Statistics. Questions: Answers:
Aros/Developer/Zune/Beginner. This Document is obsolete, please use Aros/Developer/Zune Prerequisites. Some knowledge of OO (object-oriented) programming is more than welcome when it comes to dealing with Zune or MUI for that matter. Having said that the principles and techniques used should be easy enough for even the complete beginner to grasp. Google may also help in the search for good introductory papers on this classic subject. Knowing AROS (or AmigaOS) APIs and concepts, such as the use of taglists - and of course BOOPSI, is essential to developing with Zune. Having a copy of the official "Amiga Reference Manuals" (aka RKM - Rom Kernel Manuals) may be very useful. Since Zune is a clone of 'MUI', all the documentation pertaining to it is also applicable to Zune. In particular, the latest available 'MUI developer kit' can be found here. In the LHA archive, 2 documents are of most interest: Additionally, this archive contains the MUI autodocs, which can also be used as reference documentations for all Zune classes. Finally, you will need a good "build" environment to develop under. AROS favours GCC, since the whole of the main system is generated with it. BOOPSI Primer. Concepts. There are four main concepts paramount to BOOPSI. They are CLASS, OBJECT, ATTRIBUTE and METHOD, and are the building blocks on which BOOPSI is implemented. Class. A class is defined by its name, its parent class and a dispatcher. The BOOPSI type for a class is .. Class * .. also known as IClass. Object. An object is an instance of class: each object has its specific data, but all objects of the same class share the same behavior. An object has several classes if we count the parents of its true class (the most derived one) up to the rootclass. BOOPSI type for an object is .. Object * It has no field you can directly access. Attribute. Attributes are related to the instance (private) data of each object. You can't access an objects internal data directly, so it provides a set of "attrbitues", you can set or get, to modify its internal state. An attribute is implemented as a Tag (ULONG value or'ed with TAG_USER). GetAttr() and SetAttrs() are used to modify an object's attributes. AROS also has macros SET() and GET(). To modify/read the state the object has its OM_GET or OM_SET "Method" invoked. Attributes can be one or more of the following: * Initialization-settable (I) : the attribute can be given as parameter at the object creation. * Settable (S) : You can set this attribute at any time (or at least, not only creation). * Gettable (G) : You can get the value of this attribute. Method. A BOOPSI method is a function which receives as parameters an object, a class and a message: * object: the object you act on * class: the considered class for this object. * message: contains a method ID which determines the function to call within a dispatcher, and is followed by its parameters. To send a message to an object, use the DoMethod() call. It will try to use the true class of the object to perform the chosen method first. If the class implements the method, it will perform the required operation. If the class DOESNT implement the method it will try the parent class, and so on up the chain until the message is handled or the rootclass is reached (in this case, the unknown message is silently discarded). The most minimal class could consist of 1 method - OM_NEW, although its use would be extremely limited. Examples. Let's see basic examples of this OOP framework: Getting an attribute. 
We'll query a MUI String object for its content: void f(Object *string) IPTR result; GetAttr(string, MUIA_String_Contents, &result); printf("String content is: %s\n", (STRPTR)result); or a pointer. An IPTR is always written in memory, so using a smaller type would lead to memory corruption! as any other attribute, is a ULONG (it's a Tag) Zune applications use more often the get() and XGET() macros instead: get(string, MUIA_String_Contents, &result); result = XGET(string, MUIA_String_Contents); Setting an attribute. Let's change the content of our string: SetAttrs(string, MUIA_String_Contents, (IPTR)"hello", TAG_DONE); * Pointers parameters must be casted to IPTR to avoid warnings. * After the object parameter, a taglist is passed to SetAttrs and thus must end with TAG_DONE. You'll find the set() macro useful: set(string, MUIA_String_Contents, (IPTR)"hello"); But it's only with SetAttrs() that you can set several attributes at once: SetAttrs(string, MUIA_Disabled, TRUE, MUIA_String_Contents, (IPTR)"hmmm...", TAG_DONE); Calling a method. Let's see the most called method in a Zune program, the event processing method called in your main loop: result = DoMethod(obj, MUIM_Application_NewInput, (IPTR)&sigs); * Parameters are not a taglist, and thus don't end with TAG_DONE. * You have to cast pointers to IPTR to avoid warnings. Hello world. Screenshot 'Hello World' First things first! I knew you would be all excited. Sources. Let's study our first real life example: // gcc hello.c -lmui #include <exec/types.h> #include <libraries/mui.h> #include <proto/exec.h> #include <proto/intuition.h> #include <proto/muimaster.h> #include <clib/alib_protos.h> int main(void) Object *wnd, *app, *but; // GUI creation app = ApplicationObject, SubWindow, wnd = WindowObject, MUIA_Window_Title, "Hello world!", WindowContents, VGroup, Child, TextObject, MUIA_Text_Contents, "\33cHello world!\nHow are you?", End, Child, but = SimpleButton("_Ok"), End, End, End; if (app != NULL) ULONG sigs = 0; // Click Close gadget or hit Escape to quit DoMethod(wnd, MUIM_Notify, MUIA_Window_CloseRequest, TRUE, (IPTR)app, 2, MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit); // Click the button to quit DoMethod(but, MUIM_Notify, MUIA_Pressed, FALSE, (IPTR)app, 2, MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit); // Open the window set(wnd, MUIA_Window_Open, TRUE); // Check that the window opened if (XGET(wnd, MUIA_Window_Open)) // Main loop while((LONG)DoMethod(app, MUIM_Application_NewInput, (IPTR)&sigs) != MUIV_Application_ReturnID_Quit) if (sigs) sigs = Wait(sigs | SIGBREAKF_CTRL_C); if (sigs & SIGBREAKF_CTRL_C) break; // Destroy our application and all its objects MUI_DisposeObject(app); return 0; Remarks. General. We don't manually open libraries, it's done automatically for us. GUI creation. We use a macro-based language to easily build our GUI. A Zune application has always 1 and only 1 Application object: app = ApplicationObject, An application can have 0, 1 or more Window objects. Most often a single one: SubWindow, wnd = WindowObject, Be nice, give a title to the window: MUIA_Window_Title, "Hello world!", A window must have 1 and only 1 child, usually a group. This one is vertical, that means that its children will be arranged vertically: WindowContents, VGroup, A group must have at least 1 child, here it's just a text: Child, TextObject, Zune accepts various escape codes (here, to center the text) and newlines: MUIA_Text_Contents, "\33cHello world!\nHow are you? 
:)", An End macro must match every xxxObject macro (here, TextObject): End, Let's add a second child to our group, a button! With a keyboard shortcut o indicated by an underscore: Child, but = SimpleButton("_Ok"), Finish the group: End, Finish the window: End, Finish the application: End; So, who still needs a GUI builder? :-) Error handling. If any of the object in the application tree can't be created, Zune destroys all the objects already created and application creation fails. If not, you have a fully working application: if (app != NULL) ... When you're done, just call MUI_DisposeObject() on your application object to destroy all the objects currently in the application, and free all the resources: MUI_DisposeObject(app); Notifications. Notifications are the simplest way to react on events. The principle? We want to be notified when a certain attribute of a certain object is set to a certain value: DoMethod(wnd, MUIM_Notify, MUIA_Window_CloseRequest, TRUE, Here we'll listen to the MUIA_Window_CloseRequest of our Window object and be notified whenever this attribute is set to TRUE. So what happens when a notification is triggered? A message is sent to an object, here we tell our Application to return MUIV_Application_ReturnID_Quit on the next event loop iteration: (IPTR)app, 2, MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit); As we can specify anything we want here, we have to tell the number of extra parameters we are supplying to MUIM_Notify: here, 2 parameters. For the button, we listen to its MUIA_Pressed attribute: it's set to FALSE whenever the button is being released (reacting when it's pressed is bad practice, you may want to release the mouse outside of the button to cancel your action - plus we want to see how it looks when it's pressed). The action is the same as the previous, send a message to the application: DoMethod(but, MUIM_Notify, MUIA_Pressed, FALSE, (IPTR)app, 2, MUIM_Application_ReturnID, MUIV_Application_ReturnID_Quit); Opening the window. Windows aren't open until you ask them to: set(wnd, MUIA_Window_Open, TRUE); If all goes well, your window should be displayed at this point. But it can fail! So don't forget to check by querying the attribute, which should be TRUE: if (XGET(wnd, MUIA_Window_Open)) Main loop. Let me introduce you my lil' friend, the ideal Zune event loop: ULONG sigs = 0; Don't forget to initialize the signals to 0 ... The test of the loop is the MUIM_Application_NewInput method: while((LONG) DoMethod(app, MUIM_Application_NewInput, (IPTR)&sigs) != MUIV_Application_ReturnID_Quit) It takes as input the signals of the events it has to process (result from Wait(), or 0), will modify this value to place the signals Zune is waiting for (for the next Wait()) and will return a value. This return value mechanism was historically the only way to react on events, but it was ugly and has been deprecated in favor of custom classes and object-oriented design. The body of the loop is quite empty, we only wait for signals and handle Ctrl-C to break out of the loop: if (sigs) sigs = Wait(sigs | SIGBREAKF_CTRL_C); if (sigs & SIGBREAKF_CTRL_C) break; Conclusion. This program gets you started with Zune, and allows you to toy with GUI design, but not more. Notification actions. Notification allows you to respond to events your application/gui or any other object might cause. Due to the attribute and method based nature of Zune, and a few special attributes, Most applications can be almost completely automated through the use of Notification(s). 
As seen in hello.c, you use MUIM_Notify to cause an object/class to react if a certain condition happens. If you want your application to react in a specific way to events, you can use one of these schemes: Zune Examples. See Aros/Developer/Zune/Examples
3,171
Geometry for Elementary School. __NOEDITSECTION__
14
Korean/Authors. ^ Korean ^ | Authors This page is meant to provide information about people who feel they have made a contribution to this book and wish to cooperate on its development. Please feel free to add yourself here if you match the description above. Authors. Hello, my name is Joe Chin. I am a 4th year Information Systems major at the University of California - Riverside. I have been taking Korean language classes for a little under a year in anticipation of my EAP program to Yonsei University in Seoul, Korea. I hope that anyone wanting to learn the Korean language will find this wikibook a valuable resource. If the other authors read this, please email me at mailjoechin (-at-) gmail (dot) com; I would like to discuss the book and perhaps some direction for it. Thanks. "For the man I met at Borders:"<br> - Integrated Korean audio can be found here: http://languagelab.bh.indiana.edu/online.html<br> - Internet-based learning: http://teenkorean.net (be sure to view it in Internet Explorer) I'm a Korean at Yonsei University, Seoul, Korea. I don't exactly major in linguistics or anything, but I speak Korean natively, so I hope that I can be of some help to all of you people out there willing to learn Korean! If you have any questions, suggestions, or questions with suggestions, feel free to visit my blog (http://beang.com) and write your opinion in the guestbook. Thanks. My name is Scott Stinson. I've been teaching English in Korea for the past 3 years and have been learning Korean for that time. While not yet fluent in Korean, I do hope that my contributions help make this Korean tutorial useful. My name is Michael Jun. I speak Korean natively and have long experience learning foreign languages such as English, Chinese and French. I hope to improve the Korean tutorial and take it to the next level. My name is Jihun Kang and I go by Jeremy. I am a Korean living in Europe. I hope to contribute to building this online textbook so that many people can learn Korean easily! I'm a Korean and I majored in Chinese Language & Literature at Konkuk University (Seoul Campus). I also want to contribute something for those who are willing to learn Korean. Much of Korean vocabulary and pronunciation comes from Chinese characters, so I can contribute something in this part.
561
Geometry for Elementary School/Introduction. Why geometry? Geometry is one of the most elegant fields in mathematics. It deals with visual shapes that we know from everyday life. Who should use this book? This book is intended for use by a parent (or a teacher or guardian) and a child. It is recommended that the parent have some familiarity with geometry, but this is not necessary. The parent can simply read the chapter before teaching the child and then they can learn it together. Book guidelines. The classic book about geometry is Euclid's Elements. This book helped teach geometry for more than two thousand years, so we feel that writing this book based on the Elements is a correct step. We will adapt parts of the book for children and modify the order of some topics in order to make the book clearer. The learning will be based on constructions and proofs. A construction is a method of creating a geometric object (such as a triangle) using a set of tools. In the case of this book, the tools we will be using are a compass and a ruler. A proof is a logical trail where we can prove one fact by starting with some given information and making a series of conclusions based on that information. Oftentimes it is more difficult to prove a result than to simply find the result. The constructions are useful for letting the child experience geometric ideas and get visual results. The proofs are a good way to understand geometry and are a good basis for future study of logic. Since the book is for children, we omit some of the proof details and use intuition instead of precise definition. On the other hand, we insist on correct and elegant proofs. Precise definitions and exact proofs can be found in regular geometry books and can be used to extend the material for some of the children. Notation. The notation that is used in the book is defined the first time it is used. However, in order to simplify its use, it is also summarised in the "notation" section of the "conventions" chapter at the end of the book. How to contribute to this book. This book uses British English as a primary language; however, there is a table provided at the end of this book that summarises all the American terms that readers may come across. Before using this book. Make sure that these resources are available before using this book:
501
Geometry for Elementary School/Constructing equilateral triangle. Introduction. In this chapter, we will show you how to draw an equilateral triangle. What does "equilateral" mean? It simply means that all three sides of the triangle are the same length. Any triangle whose vertices (points) are A, B and C is written like this: formula_1. And if it's equilateral, it will look like the one in the picture. Claim. The triangle formula_1 is an equilateral triangle. Problems with the proof. The construction above is simple and elegant. One can imagine how children, using their legs as a compass, might accidentally discover it. However, Euclid’s proof was wrong. In mathematical logic, we assume some postulates. We construct proofs by advancing step by step. A proof should be made only of postulates and claims that can be deduced from the postulates. Some useful claims are given names and called theorems in order to enable their use in future proofs. There are some steps in Euclid's proof that cannot be deduced from the postulates. For example, according to the postulates he used, the circles formula_11 and formula_5 do not have to intersect. Although the proof was wrong, the construction is not necessarily wrong. One can make the construction valid by extending the set of postulates. Indeed, in later years, different sets of postulates were proposed in order to make the proof valid. Using these sets, the construction that works so well using pencil and paper is also logically sound. This error of Euclid, the gifted mathematician, should serve as an excellent example of the difficulty of mathematical proof and also the difference between proof and our intuition.
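For reference, here is a sketch of the equality argument behind the claim above, assuming the standard construction in which two circles of radius AB are drawn, one centred at A and one centred at B, and C is one of their intersection points (the labels A, B and C are assumptions matching the usual presentation):
\[ AC = AB \quad \text{(both are radii of the circle centred at } A\text{)} \]
\[ BC = BA \quad \text{(both are radii of the circle centred at } B\text{)} \]
\[ \Rightarrow\ AB = BC = CA \]
so all three sides of the triangle have the same length, which is exactly what "equilateral" means.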
389
Geometry for Elementary School/The Side-Angle-Side congruence theorem. In this chapter, we will discuss another congruence theorem, this time the Side-Angle-Side theorem. The angle between the two sides is called the included angle. The Side-Angle-Side congruence theorem. Given two triangles formula_1 and formula_2 such that two pairs of their sides and the included angles between them are equal, hence: Then the triangles are congruent and their other angles and sides are equal too. Proof. We will use the method of superposition – we will move one triangle onto the other one and we will show that they coincide. We won’t use the construction we learnt to copy a line or a segment, but we will move the triangle as a whole.
163
Geometry for Elementary School/Copying a triangle. In this chapter, we will show how to copy a triangle formula_1 to another triangle formula_2. The construction is an excellent example of the reduction technique – solving a problem by using the solution to a previously solved problem. Claim. The triangles formula_1 and formula_2 are congruent.
77
Geometry for Elementary School/Why are the constructions not correct?. In the previous chapters, we introduced constructions and proved their validity. Therefore, these constructions should work flawlessly. In this chapter, we will check whether the constructions are indeed foolproof. Explanation. I must admit that I never could copy the segment accurately. Sometimes the segment I constructed was of length 10.5 cm, and sometimes I did even worse. A more talented person might get better results, but probably not exact ones. How come the construction didn't work, at least in my case? Our proof of the construction is correct. However, the construction is done in an ideal world. In this world, the lines and circles drawn are also ideal. They match the mathematical definition perfectly. The circle I draw doesn't match the mathematical definition. Actually, many would say that it doesn't match any definition of a circle. When I try to use the construction, I'm using the wrong building blocks. However, the constructions are not useless in our far from ideal world. If we use an approximation of a circle in the construction, we get an approximation of the segment copy. After all, even my copy is not too far from the original. Note. In the Euclidean geometry developed by the Greeks, the ruler is used only to draw lines. One cannot measure the length of segments using the ruler as we did in this test. Therefore our test should be viewed as a criticism of the use of Euclidean geometry in the real world and not as part of that geometry.
340
Geometry for Elementary School/Copying an angle. In this chapter, we will show how to copy an angle formula_1 to another angle formula_2. The construction is based on Book I, proposition 23. Claim. The angles formula_1 and formula_2 are equal. Proof. Note that any two points on the rays can be used to create a triangle.
86
Geometry for Elementary School/Some impossible constructions. In the previous chapters, we discussed several construction procedures. In this chapter, we will name some problems for which there is no construction using only a ruler and compass. The problems were introduced by the Greeks, and since then mathematicians have tried to find constructions for them. Only in 1882 was it proven that there is no construction for the problems. Note that the problems have no construction only when we restrict ourselves to constructions using a ruler and compass. The problems can be solved when allowing the use of other tools or operations, for example, if we use Origami. The mathematics involved in proving that the constructions are impossible is too advanced for this book. Therefore, we only name the problems and give references to the proofs of their impossibility in the further reading section. Impossible constructions. Squaring the circle. The problem is to find a construction procedure that, in a finite number of steps, makes a square with the same area as a given circle. Doubling the cube. To "double the cube" means to be given a cube of some side length "s" and volume "V", and to construct a new cube, larger than the first, with volume 2"V" and therefore side length ³√2"s". Trisecting the angle. The problem is to find a construction procedure that, in a finite number of steps, constructs an angle that is one-third of a given arbitrary angle. Further reading. Proving that the constructions are impossible involves mathematics that is not in the scope of this book. The interested reader can use these links to learn why the constructions are impossible. The Four Problems of Antiquity have no solution since any solution would involve constructing a number that is not a constructible number. The numbers that would have to be constructed in these problems are defined by cubic equations. It is recommended to read the references in this order:
438
How to Write a Program. This book assumes you have read one or more books on how to program, and you have some degree of familiarity with one or more computer languages. Now you are sitting in front of your computer and want to start writing a program... and after several tries you find yourself stuck, in a blind corner or feel you are just going about this wrong. Now is the time to read "this" book.
89
How to Write a Program/Before you start. Before you prepare to start writing a new program using your text editor (or IDE), remember this: Rule 1: Don't write that program! The world is full of bad code; we don't need your bad code! Therefore the first rule is: Don't! Look for programs that do, or almost do, what you want, and use or modify them. Remember what Eric S. Raymond wrote in The Cathedral and the Bazaar: So, did I immediately launch into a furious whirl of coding up a brand-new POP3 client to compete with the existing ones? Not on your life! I looked carefully at the POP utilities I had in hand, asking myself "Which one is closest to what I want?" Because: 2. Good programmers know what to write. Great ones know what to rewrite (and reuse). While I don't claim to be a great programmer, I try to imitate one. An important trait of the great ones is constructive laziness. They know that you get an A not for effort but for results, and that it's almost always easier to start from a good partial solution than from nothing at all. Some good sources for finding already-written software include: Normally, searching using a good search engine (by trying several keywords) and on Freshmeat would be enough. If you still want to be sure, you can try asking whether anyone can point you to such a program in a relevant channel on a popular chat network or in different kinds of Internet forums. If you were able to find a program that almost does what you want this way, you may wish to move on to the chapter How to Write a Program/Enhancing code. Rule 2: Think about what you want the program to do before you start. Writing the wrong program is the most expensive mistake you can make. So let us consider How to Write a Program/Requirements analysis.
411
How to Write a Program/Requirements analysis. This in itself is the topic of "many" large books and the career of many well-paid consultants. Google will guide you into that vast realm. And doubtless other wiki editors may expand hugely on the topic. But here I will merely state Rule 2: Analysis paralysis is the greatest danger your program will ever face. You may have heard that many programming projects fail or are delivered way over time and over budget. The dead truth is that most programs simply are never even started! They freeze up and die in a perpetual analysis phase. You certainly could do worse than generate the artifacts prescribed by the Rational Unified Process, but save that for when you really have something huge to do. Start by writing down, very simply, what you want the program to do. Don't get fancy now; half a page handwritten is the absolute maximum. Trust me. Let's try that now... " I want to drive the Povray ray tracer from ruby." Well, scratch that one out. It's a techno goal. As soon as you see specific technologies on your page, scribble them out and try again... " I want to draw cool pictures with mathematically precise placement of maybe thousands of objects." Ah. That is better. But too fuzzy. " Well, actually I was trying to draw a string in povray and I got frustrated with the scene description language and I wished for the power of my favourite language. But my favourite language doesn't do raytracing. I found that modelers like kpovraymodeler drove me nuts and weren't doing what I wanted." Oo, nice. This is what the customer was trying to do. Rule 3: When someone, including yourself, tells you to "Write a program to do...", make a few loose notes and then ignore them. Always, always, always get up, go over to their workplace and understand first what they are trying to do. I promise you, from hundreds of experiences, that if you just obey them and create the program they (and/or you yourself) told you to, you will write the "wrong" program! Write down a couple of ways in which you want to interact with it. These are called "use cases" and Google will give you a million pages on them. I will have finished the program before you have read half the important pages on the topic... :-)
531
Dental Case Files. This textbook contains a number of dental case files sorted by dental discipline. To make it possible to verify the information in a case file (e.g. for research purposes), it is important that the author includes his or her name at the end of the case file. Here is an overview of the authors so far.
70
Dental Case Files/Authors. This textbook on dental case files was started by Daniel S. Joffe, DDS. Below is a list of authors in alphabetical order:
43
Serial Programming/MAX232 Driver Receiver. Applicability. This module is primary of interest for people building their own electronics with an RS-232 interface. Off-the-shelf computers with RS-232 interfaces already contain the necessary electronics, and there is no need to add the circuitry as described here. Introduction. Logic Signal Voltage. Serial RS-232 (V.24) communication works with voltages (between -15 V ... -3 V are used to transmit a binary '1' and +3 V ... +15 V to transmit a binary '0') which are not compatible with today's computer logic voltages. On the other hand, classic TTL computer logic operates between 0 V ... +5 V (roughly 0 V ... +0.8 V referred to as "low" for binary '0', +2 V ... +5 V for "high" binary '1' ). Modern low-power logic operates in the range of 0 V ... +3.3 V or even lower. So, the maximum RS-232 signal levels are far too high for today's computer logic electronics, and the negative RS-232 voltage can't be generated at all by the computer logic. Therefore, to receive serial data from an RS-232 interface the voltage has to be reduced, and the "0" and "1" voltage levels inverted. In the other direction (sending data from some logic over RS-232) the low logic voltage has to be "bumped up", and a negative voltage has to be generated, too. RS-232 TTL Logic -15 V ... -3 V <-> +2 V ... +5 V <-> 1 (idle state) +3 V ... +15 V <-> 0 V ... +0.8 V <-> 0 (start bit) All this can be done with conventional analog electronics, e.g. a particular power supply and a couple of or the once popular MC1488 (transmitter) and MC1489 (receiver) ICs. However, since more than a decade it has become standard in amateur electronics to do the necessary signal level conversion with an integrated circuit (IC) from the MAX232 family (typically a MAX232A or some clone). In fact, it is hard to find some RS-232 circuitry in amateur electronics without a MAX232A or some clone. We discuss the signal bits in more detail later in this book. MAX232 and MAX232A. The MAX232 from Maxim was the first IC which in one package contains the necessary drivers (two) and receivers (also two), to adapt the RS-232 signal voltage levels to TTL logic. It became popular, because it just needs one voltage (+5 V) and generates the necessary RS-232 voltage levels (approx. -10 V and +10 V) internally. This greatly simplified the design of circuitry. Circuitry designers no longer need to design and build a power supply with three voltages (e.g. -12 V, +5 V, and +12 V), but could just provide one +5 V power supply, e.g. with the help of a simple 78x05 voltage regulator. The MAX232 has a successor, the MAX232A. The ICs are almost identical, however, the MAX232A is much more often used (and easier to get) than the original MAX232, and the MAX232A only needs external capacitors 1/10th the capacity of what the original MAX232 needs. It should be noted that the MAX232(A) is just a driver/receiver. It does not generate the necessary RS-232 sequence of marks and spaces with the right timing, it does not decode the RS-232 signal, it does not provide a serial/parallel conversion. All it does is to convert signal voltage levels. Generating serial data with the right timing and decoding serial data has to be done by additional circuitry, e.g. by a or one of these small micro controllers (e.g. Atmel AVR, Microchip PIC) getting more and more popular. The MAX232 and MAX232A were once rather expensive ICs, but today they are cheap. It has also helped that many companies now produce clones (ie. Sipex). 
These clones sometimes need different external circuitry, e.g. the capacities of the external capacitors vary. It is recommended to check the data sheet of the particular manufacturer of an IC instead of relying on Maxim's original data sheet. The original manufacturer (and now some clone manufacturers, too) offers a large series of similar ICs, with different numbers of receivers and drivers, voltages, built-in or external capacitors, etc. E.g. The MAX232 and MAX232A need external capacitors for the internal voltage pump, while the MAX233 has these capacitors built-in. The MAX233 is also between three and ten times more expensive in electronic shops than the MAX232A because of its internal capacitors. It is also more difficult to get the MAX233 than the garden variety MAX232A. A similar IC, the MAX3232 is nowadays available for low-power 3 V logic. V+(2) is also connected to V via a capacitor (C3). V-(6) is connected to GND via a capacitor (C4). And GND(15) and V(16) are also connected by a capacitor (C5), as close as possible to the pins. A Typical Application. The MAX232(A) has two receivers (converts from RS-232 to TTL voltage levels) and two drivers (converts from TTL logic to RS-232 voltage levels). This means only two of the RS-232 signals can be converted in each direction. The old MC1488/1489 combo provided four drivers and receivers. Typically a pair of a driver/receiver of the MAX232 is used for and the second one for There are not enough drivers/receivers in the MAX232 to also connect the DTR, DSR, and DCD signals. Usually these signals can be omitted when e.g. communicating with a PC's serial interface. If the DTE really requires these signals either a second MAX232 is needed, or some other IC from the MAX232 family can be used (if it can be found in consumer electronic shops at all). An alternative for DTR/DSR is also given below. Maxim's data sheet explains the MAX232 family in great detail, including the pin configuration and how to connect such an IC to external circuitry. This information can be used as-is in own design to get a working RS-232 interface. Maxim's data just misses one critical piece of information: How exactly to connect the RS-232 signals to the IC. So here is one possible example: In addition one can directly wire DTR (DE9 pin 4) to DSR (DE9 pin 6) without going through any circuitry. This gives automatic (brain dead) DSR acknowledgment of an incoming DTR signal. Sometimes pin 6 of the MAX232 is hard wired to DCD (DE9 pin 1). This is not recommended. Pin 6 is the raw output of the voltage pump and inverter for the -10 V voltage. Drawing currents from the pin leads to a rapid breakdown of the voltage, and as a consequence to a breakdown of the output voltage of the two RS-232 drivers. It is better to use software which doesn't care about DCD, but does hardware-handshaking via CTS/RTS only. The circuitry is completed by connecting five capacitors to the IC as it follows. The MAX232 needs 1.0 µF capacitors, the MAX232A needs 0.1 µF capacitors. MAX232 clones show similar differences. It is recommended to consult the corresponding data sheet. At least 16 V capacitor types should be used. If electrolytic or tantalic capacitors are used, the polarity has to be observed. The first pin as listed in the following table is always where the plus pole of the capacitor should be connected to. The 5 V power supply is connected to Alternatives. Data Cables. With the rise of mobile phones so called "data cables" for these phones have also become popular. 
These are cables to connect the mobile phone to a serial interface of a computer. The interesting thing is that modern mobile phones work with 3.3 V logic, and older phones with 5 V logic on their data buses. So these data cables must and do convert the phone logic voltage levels to and from RS232 voltage levels. No-name data cables have become rather cheap (as opposite to original phone-brand data cables). The cheap cables with their voltage converters can be used as an alternative to home-made MAX232-based circuitry. The advantage is that the cables occupy much less space (the converter is usually inside the RS232 plug). Such a cable also saves the effort to solder a circuitry board. Another advantage, which can also be a disadvantage of such a data cable is that they usually take their power from the RS232 connector. This saves an external power supply, but can also cause problems, because the RS232 interface is not designed to power some logic and the DTE might not provide enough power. Another disadvantage is that many of these cables just support RX and TX (one receiver, one driver), and not two drivers/receivers as the MAX232. So there is no hardware handshake possible. Finally, when using such a cable it should be made sure that they convert to the desired voltage (3.3 V or 5 V). (Ab)using a USB <-> RS-232 Converter. USB to Serial interface cables often have two components: a USB transceiver that outputs serial data; and a voltage shifter to produce standards-compliant RS-232 voltages. It is often possible to throw away (ignore, desolder, cut-out) the USB part of these cables, connect an external 5 V power source (or abuse the RS-232 interface) to replace the power coming from the USB bus and to just use the RS-232 level-shifter. All this is probably as much work as using a MAX232A, although you get one RS-232 connector for free. If you consider a USB cable, it is also worthwhile to consider using USB directly, instead of RS-232. Many USB transceiver chips can be integrated directly into circuits, eliminating the need for voltage-shifting components. Parts such as the FTDI FT232BM even have an input allowing designers to select 5 V or 3.3 V output levels. Most of these USB transceiver chips are available as surface-mount components only. But some vendors offer DIP-sized preassembled modules, often at competitive prices, and often with free or cheap drivers or driver development environments. See for more on USB hardware, interfacing with USB devices and programming USB devices. MAX232N. A Texas Instruments MAX232 (not A) second-source. The N indicates the package (plastic dip), and not any special electric characteristics. This is a non-A MAX232, therefore it needs at least 1µF capacitors. It can sometimes be found rather cheap. TI also offers MAX3232s and a number of other RS-232 drivers/receivers, like MC148x. Linear Technology LT1181A. The LT1181A from Linear Technology is very similar to the MAX232A. It has the exact same pin-layout, also uses 0.1µF capacitors, and can in general replace a MAX232A. However, for the hobbyist it is typically a little bit more difficult to get one, and they tend to be slightly more expensive than the original Maxim MAX232A. Intersil HIN202. The Intersil HIN202 is yet another IC very similar to the MAX232A. It also has the same pin-layout (DIL package), uses 0.1 µF capacitors and can replace a MAX232A. 
The HIN202 is especially interesting when more I/O lines are needed (four pairs), since the manufacturer specifies that two HIN202's can share a single V+ and a single V- capacitor. So the resulting circuit saves two capacitors. MC1488 and MC1489. The MC1488/MC1489 ICs have already been mentioned. However, they are no real alternative to a MAX232 these days. A combo of these ICs has twice as many drivers/receivers, but the MC1488 driver requires a +12 V, -12 V power supply, and the MC1489 receiver a +5 V power supply. That makes three power supplies instead of one for the MAX232. Unless the required ±12 V supply lines are already available in a circuit, it is recommended to use either two MAX232s, or a single MAX238.
3,156
Geometry for Elementary School/Points. A point is a dot that is so small that its height and width are actually zero! This may seem too small. So small that no such thing could ever really exist. But it does fit with our intuition about the world. Even though everything in the physical world around us is made of things far larger than a point, it is still very useful to talk about the centres of atoms, or electrons. A point can be thought of as the limit of dots whose size is decreasing. A point is so small that even if we divide the size of these dots by 100, 1,000 or 1,000,000 it would still be much larger than a point. A point is considered to be "infinitely small". In order to get to the size of a point we would have to keep dividing the dot's size by two – forever. Don't try it at home. A point has no length, width, or depth. In fact, a point has no size at all. A point seems to be too small to be useful. Luckily, as we will see when discussing lines, we have plenty of them. It may be best to think of a point as a location, as in a location where two lines cross. Why define a point as an infinitely small dot? For one thing, it has a very precise location, not just the centre of a rough dot, but the point itself. Another reason is that if the drawing is made much bigger or smaller the point stays the same size. A point which is an infinitely small dot would be too small to see, so we must use a big old visible normal dot, or the place where two lines cross, to represent it and its approximate location on paper. When we name a point, we always use an uppercase letter. Often we will use formula_1 for "point" if we can, and if we have more than one point, we will work our way through the alphabet and use formula_2, formula_3, and so on. However, nowadays many people will start with any letter they like, although formula_1 still remains the best way. If some points are on the same line, we call them 'collinear'. If they are on the same plane, they are 'coplanar'. Two points are always collinear. But a point can be collinear with several points. Any two or three points are always coplanar. Of course this is tautological, since the definition of a 'line' is 'two connected points', and the definition of a 'plane' is 'the surface specified by three points'. Exercises. <quiz display=simple> - Length + Location - Volume - Area - On the same surface as each other. - On the same circle as each other. + On the same line as each other. + On the same flat surface as each other. - On the surface of the same cube. - On the surface of the same cone as each other. </quiz>
659
Geometry for Elementary School/Lines. Lines. A line is as wide as a point, infinitely thin, having an infinite number of points (in a straight row), extending forever in both directions. Remember that this is impossible to construct in real life, so usually we would simply draw a line (with thickness!) with an arrow on both ends. Any two distinct lines can intersect at only a single point. Lines that are on the same plane are 'coplanar'. Line segments. A line segment, or segment, is a part of a line which has two endpoints. The endpoints give the line segment a fixed, or finite, length. Line segments AB and CD can be written as formula_1 and formula_2. Rays. A ray is a line segment that has only one endpoint. A ray is infinite in one direction. That means that it goes on forever in one direction. As rays are impossible to construct in real life, usually we will just draw a line with an arrowhead on one end. They can be expressed as formula_3. Intersecting lines. Two lines intersect when they cross each other. They form vertically opposite angles, which we will learn about later. The point where the lines intersect is called the point of intersection. If the angles produced are all right angles, the lines are called perpendicular lines. If two lines never intersect, they are called parallel lines. Parallel lines will be discussed in detail later. Usually, if two lines are not parallel, they intersect each other. Intersecting lines can be defined as two lines that cross each other at only one point. This point is called the point of intersection. Axiom: there is only a single straight line between two points.
394
Korean/Lesson I10. «Lesson 9 | Lesson 10 | Lesson 11»
26
Diplomatic History/World War I. World War I marks a dramatic watershed in the history of Europe and the world. Although it started as a minor skirmish in the Balkans, it soon escalated to the point that most of the world's major powers were involved in a war more destructive than any the world had ever known. The social and diplomatic consequences were enormous and felt across the world. Empires fell, political systems were severely tested, and many of the old mindsets were destroyed. Immediate Origins. The reasons for the outbreak of war in August 1914 are diverse and varied. However, the immediate reasons for the outbreak are relatively simple. Gavrilo Princip, a member of a group working for Bosnian independence from Austria-Hungary, shot the heir to the throne of that state, Archduke Franz Ferdinand, and his wife. At the same time, many members of the Austro-Hungarian government were looking for an excuse to attack Serbia, and because of connections between the cause of Princip's Slavic nationalist group and the supposed aims of the Serbian government, they felt this was the perfect excuse. They presented a list of demands to the Serbian government, which it was inclined to accept except for one demand which was expressly designed to be rejected. As a result, Austria-Hungary declared war on Serbia. The complicating factor in this situation was the interest of Russia in the region. Due to feelings of community known as pan-Slavism, which unified the peoples of Russia and Serbia, Russia felt that it could not allow an attack on Serbia to go unchecked. Although Russia made this clear, Austria-Hungary had assurances from Germany that it would be supported in the case of a Russian attack, so the leadership felt confident that they could still defeat Serbia in time to handle a Russian attack. However, because of Russia's defence treaty with France, Germany was more concerned about France than Russia. As a result, Germany initiated a plan to defeat France before turning to Russia, a plan that involved an attack through Belgium. In the 19th century Britain had made assurances to Belgium that it would protect Belgium's neutrality, so after the German invasion Britain also declared war on Germany. Thus, through a series of alliances, all of Europe's major powers were brought into conflict, ostensibly over the boundaries of Serbia.
520
PHP Programming/The for Loop. The for loop. The for loop is one of the basic looping structures in most modern programming languages. Like the ', for loops execute a given code block until a certain condition is met. Syntax. The basic syntax of the for loop in PHP is similar to the C syntax: for ([initialization]; [condition]; [step]) "Initialization" happens the first time the loop is run. It is used to initialize variables or perform other actions that are to be performed before the first execution of the body of the loop. The "condition" is evaluated before each execution of the body of the loop; if the condition is true, the body of the loop will be executed, if it is false, the loop is exited and program execution resumes at the first line after the body of the loop. "Step" specifies an action that is to be performed after each execution of the loop body. The loop can also be formatted without using , according to personal preference: for ($i = 0; $i < 5; $i++) { echo "$i<br />"; Explanation. Within the for loop, it is indicated that $i starts as 0. When the loop runs for the first time, it prints the initial value of $i, which in the case of the example is 0. For every loop, the variable $i is increased by one (denoted by the $i++ incrementing step). When $i reaches 5 it is no longer less than 5 and therefore the loop stops. Do note that the initialisation, condition and step for the for-loop can be left empty. In this case, the loop will continue indefinitely and subsequently a break execution can be utilised to stop the loop. NOTE: In contrast to other languages like C, C#, C++, or Java, the variable used in the for loop may have been initialised first in the line with the for statement, but it continues to exist after the loop has finished. Using for loops to traverse arrays. In the section on while loops, the sort() example uses a while loop to print out the contents of the array. Generally programmers use for loops for this kind of job. Example. NOTE: Use of indices like below is highly discouraged. Use the key-value for-loop construct. $menu = array("Toast and jam", "Bacon and eggs", "Homefries", "Skillet", "Milk and cereal"); // Note to self: get breakfast after writing this article $count = count($menu); for ($i = 0; $i < $count; $i++) { echo ($i + 1 . ". " . $menu[$i] . "<br />"); Again, this can be formatted without concatenation, if you prefer: for ($i = 0; $i < $count; $i++) { $j = $i + 1; echo "$j. {$menu[$i]}<br />"; Explanation. $count = count($menu); We define the count before the for loop for more efficient processing. This is because each time the for loop is run (while $i < $count) it evaluates both sides of the equation and executes any functions. If we put $i < count($menu), this would evaluate count($menu) each time the process is executed, which is costly when dealing with large arrays. for ($i = 0; $i < $count; $i++) This line sets up the loop. It initializes the counter, $i, to 0 at the start, adds one every time the loop goes through, and checks that $i is less than the size of the array. This can also be done using a second initialization. for ($i = 0, $count = count($menu); $i < $count; $i++) { echo ($i + 1 . ". " . $menu[$i] . "<br />"); The echo statement is pretty self-explanatory, except perhaps the bit at the start, where we echo $i + 1. We do that because, as you may recall, arrays start at 0 and end at n - 1 (where n is their length), so to get a numbered list starting at one, we have to add one to the counter each time we print it. 
Of course, as I mentioned before, both pieces of code produce the following output: Believe it or not, there's actually a way to traverse arrays that requires even "less" typing. (And isn't that the goal?) Check out the foreach loop for another way of doing what we did here.
1,077
Special Relativity/Aether. Introduction. Many students confuse Relativity Theory with a theory about the propagation of light. According to modern Relativity Theory the constancy of the speed of light is a consequence of the geometry of spacetime rather than something specifically due to the properties of photons; but the statement "the speed of light is constant" often distracts the student into a consideration of light propagation. This confusion is amplified by the importance assigned to interferometry experiments, such as the Michelson-Morley experiment, in most textbooks on Relativity Theory. The history of theories of the propagation of light is an interesting topic in physics and was indeed important in the early days of Relativity Theory. In the seventeenth century two competing theories of light propagation were developed. Christiaan Huygens published a wave theory of light which was based on Huygen's principle whereby every point in a wavelike disturbance can give rise to further disturbances that spread out spherically. In contrast Newton considered that the propagation of light was due to the passage of small particles or "corpuscles" from the source to the illuminated object. His theory is known as the corpuscular theory of light. Newton's theory was widely accepted until the nineteenth century. In the early nineteenth century Thomas Young performed his Young's slits experiment and the interference pattern that occurred was explained in terms of diffraction due to the wave nature of light. The wave theory was accepted generally until the twentieth century when quantum theory confirmed that light had a corpuscular nature and that Huygen's principle could not be applied. The idea of light as a disturbance of some medium, or aether, that permeates the universe was problematical from its inception (US spelling: "ether"). The first problem that arose was that the speed of light did not change with the velocity of the observer. If light were indeed a disturbance of some stationary medium then as the earth moves through the medium towards a light source the speed of light should appear to increase. It was found however that the speed of light did not change as expected. Each experiment on the velocity of light required corrections to existing theory and led to a variety of subsidiary theories such as the "aether drag hypothesis". Ultimately it was experiments that were designed to investigate the properties of the aether that provided the first experimental evidence for Relativity Theory. The aether drag hypothesis. The aether drag hypothesis was an early attempt to explain the way experiments such as Arago's experiment showed that the speed of light is constant. The aether drag hypothesis is now considered to be incorrect. According to the aether drag hypothesis light propagates in a special medium, the aether, that remains attached to things as they move. If this is the case then, no matter how fast the earth moves around the sun or rotates on its axis, light on the surface of the earth would travel at a constant velocity. The primary reason the aether drag hypothesis is considered invalid is because of the occurrence of stellar aberration. In stellar aberration the position of a star when viewed with a telescope swings each side of a central position by about 20.5 seconds of arc every six months. This amount of swing is the amount expected when considering the speed of earth's travel in its orbit. 
In 1871, George Biddell Airy demonstrated that stellar aberration occurs even when a telescope is filled with water. It seems that if the aether drag hypothesis were true then stellar aberration would not occur because the light would be travelling in the aether which would be moving along with the telescope. If you visualize a bucket on a train about to enter a tunnel and a drop of water drips from the tunnel entrance into the bucket at the very centre, the drop will not hit the centre at the bottom of the bucket. The bucket is the tube of a telescope, the drop is a photon and the train is the earth. If aether is dragged then the droplet would be travelling with the train when it is dropped and would hit the centre of bucket at the bottom. The amount of stellar aberration, "α" is given by: So: The speed at which the earth goes round the sun, v = 30 km/s, and the speed of light is c = 300,000,000 m/s which gives "α" = 20.5 seconds of arc every six months. This amount of aberration is observed and this contradicts the aether drag hypothesis. In 1818, Augustin Jean Fresnel introduced a modification to the aether drag hypothesis that only applies to the interface between media. This was accepted during much of the nineteenth century but has now been replaced by special theory of relativity (see below). The aether drag hypothesis is historically important because it was one of the reasons why Newton's corpuscular theory of light was replaced by the wave theory and it is used in early explanations of light propagation without relativity theory. It originated as a result of early attempts to measure the speed of light. In 1810, François Arago realised that variations in the refractive index of a substance predicted by the corpuscular theory would provide a useful method for measuring the velocity of light. These predictions arose because the refractive index of a substance such as glass depends on the ratio of the velocities of light in air and in the glass. Arago attempted to measure the extent to which corpuscles of light would be refracted by a glass prism at the front of a telescope. He expected that there would be a range of different angles of refraction due to the variety of different velocities of the stars and the motion of the earth at different times of the day and year. Contrary to this expectation he found that there was no difference in refraction between stars, between times of day or between seasons. All Arago observed was ordinary stellar aberration. In 1818 Fresnel examined Arago's results using a wave theory of light. He realised that even if light were transmitted as waves the refractive index of the glass-air interface should have varied as the glass moved through the aether to strike the incoming waves at different velocities when the earth rotated and the seasons changed. Fresnel proposed that the glass prism would carry some of the aether along with it so that "...the aether is in excess inside the prism". He realised that the velocity of propagation of waves depends on the density of the medium so proposed that the velocity of light in the prism would need to be adjusted by an amount of 'drag'. The velocity of light formula_3 in the glass without any adjustment is given by: The drag adjustment formula_5 is given by: Where formula_7 is the aether density in the environment, formula_8 is the aether density in the glass and formula_9 is the velocity of the prism with respect to the aether. 
The factor formula_10 can be written as formula_11 because the refractive index, n, would be dependent on the density of the aether. This is known as the Fresnel drag coefficient. The velocity of light in the glass is then given by: This correction was successful in explaining the null result of Arago's experiment. It introduces the concept of a largely stationary aether that is dragged by substances such as glass but not by air. Its success favoured the wave theory of light over the previous corpuscular theory. The Fresnel drag coefficient was confirmed by an interferometer experiment performed by Fizeau. Water was passed at high speed along two glass tubes that formed the optical paths of the interferometer and it was found that the fringe shifts were as predicted by the drag coefficient. The special theory of relativity predicts the result of the Fizeau experiment from the velocity addition theorem without any need for an aether. If formula_13 is the velocity of light relative to the Fizeau apparatus and formula_14 is the velocity of light relative to the water and formula_9 is the velocity of the water: which, if v/c is small can be expanded using the binomial expansion to become: This is identical to Fresnel's equation. It may appear as if Fresnel's analysis can be substituted for the relativistic approach, however, more recent work has shown that Fresnel's assumptions should lead to different amounts of aether drag for different frequencies of light and violate Snell's law (see Ferraro and Sforza (2005)). The aether drag hypothesis was one of the arguments used in an attempt to explain the Michelson-Morley experiment before the widespread acceptance of the special theory of relativity. The Fizeau experiment is consistent with relativity and approximately consistent with each individual body, such as prisms, lenses etc. dragging its own aether with it. This contradicts some modified versions of the aether drag hypothesis that argue that aether drag may happen on a global (or larger) scale and stellar aberration is merely transferred into the entrained "bubble" around the earth which then faithfully carries the modified angle of incidence directly to the observer. References The Michelson-Morley experiment. The Michelson-Morley experiment, one of the most important and famous experiments in the history of physics, was performed in 1887 by Albert Michelson and Edward Morley at what is now Case Western Reserve University, and is considered to be the first strong evidence against the theory of a luminiferous aether. Physics theories of the late 19th century postulated that, just as water waves must have a medium to move across (water), and audible sound waves require a medium to move through (air), so also light waves require a medium, the "luminiferous aether". The speed of light being so great, designing an experiment to detect the presence and properties of this aether took considerable thought. Measuring aether. A depiction of the concept of the “aether wind”. Each year, the Earth travels a tremendous distance in its orbit around the sun, at a speed of around 30 km/second, over 100,000 km per hour. It was reasoned that the Earth would at all times be moving through the aether and producing a detectable "aether wind". At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. 
By analysing the effective wind at various different times, it should be possible to separate out components due to motion of the Earth relative to the Solar System from any due to the overall motion of that system. The effect of the aether wind on light waves would be like the effect of wind on sound waves. Sound waves travel at a constant speed relative to the medium that they are travelling through (this varies depending on the pressure, temperature etc (see sound), but is typically around 340 m/s). So, if the speed of sound in our conditions is 340 m/s, when there is a 10 m/s wind relative to the ground, into the wind it will appear that sound is travelling at 330 m/s (340 - 10). Downwind, it will appear that sound is travelling at 350 m/s (340 + 10). Measuring the speed of sound compared to the ground in different directions will therefore enable us to calculate the speed of the air relative to the ground. If the speed of the sound cannot be directly measured, an alternative method is to measure the time that the sound takes to bounce off of a reflector and return to the origin. This is done parallel to the wind and perpendicular (since the direction of the wind is unknown before hand, just determine the time for several different directions). The cumulative round trip effects of the wind in the two orientations slightly favors the sound travelling at right angles to it. Similarly, the effect of an aether wind on a beam of light would be for the beam to take slightly longer to travel round-trip in the direction parallel to the “wind” than to travel the same round-trip distance at right angles to it. “Slightly” is key, in that, over a distance such as a few meters, the difference in time for the two round trips would be only about a millionth of a millionth of a second. At this point the only truly accurate measurements of the speed of light were those carried out by Albert Abraham Michelson, which had resulted in measurements accurate to a few meters per second. While a stunning achievement in its own right, this was certainly not nearly enough accuracy to be able to detect the aether. The experiments. Michelson, though, had already seen a solution to this problem. His design, later known as an interferometer, sent a single source of white light through a half-silvered mirror that was used to split it into two beams travelling at right angles to one another. After leaving the splitter, the beams travelled out to the ends of long arms where they were reflected back into the middle on small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference based on the length of the arms. Any slight change in the amount of time the beams spent in transit would then be observed as a shift in the positions of the interference fringes. If the aether were stationary relative to the sun, then the Earth's motion would produce a shift of about 0.04 fringes. Michelson had made several measurements with an experimental device in 1881, in which he noticed that the expected shift of 0.04 was not seen, and a smaller shift of about 0.02 was. However his apparatus was a prototype, and had experimental errors far too large to say anything about the aether wind. For a measurement of the aether wind, a much more accurate and tightly controlled experiment would have to be carried out. The prototype was, however, successful in demonstrating that the basic method was feasible. 
He then combined forces with Edward Morley and spent a considerable amount of time and money creating an improved version with more than enough accuracy to detect the drift. In their experiment the light was repeatedly reflected back and forth along the arms, increasing the path length to 11m. At this length the drift would be about .4 fringes. To make that easily detectable the apparatus was located in a closed room in the basement of a stone building, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a huge block of marble, which was then floated in a pool of mercury. They calculated that effects of about 1/100th of a fringe would be detectable. The mercury pool allowed the device to be turned, so that it could be rotated through the entire range of possible angles to the "aether wind". Even over a short period of time some sort of effect would be noticed simply by rotating the device, such that one arm rotated into the direction of the wind and the other away. Over longer periods day/night cycles or yearly cycles would also be easily measurable. During each full rotation of the device, each arm would be parallel to the wind twice (facing into and away from the wind) and perpendicular to the wind twice. This effect would show readings in a sine wave formation with two peaks and two troughs. Additionally if the wind was only from the earth's orbit around the sun, the wind would fully change directions east/west during a 12 hour period. In this ideal conceptualization, the sine wave of day/night readings would be in opposite phase. Because it was assumed that the motion of the solar system would cause an additional component to the wind, the yearly cycles would be detectable as an alteration of the magnitude of the wind. An example of this effect is a helicopter flying forward. While on the ground, a helicopter's blades would be measured as travelling around at 50 km/h at the tips. However, if the helicopter is travelling forward at 50 km/h, there are points at which the tips of the blades are travelling 0 km/h and 100 km/h with respect to the air they are travelling through. This increases the magnitude of the lift on one side and decreases it on the other just as it would increase and decrease the magnitude of an ether wind on a yearly basis. The most famous failed experiment. Ironically, after all this thought and preparation, the experiment became what might be called the most famous failed experiment to date. Instead of providing insight into the properties of the aether, Michelson and Morley's 1887 article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was approximately one-sixth of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of aether, it was later said to be within the range of an experimental error that would allow the speed to actually be zero. Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. 
Kennedy and Illingsworth both modified the mirrors to include a half-wave "step", eliminating the possibility of some sort of standing wave pattern within the apparatus. Illingsworth could detect changes on the order of 1/300th of a fringe, Kennedy up to 1/1500th. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding invar to eliminate any remaining thermal effects. Others from around the world increased accuracy, eliminated possible side effects, or both. All of these with the exception of Dayton Miller also returned what is considered a null result. Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller. Miller worked on increasingly large experiments, culminating in one with a 32m (effective) arm length at an installation at the Mount Wilson observatory. To avoid the possibility of the aether wind being blocked by solid walls, he used a special shed with thin walls, mainly of canvas. He consistently measured a small positive effect that varied, as expected, with each rotation of the device, the sidereal day and on a yearly basis. The low magnitude of the results he attributed to aether entrainment (see below). His measurements amounted to only ~10 kps instead of the expected ~30 kps expected from the earth's orbital motion alone. He remained convinced this was due to "partial" entrainment, though he did not attempt a detailed explanation. Though Kennedy later also carried out an experiment at Mount Wilson, finding 1/10 the drift measured by Miller, and no seasonal effects, Miller's findings were considered important at the time, and were discussed by Michelson, Hendrik Lorentz and others at a meeting reported in 1928 (ref below). There was general agreement that more experimentation was needed to check Miller's results. Lorentz recognised that the results, whatever their cause, did not quite tally with either his or Einstein's versions of special relativity. Einstein was not present at the meeting and felt the results could be dismissed as experimental error (see Shankland ref below). In recent times versions of the MM experiment have become commonplace. Lasers and masers amplify light by repeatedly bouncing it back and forth inside a carefully tuned cavity, thereby inducing high-energy atoms in the cavity to give off more light. The result is an effective path length of kilometers. Better yet, the light emitted in one cavity can be used to start the same cascade in another set at right angles, thereby creating an interferometer of extreme accuracy. The first such experiment was led by Charles H. Townes, one of the co-creators of the first maser. Their 1958 experiment put an upper limit on drift, including any possible experimental errors, of only 30 m/s. In 1974 a repeat with accurate lasers in the triangular Trimmer experiment reduced this to 0.025 m/s, and included tests of entrainment by placing one leg in glass. In 1979 the Brillet-Hall experiment put an upper limit of 30 m/s for any one direction, but reduced this to only 0.000001 m/s for a two-direction case (ie, still or partially entrained aether). A year long repeat known as Hils and Hall, published in 1990, reduced this to 2x10. Fallout. This result was rather astounding and not explainable by the then-current theory of wave propagation in a static aether. 
Several explanations were attempted, among them, that the experiment had a hidden flaw (apparently Michelson's initial belief), or that the Earth's gravitational field somehow "dragged" the aether around with it in such a way as locally to eliminate its effect. Miller would have argued that, in most if not all experiments other than his own, there was little possibility of detecting an aether wind since it was almost completely blocked out by the laboratory walls or by the apparatus itself. Be this as it may, the idea of a simple aether, what became known as the "First Postulate", had been dealt a serious blow. A number of experiments were carried out to investigate the concept of aether dragging, or "entrainment". The most convincing was carried out by Hamar, who placed one arm of the interferometer between two huge lead blocks. If aether were dragged by mass, the blocks would, it was theorised, have been enough to cause a visible effect. Once again, no effect was seen. Walter Ritz's Emission theory (or ballistic theory), was also consistent with the results of the experiment, not requiring aether, more intuitive and paradox-free. This became known as the "Second Postulate". However it also led to several "obvious" optical effects that were not seen in astronomical photographs, notably in observations of binary stars in which the light from the two stars could be measured in an interferometer. The Sagnac experiment placed the MM apparatus on a constantly rotating turntable. In doing so any ballistic theories such as Ritz's could be tested directly, as the light going one way around the device would have different length to travel than light going the other way (the eyepiece and mirrors would be moving toward/away from the light). In Ritz's theory there would be no shift, because the net velocity between the light source and detector was zero (they were both mounted on the turntable). However in this case an effect "was" seen, thereby eliminating any simple ballistic theory. This fringe-shift effect is used today in laser gyroscopes. Another possible solution was found in the Lorentz-FitzGerald contraction hypothesis. In this theory all objects physically contract along the line of motion relative to the aether, so while the light may indeed transit slower on that arm, it also ends up travelling a shorter distance that exactly cancels out the drift. In 1932 the Kennedy-Thorndike experiment modified the Michelson-Morley experiment by making the path lengths of the split beam unequal, with one arm being very long. In this version the two ends of the experiment were at different velocities due to the rotation of the earth, so the contraction would not "work out" to exactly cancel the result. Once again, no effect was seen. Ernst Mach was among the first physicists to suggest that the experiment actually amounted to a disproof of the aether theory. The development of what became Einstein's special theory of relativity had the Fitzgerald-Lorentz contraction derived from the invariance postulate, and was also consistent with the apparently null results of most experiments (though not, as was recognised at the 1928 meeting, with Miller's observed seasonal effects). Today relativity is generally considered the "solution" to the MM null result. The Trouton-Noble experiment is regarded as the electrostatic equivalent of the Michelson-Morley optical experiment, though whether or not it can ever be done with the necessary sensitivity is debatable. 
On the other hand, the 1908 Trouton-Rankine experiment that spelled the end of the Lorentz-FitzGerald contraction hypothesis achieved an incredible sensitivity. References Mathematical analysis of the Michelson Morley Experiment. The Michelson interferometer splits light into rays that travel along two paths then recombines them. The recombined rays interfere with each other. If the path length changes in one of the arms the interference pattern will shift slightly, moving relative to the cross hairs in the telescope. The Michelson interferometer is arranged as an optical bench on a concrete block that floats on a large pool of mercury. This allows the whole apparatus to be rotated smoothly. If the earth were moving through an aether at the same velocity as it orbits the sun (30 km/sec) then Michelson and Morley calculated that a rotation of the apparatus should cause a shift in the fringe pattern. The basis of this calculation is given below. Consider the time taken formula_19 for light to travel along Path 1 in the illustration: Rearranging terms: further rearranging: formula_22 hence: formula_23 Considering Path 2, the light traces out two right angled triangles so: Rearranging: So: It is now easy to calculate the difference (formula_27 between the times spent by the light in Path 1 and Path 2: If the apparatus is rotated by 90 degrees the new time difference is: because formula_30 and formula_31 exchange roles. The interference fringes due to the time difference between the paths will be different after rotation if formula_27 and formula_33 are different. This difference between the two times can be calculated if the binomial expansions of formula_35 and formula_36 are used: So: If the period of one vibration of the light is formula_40 then the number of fringes (formula_41), that will move past the cross hairs of the telescope when the apparatus is rotated will be: Inserting the formula for formula_43: But formula_45 for a light wave is the wavelength of the light ie: formula_46 so: If the wavelength of the light is formula_48 and the total path length is 20 metres then: So the fringes will shift by 0.4 fringes (ie: 40%) when the apparatus is rotated. However, no fringe shift is observed. The null result of the Michelson-Morley experiment is nowdays explained in terms of the constancy of the speed of light. The assumption that the light would have a velocity of formula_50 and formula_51 depending on the direction relative to the hypothetical "aether wind" is false, the light always travels at formula_52 between two points in a vacuum and the speed of light is not affected by any "aether wind". This is because, in {special relativity} the Lorentz transforms induce a {length contraction}. Doing over the above calculations we obtain: It is now easy to recalculate the difference formula_27 between the times spent by the light in Path 1 and Path 2: If the apparatus is rotated by 90 degrees the new time difference is: The interference fringes due to the time difference between the paths will be different after rotation if formula_27 and formula_33 are different. Note: if the rest length formula_61 then formula_62 and then formula_63 and formula_64 and, more importantly, formula_65. This is why Michelson took great pains in equalizing the arms of the interferometer. Wave propagation in moving medium. To date, it is pointed out that the medium of light in Michelson-Morley experiment is the air. And the velocity of medium is zero. 
Therefore, after the apparatus is rotated by 90°, there is no movement of the interference fringes. Coherence length. The coherence length of light rays from a source that has wavelengths that differ by formula_69 is: If path lengths differ by more than this amount then interference fringes will not be observed. White light has a wide range of wavelengths and interferometers using white light must have paths that are equal to within a small fraction of a millimetre for interference to occur. This means that the ideal light source for a Michelson Interferometer should be monochromatic and the arms should be as near as possible equal in length. The calculation of the coherence length is based on the fact that interference fringes become unclear when light rays are about 60 degrees (about 1 radian, or one sixth of a wavelength (formula_71)) out of phase. This means that when two beams are: metres out of step they will no longer give a well defined interference pattern. Suppose a light beam contains two wavelengths of light, formula_73 and formula_74, then in: cycles they will be formula_72 out of phase. The distance required for the two different wavelengths of light to be this much out of phase is the coherence length. Coherence length = number of cycles x length of each cycle, so: coherence length = formula_77. Lorentz-Fitzgerald Contraction Hypothesis. After the first Michelson-Morley experiments in 1881 there were several attempts to explain the null result. The most obvious point of attack is to propose that the path that is parallel to the direction of motion is contracted by formula_78, in which case formula_27 and formula_33 would be identical and no fringe shift would occur. This possibility was proposed in 1892 by Fitzgerald. Lorentz produced an "electron theory of matter" that would account for such a contraction. Students sometimes make the mistake of assuming that the Lorentz-Fitzgerald contraction is equivalent to the Lorentz transformations. However, in the absence of any treatment of the time dilation effect, the Lorentz-Fitzgerald explanation alone would result in a fringe shift if the apparatus is moved between two different velocities. The rotation of the earth allows this effect to be tested as the earth orbits the sun. Kennedy and Thorndike (1932) performed the Michelson-Morley experiment with a highly sensitive apparatus that could detect any effect due to the rotation of the earth; they found no effect. They concluded that both time dilation and Lorentz-Fitzgerald contraction take place, thus confirming relativity theory. If only the Lorentz-Fitzgerald contraction applied then the fringe shifts due to changes in velocity would be: formula_81. Notice how the sensitivity of the experiment depends on the difference in path length formula_82, and hence a long coherence length is required. Recent Michelson-Morley experiments. Optical tests of the isotropy of the speed of light have become commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision. More recent experiments still, using other types of experiment such as optical resonators (Eisele "et al."), have shown that the speed of light is constant to within formula_83 m/s.
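The inline expressions in the mathematical analysis above survive only as formula_NN placeholders in this copy, so the key steps of the classical calculation are restated here in LaTeX. The symbols are the conventional ones rather than those of the lost originals: L is the length of each arm, v the speed of the supposed aether wind, c the speed of light and λ the wavelength; the numerical values (L ≈ 10 m, v ≈ 30 km/s so v/c ≈ 10⁻⁴, λ ≈ 500 nm) are assumptions chosen to match the figures quoted in the text.

t_1 = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L}{c}\,\frac{1}{1 - v^2/c^2}   % arm parallel to the motion
t_2 = \frac{2L}{\sqrt{c^2 - v^2}} = \frac{2L}{c}\,\frac{1}{\sqrt{1 - v^2/c^2}}   % arm perpendicular to the motion
\Delta t = t_1 - t_2 \approx \frac{L}{c}\,\frac{v^2}{c^2}   % binomial expansion, keeping terms up to v^2/c^2
N = \frac{2\,\Delta t}{\lambda / c} = \frac{2 L v^2}{\lambda c^2} \approx \frac{2 \times 10\,\mathrm{m} \times 10^{-8}}{5 \times 10^{-7}\,\mathrm{m}} = 0.4   % rotating by 90 degrees swaps the arms, doubling the effect

With the length contraction of special relativity the parallel arm shrinks by \sqrt{1 - v^2/c^2}, which makes t_1 equal to t_2, so N = 0, in agreement with the observed null result.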
7,113
XML - Managing Data Exchange/CSS. Learning objectives. Upon completion of this chapter, for CSS you will be able to Introduction. CSS (Cascading Style Sheets) is a language that describes the presentation form of a structured document. An XML or an HTML based document does not have a set style, but it consists of structured text without style information. How the document will look when printed on paper and viewed in a browser or maybe a cellphone is determined by a style sheet. A good way of making a document look consistent and easy to update is by using CSS, which Wikipedia is a good example of. History of CSS. Style sheets have been around in one form or another since the beginnings of HTML in the early 1990s. Various browsers included their own style language which could be used to customize the appearance of web documents. Originally, style sheets were targeted towards the end-user; early revisions of HTML did not provide many facilities for presentational attributes, so it was often up to the user to decide how web documents would appear. As the HTML language grew, however, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. With these capabilities, style sheets became less important, and an external language for the purposes of defining style attributes was not widely accepted until the development of CSS. The concept of Cascading Style Sheets was originally proposed in 1994 by Håkon Wium Lie. Bert Bos was at the time working on a browser called Argo which used its own style sheets; the two decided to work together to develop CSS. A number of other style sheet languages had already been proposed, but CSS was the first to incorporate the idea of "cascading" -- the capability for a document's style to be inherited from more than one "style sheet." This permitted a user's preferred style to override the site author's specified style in some areas, while inheriting, or "cascading" the author's style in other areas. The capability to cascade in this way permits both users and site authors added flexibility and control; it permitted a mixture of stylistic preferences. Håkon's proposal was presented at the "Mosaic and the Web" conference in Chicago in 1994, and again with Bert Bos in 1995. Around this time, the World Wide Web Consortium was being established; the W3C took an interest in the development of CSS, and organized a workshop toward that end. Håkon and Bert were the primary technical staff on the project, with additional members, including Thomas Reardon of Microsoft, participating as well. By the end of 1996, CSS was nearly ready to become official. The CSS level 1 Recommendation was published in December 1996. Early in 1997, CSS was assigned its own working group within the W3C. The group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2, which was published as an official Recommendation in May 1998. CSS level 3 is still under development as of 2005. Why use CSS? Cleaner Looking Code. A mass of HTML tags which manage design elements generally obscure the content of a page, making the code harder to read and maintain. Using CSS, the content of the page is separated from the design, making content production in formats such as HTML, XHTML, and XML as easy as possible. Pages Will Load Faster. Non-CSS design typically consists of more code than a CSS-designed website. In a non-CSS design, the information about the design is reloaded every time a visitor accesses a new page. 
Additionally, the finer points of design are executed awkwardly. For example, a common method of defining the spacing of a web page is to use blank GIF images inside tables. Using CSS keeps content and design separated, so much less code will be needed. The CSS file loads only once per session, and is saved locally in the user's cache. All information about dimensions is defined in this stylesheet, rendering awkward constructions like blank GIF images unnecessary. Although an increasing amount of Internet users have broadband, the size of a web page can be important to users who are limited to dial-up connections. Suppose a dial-up user accesses a company's website, and this visitor experiences lengthy loading times. It is quite possible that the visitor would stop their visit or form an opinion of this company as "slow." In this way, a seemingly small difference could mean added revenue. Furthermore, bandwidth is not free and most webhosting firms limit the amount used. In fact, many hosts charge based on bandwidth usage, so less code could also reduce costs. Redesign Becomes Trivial. When used properly, CSS is a very powerful tool that gives a web architect complete control over a site's presentation. It is a notation in which the rules of a design are governed. This becomes very useful for a large website which requires a consistent appearance for every type of element (such as a title, a subtitle, a piece of code, or a paragraph). For example, suppose a company has a 1,200 page website which took many months to complete. The company then undergoes a rebranding and thus the font, the background, the style of hyperlinks, and so forth needs to be updated with the new corporate design. If the site was engineered properly using CSS, this change would be as simple as editing the appropriate lines of a single CSS file (assuming it is an external stylesheet). If CSS is not used, the code that manages the appearance is stored in each of the pages. In order to update the design in this case, each file would have to be updated "individually". Accessibility. People with lowered vision or users with special web browsers, e.g. people that are blind, will probably like a CSS designed website better than one not designed using CSS. Because CSS allows you to define the reading order separately from the visual layout it makes it easier for the special web browsers to read the page. Bear in mind that anyone who wears glasses or contact lenses can be considered to have lower vision. Many designers lock the font size in pixels which prevents the user changing the font size. Good CSS design allows the user to increase or decrease the font size at will making pages more usable. A significant number of web surfers like to use a magnification of 300% or more. Giving the user the opportunity to change the font size will not make any difference for the normal user, but it can make a difference for people that have lowered vision. Ask yourself the question: who is the website made for? The visitors or the designer? Websites designed with CSS tend to display better than table-based designs in the web browsers used in PDAs and cellphones. The use of cellphones for browsing will probably continue to increase. A table-based design will make web pages inaccessible to these users. Be careful with your CSS designs. Misuse of absolute positioning and absolute rather than relative sizes can make your webpages less accessible rather than more accessible. A good table design is better than a bad CSS design. 
Better results in search engines. Extensive use of tables confuses the search engines, they can actually get problems separating content from code. The search engine robots start reading on the top of the page, and they want to find out how relevant the webpage is as fast as possible. Again, less code will make it easier for the search engines to find code that's relevant, and it will probably give your webpage a better ranking. Disadvantages of CSS. The use of CSS for styling has few disadvantages. However some browsers, especially older ones, will sometimes present the page incorrectly. When I was gathering information for this chapter it became clear to me that many experts think that formatting XML with CSS is not the future of the web. The main view is that XSL will be the new standard. So make sure you read through the previous chapter of this book one more time. The formatting parts of XSL and CSS will be quite similar. For example, you will be able to use all CSS1 and CSS2 properties and values in XSL with the same meaning as in CSS. CSS levels. The first CSS specification to become an official W3C Recommendation is CSS level 1, published in December 1996. Among its capabilities is support for: The W3C maintains the CSS1 Recommendation. CSS level 2 was developed by the W3C and published as a Recommendation in May 1998. A superset of CSS1, CSS2 includes a number of new capabilities, among them the absolute, relative, and fixed positioning of elements, the concept of media types, support for aural style sheets and bidirectional text, and new font properties such as shadows. The W3C maintains the CSS2 Recommendation. CSS level 2 revision 1 or CSS 2.1 fixes errors in CSS2, removes poorly-supported features and adds already-implemented browser extensions to the specification. It's currently a Candidate Recommendation. CSS level 3 is currently under development. The W3C maintains a CSS3 progress report. CSS Syntax and Properties. The following section contains a list of some of the most common CSS properties. A complete list can be found here. The syntax for the use of CSS in an XML document is the same as that for HTML. The difference is in how you link your CSS file to the XML document. To do this you have to write codice_1 before the root element of your XML document, where "X.css" of course is the name of the CSS file. As mentioned earlier in this chapter, CSS is a set of rules that determines how elements in a document will be shown. The rule has two parts: a selector and a group of one or more declarations surrounded by braces (curly brackets): The selector is normally the tag you wish to style. Here is an example of a simple rule containing a single declaration:<br> Result: All h1-elements in the document are shown with the text color red. The general syntax. Rules are usually defined like this: The declaration is formed like this: Remember that there can be several declarations in one rule. A common mistake is to mix up colons, which separate the property and value of a declaration, and semicolons, which separate declarations. A selector chooses the elements for which the rule applies and the declaration sets the value for the different properties of the elements that are chosen. Back to our example: In our example: The property "color" gets the value "red " Multiple declarations can be written either on a single line or over several lines, because whitespace collapses: or Details of the properties defined by CSS can be found at CSS Programming#CSS1 Properties. Summary. 
Cascading Style Sheets (CSS) are used with webpages to define how information saved in HTML or XML is presented. While XML and HTML create and preserve a document's structure and content, CSS defines the appearance and placement of the objects within the document. All of this presentation information is saved in a separate file, the .css file. The CSS file defines, for example, text size, background color, and font families; the placement of pictures and other elements is also defined there. Used correctly, CSS makes a webpage much easier to create and, even more important, to maintain, because you only have to make changes in the CSS file to change the whole website. <br style="clear:both"> References and useful links. References:<BR> Useful links:<BR> Exercises. Exercise 1. Using the CSS file provided below, create a price list for books as an XML document. <?xml version="1.0"?> Exercise1.css: Exercise 2. Create a personal homepage, where you introduce yourself. The page should contain one header, one footer, and navigation as a list of links. Solutions. Solutions CSS Challenges. Copy and paste the HTML, then take up the challenge to create a stylesheet to match the picture!
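The concrete code samples from the "CSS Syntax and Properties" section above (the codice_1 link and the single-declaration h1 rule) did not survive in this copy. As a sketch of the standard forms only: an XML document is normally linked to a stylesheet with the xml-stylesheet processing instruction placed before the root element, and a rule is a selector followed by declarations in braces. The file name X.css and the books/book element names are purely illustrative.

<?xml-stylesheet type="text/css" href="X.css"?>
<books>
  <book>A price list entry would go here</book>
</books>

And in X.css:

h1 { color: red; }
book { display: block; font-size: 14pt; }

The first rule is the one described in the text (all h1 elements shown with red text); the second shows how the same selector/declaration syntax styles an XML element.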
2,759
DataPerfect/User Formulas. DataPerfect Formulas Date. Add months to a date value. Description Notes about the formula Calculate age to the month. Description Notes about the formula Calculate age to the year. (Estimated Calculation). Description Notes about the formula Calculate Exact Age to the day. (Most Accurate). Description Notes about the formula Convert date to numeric field (without dashes). Description Notes about the formula Convert date to words and numbers. Description Notes about the formula Convert day of week to wor. Description Notes about the formula Convert month to string. Description Notes about the formula Convert string to date. Description Notes about the formula Extract last two digits from date field. Description Notes about the formula Find leap year. Description Notes about the formula Reverse date sort. Description Notes about the formula IF THEN. Example of nesting IF statements. Description Notes about the formula Numeric. Extracting decimals.. Description Notes about the formula Reverse digit orders.. Description Notes about the formula Round up to the nearest nickel.. Description Notes about the formula Sum field value for records in a report.. Description Notes about the formula Range Check. Disjointed.. Description Notes about the formula Styles.. Description Notes about the formula Search. Date and account number extractions.. Description Notes about the formula Find all records or specific records only.. Description Notes about the formula Search for a wildcard character (* or ?).. Description Notes about the formula Strings. Canadian postal codes.. Description Notes about the formula Capitalize first character of every word.. Description Notes about the formula Capitalize the first letter of a field.. Description Notes about the formula Determine Country.. Description Notes about the formula Dot Leaders.. Description Notes about the formula Extract from phone number field.. Description Notes about the formula Remove leading blanks from an alphanumeric field. Description Notes about the formula Time. Calculate hour.. Description Notes about the formula Convert military to regular time (string format).. Description Notes about the formula Source notes. This wiki page ws generated from the DataPerfect FORMULA database. The formatting for the formula descriptions have been revised.
581
Graphic Design/Web Design. Today, web site design is a very important part of graphic design. The Eight C's of web design are basic principles to keep in mind while designing web sites. These eight tips can also apply to other areas of graphic design.
57
Organic Chemistry/Spectroscopy. There are several spectroscopic techniques which can be used to identify organic molecules: infrared (IR), mass spectroscopy (MS) UV/visible spectroscopy (UV/Vis) and nuclear magnetic resonance (NMR). IR, NMR and UV/vis spectroscopy are based on observing the frequencies of electromagnetic radiation absorbed and emitted by molecules. MS is based on measuring the mass of the molecule and any fragments of the molecule which may be produced in the MS instrument. UV/Vis Spectroscopy. UV/Vis spectroscopy is an absorption spectroscopy technique that utilizes electromagnetic radiation in the 10 nm to 700 nm range. The energy associated with light between these wavelengths can be absorbed by both non-bonding n-electrons and π-electrons residing within a molecular orbital. This absorption of energy causes the promotion of an electron from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). More specifically, the electron is said to undergo either an n→π* transition or π→π* transition. It is also this absorption of visible light energy that gives rise to the perceived color of pigmented compounds. As a result of this phenomena, UV/Vis is a technique often employed in organic chemistry in order to identify the presence of free electrons or double (π) bonds within a molecule. The wavelength that is most absorbed by the molecule is known as the λ-max, and it is this number that can be used to make comparative analysis of different molecules. This spectroscopic technique can also be used to identify some molecules with large conjugated π-systems or carbonyl groups by utilizing the Woodward-Fieser rules, which relies on a set of empirical values to predict the λ-max of a compound. NMR Spectroscopy. Nuclear Magnetic Resonance (NMR) Spectroscopy is one of the most useful analytical techniques for determining the structure of an organic compound. There are two main types of NMR, 1H-NMR (Proton NMR) and 13C-NMR (Carbon NMR). NMR is based on the fact that the nuclei of atoms have a quantized property called spin. When a magnetic field is applied to a 1H or 13C nucleus, the nucleus can align either with (spin +1/2) or against (spin -1/2) the applied magnetic field. These two states have different potential energies and the energy difference depends on the strength of the magnetic field. The strength of the magnetic field about a nucleus, however, depends on the chemical environment around the nucleus. For example, the negatively charged electrons around and near the nucleus can shield the nucleus from the magnetic field, lowering the strength of the effective magnetic field felt by the nucleus. This, in turn, will lower the energy needed to transition between the +1/2 and -1/2 states. Therefore, the transition energy will be lower for nuclei attached to electron donating groups (such as alkyl groups) and higher for nuclei attached to electron withdrawing groups (such as a hydroxyl group). In an NMR machine, the compound being analyzed is placed in a strong magnetic field and irradiated with radio waves to cause all the 1H and 13C nuclei to occupy the higher energy -1/2 state. As the nuclei relax back to the +1/2 state, they release radio waves corresponding to the energy of the difference between the two spin states. The radio waves are recorded and analyzed by computer to give an intensity versus frequency plot of the sample. This information can then be used to determine the structure of the compound. 
Aromatics in H-NMR. Electron Donating Groups vs. Electron Withdrawing Groups. On monosubstituted rings, electron donating groups shift the ring protons to lower chemical shifts (upfield), because they release electron density into the ring and so increase the shielding of the ring protons; this is the same electron release that stabilizes a carbocation in electrophilic substitution. An example of an electron donating group is methyl (-CH3). Conversely, electron withdrawing groups shift the ring protons to higher chemical shifts (downfield), because they pull electron density away from the ring; this is the same effect that stabilizes an electron rich carbanion. Some examples of electron withdrawing groups are halogens (-Cl, -F) and carboxylic acid (-COOH). Looking at the 1H NMR spectrum of ethylbenzene, the methyl protons are the most shielded and so appear at the lowest chemical shift (about 1.2 ppm), the CH2 protons attached directly to the ring come next (about 2.6 ppm), and the aromatic ring protons, which are strongly deshielded by the ring current, have the highest chemical shift (about 7.2 ppm). Disubstituted Rings. The sum of integrated intensity values for the entire aromatic region shows how many ring protons remain, so a total value of 4 indicates that the ring has 2 substituents. When a benzene ring has two substituent groups, each exerts an influence on subsequent substitution reactions. The site at which a new substituent is introduced depends on the orientation of the existing groups and their individual directing effects. For a disubstituted benzene ring, there are three possible substitution patterns (ortho, meta and para), each with its own NMR pattern. Note that para-substituted rings usually show two symmetric sets of peaks that look like doublets. The order of these peaks depends on the nature of the two substituents; the three chloronitrobenzene isomers, for example, give clearly different aromatic patterns. Mass Spectrometry. A mass spectrometer measures the mass of ions relative to their charge. Many times, some form of separation is done beforehand, enabling a spectrum to be collected on a relatively pure sample. An organic sample can be introduced into a mass spectrometer and ionised. This also breaks some molecules into smaller fragments. The resulting mass spectrum shows: 1) The heaviest ion is simply the ionised molecule itself. We can simply record its mass. 2) Other ions are fragments of the molecule and give information about its structure. Common fragments are: Infrared spectroscopy. Absorbing infrared radiation makes covalent bonds vibrate. Different types of bond absorb different wavelengths of infrared: Instead of wavelength, infrared spectroscopists record the wavenumber; the number of waves that fit into 1 cm. (This is easily converted to the energy of the wave.) By convention the spectra are recorded "backwards" (from 4000 to 500 cm-1 is typical), often with a different scale below 1000 cm-1 (to see the fingerprint region more clearly) and upside-down (% radiation transmitted is recorded instead of the absorbance of radiation). The wavenumbers of the absorbed IR radiation are characteristic of many bonds, so IR spectroscopy can determine which functional groups are contained in the sample. For example, the carbonyl (C=O) bond will absorb at 1650-1760 cm-1. Summary of absorptions of bonds in organic molecules. w:Infrared Spectroscopy Correlation Table Absorptions listed in cm-1. Typical method. A beam of infra-red light is produced and split into two separate beams. One is passed through the sample, the other passed through a reference which is often the substance the sample is dissolved in.
The beams are both reflected back towards a detector; however, they first pass through a splitter which rapidly alternates which of the two beams enters the detector. The two signals are then compared and a printout is obtained. A reference is used for two reasons: References & notes. SDBS is a free on-line database of spectral analysis including many IR, NMR and MS graphs.
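The text above notes that a wavenumber is "easily converted to the energy of the wave" but the conversion itself is not shown in this copy. Here it is in LaTeX, using the carbonyl absorption near 1700 cm⁻¹ quoted above as the example; h, c and N_A are the usual physical constants.

\tilde{\nu} = \frac{1}{\lambda}, \qquad E = h c \tilde{\nu}
E = (6.63 \times 10^{-34}\,\mathrm{J\,s}) \, (3.00 \times 10^{10}\,\mathrm{cm\,s^{-1}}) \, (1700\,\mathrm{cm^{-1}}) \approx 3.4 \times 10^{-20}\,\mathrm{J}
E_{\mathrm{molar}} = N_A E \approx 20\,\mathrm{kJ\,mol^{-1}}

So a photon at a typical C=O stretching frequency carries roughly 20 kJ per mole of bonds, far less than the energies involved in the electronic transitions probed by UV/Vis.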
1,775
Calculus/Higher Order Derivatives. The second derivative, or second order derivative, is the derivative of the derivative of a function. The derivative of the function formula_1 may be denoted by formula_2, and its double (or "second") derivative is denoted by formula_3. This is read as "formula_4 double prime of formula_5", or "the second derivative of formula_1". Because the derivative of the function formula_4 is defined as a function representing the slope of formula_4, the second derivative is the function representing the slope of the first derivative. Furthermore, the third derivative is the derivative of the derivative of the derivative of a function, which can be represented by formula_1. This is read as "formula_4 triple prime of formula_5", or "the third derivative of formula_1". This can continue as long as the resulting derivative is itself differentiable, with the fourth derivative, the fifth derivative, and so on. Any derivative beyond the first derivative can be referred to as a higher order derivative. Notation. Let formula_1 be a function in terms of formula_5. The following are notations for higher order derivatives. Warning: You should not write formula_15 to indicate the formula_16-th derivative, as this is easily confused with the quantity formula_1 all raised to the nth power. The Leibniz notation is useful because of its precision. Newton's dot notation extends to the second derivative, formula_19, but typically no further in the applications where this notation is common. Examples. Find the third derivative of formula_20 with respect to formula_5. Repeatedly apply the Power Rule to find the derivatives. Find the 3rd derivative of formula_25 with respect to formula_5. For applications of the second derivative in finding a curve's concavity and points of inflection, see "Extrema and Points of Inflection" and "Extreme Value Theorem". For applications of higher order derivatives in physics, see the "Kinematics" section.
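Since the worked examples above survive only as formula_NN placeholders, here is a concrete run of the same procedure (repeatedly applying the Power Rule). The polynomial is chosen for this sketch and is not necessarily the one used in the original examples.

f(x) = x^5 + 2x^3
f'(x) = 5x^4 + 6x^2
f''(x) = 20x^3 + 12x
f'''(x) = 60x^2 + 12

Each line differentiates the previous one term by term, multiplying by the old exponent and lowering it by one.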
460
Sanskrit/Introduction. Sanskrit is an ancient language of the Indo-European family, from which many languages found in northern India are descended. The language is also known as 'devabhāṣā' (language of the gods) or 'devavaani' (voice of the gods). Since the 10th century CE, Sanskrit has been primarily written in the Devanaagari script, but it is common for the language to be printed or written in Indian vernacular scripts. The word Sanskrit is derived from the word 'saṃskṛtá', meaning "polished" or "perfectly done". The earliest form of Sanskrit is known as Vedic Sanskrit and was spoken by the people of India in the second millennium BCE. Classical Sanskrit, mostly associated with religion, philosophy, and literature, emerged later, in the 5th century BCE, after PaaNini's extensive grammar codified the language. Sanskrit has a rich literary tradition whose most famous poets and playwrights include Kaalidaasa, best known for his compositions "Meghadutam", "Kumaarasambhavam", "Abhijnaanasakuntalam", and "Raghuvamsam". The Vedas are written in the Sanskrit language and are the roots of Vedic culture. They are considered to be a collection of great knowledge and spiritual puissance. Samskrita Bharati, an NGO, has taken the initiative to popularize the Sanskrit language.
340
German/Level I/Das Fest. <br clear="all"> Lesson I.10: Das Fest This lesson deals with the Christmas time in the German language countries, where you learn some traditions and vocabularies about Christmas. You'll also learn about "there is" and "there are" in German and about the dative case. Dialogue. Read and listen to the following dialogue between mother and daughter: Roswitha and Anja. Both of them want to decorate for Christmas. Weihnachten in Deutschland. In Germany the advent season begins on Sunday four weeks before Christmas. It's the day where many families decorate their houses or flats, begin to bake some biscuits and start to sing some Christmas carols. One typical decoration is the advent wreath, which has four candles - one candle is lit in the first week, two candles in the second week, etc. - and normally stands on the dining table or on the coffee table. Another tradition, especially for children, is the advent calendar that you hang on the wall. They've often got 24 doors and you're only allowed to open one a day. Other typical Christmas decorations are a crib, a Räuchermann - a wooden figure that blows flavour of incense cones - in Northern Germany a Moosmann, Christmas pyramids and Schwibbogen and nutcrackers and poinsettias and much more. Most Christmas markets start in the first week of Advent. There you can buy some little Christmas presents, decorations, ride some carnival rides, and often drink some hot spiced wine - the children drink punch for children, listen to carolers and enjoy a warm, snowy atmosphere. On the 6th of December, German children celebrate St. Nicholas Day. The children put a boot in front of the door and wait until St. Nicholas brings little presents that are often sweets, walnuts, apples, tangerines and oranges. Bad children get birching by Knecht Ruprecht (which is now forbidden in Germany). Pupils do a secret Santa with other pupils on the last school days before the Christmas holidays, which are often two or three weeks long. St. Nicholas looks similar to Santa Claus who brings big presents on the evening of the 24th of December; in Southern Germany Christkind brings the presents. Most families decorate their Christmas trees on this day with Christmas baubles and tinsel and candles and so forth. After the Christmas dinner, the whole family sits next to the Christmas tree and exchanges gifts. Weihnachtsessen. das Plätzchen, der Keks cookie die Ausstecher cookie cutter das Nudelholz rolling pin die Vanillekipferl vanilla cornets der Lebkuchen gingerbread das Lebkuchenhaus gingerbread house die Kokosmakrone coconut macaroon die Spitzbuben jammy dodgers, linzer eye die Pfeffernuss spice nut der Christstollen stollen die Marzipankartoffel marzipan potato die Weihnachtsgans Christmas goose der Weihnachtskarpfen Christmas carp der Truthahn turkey Würstchen und Kartoffelsalat sausages and potato salad das Spekulatius almond biscuit der Baumkuchen pyramid cake der Mürbeteig shortcrust der Springerle springerle das Bethmännchen bethmännchen der Zimtstern star-shaped cinnamon biscuit das Früchtebrot fruitcake der Bratapfel roast apple der Dominostein domino die Zuckerstange candy cane der Glühwein hot spiced wine der Kinderpunsch punch for children das Kenkentjüch kenkentjüch die gebrannte Mandeln roasted almonds das Weihnachtsessen Christmas dinner das Hirschhornsalz salt of harts horn der Zimt cinnamon der Puderzucker icing powdered sugar das Aroma flavour So in Swabian they call it "Plätzle" or "Brötle" and in Bavaria "Platzerl". 
In Switzerland they call it "Guetsli". The rolling pin (das Nudelholz) is called "Nudelwalker" in Austria and Bavaria and "Wallholz" in Switzerland.
1,031
Serial Programming/Forming Data Packets. Just about every idea for communicating between computers involves "data packets", especially when more than 2 computers are involved. The idea is very similar to putting a check in an envelope to mail to the electricity company. We take the data (the "check") we want to send to a particular computer, and we place it inside an "envelope" that includes the address of that particular computer. A packet of data starts with a preamble, followed by a header, followed by the raw data, and finishes up with a few more bytes of transmission-related error-detection information -- often a . We will talk more about what we do with this error-detection information in the next chapter, Serial Programming/Error Correction Methods. The accountant at the electricity company throws away the envelope when she gets the check. She already knows the address of her own company. Does this mean the "overhead" of the envelope is useless ? No. In a similar way, once a computer receives a packet, it immediately throws away the preamble. If the computer sees that the packet is addressed to itself, and has no errors, then it discards the wrapper and keeps the data. The header contains the destination address information used by all the routers and switches to send the complete packet to the correct destination address, like a paper envelope bears the destination address used by the postal workers that carry the mail to the correct destination address. Most protocol use a header that, like most paper mail envelopes, also include the source address and a few other bits of transmission-related information. Unfortunately, there are dozens of slightly different, incompatible protocols for data packets, because people pick slightly different ways to represent the address information and the error-detection information. ... gateways between incompatible protocols ... Packet size tradeoffs. Protocol designers pick a maximum and minimum packet size based on many tradeoffs. Start-of-packet and transparency tradeoffs. Unfortunately, it is impossible for any communication protocol to have all these nice-to-have features: Some communication protocols break transparency, requiring extra complexity elsewhere -- requiring higher network layers to implement work-arounds such as w:binary-to-text encoding or else suffer mysterious errors, as with the w:Time Independent Escape Sequence. Some communication protocols break "8-bit" -- i.e., in addition to the 256 possible bytes, they have "extra symbols". Some communication protocols have just a few extra non-data symbols -- such as the "long pause" used as part of the Hayes escape sequence; the "long break" used as part of the SDI-12 protocol; "command characters" or "control symbols" in 4B5B coding, 8b/10b encoding; etc. Other systems, such as 9-bit protocols, transmit 9 bit symbols. Typically the first 9-bit symbol of a packet has its high bit set to 1, waking up all nodes; then each node checks the destination address of the packet, and all nodes other than the addressed node go back to sleep. The rest of the data in the packet (and the ACK response) is transmitted as 9 bit symbols with the high bit cleared to 0, effectively 8 bit values, which is ignored by the sleeping nodes. (This is similar to the way that all data bytes in a MIDI message are effectively 7 bit values; the high bit is set only on the first byte in a MIDI message). Alas, some UARTs make it awkward, difficult, or impossible to send and receive such 9-bit characters. 
Some communication protocols break "unique start" -- i.e., they allow the no-longer-unique start-of-packet symbol to occur elsewhere -- most often because we are sending a file that includes that byte, and "simple copy" puts that byte in the data payload. When a receiver is first turned on, or when cables are unplugged and later reconnected, or when noise corrupts what was intended to be the real start-of-packet symbol, the receiver will incorrectly interpret that data as the start-of-packet. Even though the receiver usually recognizes that something is wrong (checksum failure), a single such noise glitch may lead to a cascade of many lost packets, as the receiver goes back and forth between (incorrectly) interpreting that data byte in the payload as a start-of-packet, and then (incorrectly) interpreting a real start-of-packet symbol as payload data. Even worse, such common problems may cause the receiver to lose track of where characters begin and end. Early protocol designers believed that once synchronization has been lost, there must be a unique start-of-packet character sequence required to regain synchronization. Later protocol designers have designed a few protocols, such as CRC-based framing, that not only break "unique start" -- allow the data payload contain the same byte sequence as the start-of-packet, supporting simple-copy transparency -- they don't even need a fixed unchanging start-of-packet character sequence. In order to keep the "unique start" feature, many communication protocols break "simple copy". This requires a little extra software and a little more time per packet than simply copying the data -- which is usually insignificant with modern processors. The awkwardness comes from (a) making sure that the entire process -- the transmitter encoding/escaping a chunk of raw data into a packet payload that must not include the start-of-packet byte, and the receiver decoding/unescaping the packet payload into a chunk of raw data -- is completely transparent to any possible sequence of raw data bytes, even if those bytes include one or more start-of-packet bytes, and (b) since the encoded/escaped payload data inevitably requires more bytes than the raw data, we must make sure we don't overflow any buffers even with the worst possible expansion, and (c) unlike "simple copy" where a constant bitrate of payload data bits results in the same constant goodput of raw data bits, we must make sure that the system is designed to handle the variations in payload data bitrate or raw data bit goodput or both. Some of this awkwardness can be reduced by using consistent-overhead byte stuffing (COBS). rather than variable-overhead byte stuffing techniques such as the one used by SLIP. Calculate the CRC and append it to the packet *before* encoding both the raw data and the CRC with COBS. preamble. Two popular approaches to preambles are:
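The list of preamble approaches promised just above did not survive in this copy. Returning to the byte-stuffing idea discussed earlier in this section, here is a small sketch in C of how a payload can be escaped so that the frame marker never appears inside it. It uses the well-known SLIP framing constants from RFC 1055 rather than COBS, purely as an illustration; the function name frame() and the example payload are made up for this sketch.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* SLIP framing constants from RFC 1055. */
#define SLIP_END     0xC0  /* marks the end of a frame */
#define SLIP_ESC     0xDB  /* introduces an escape sequence */
#define SLIP_ESC_END 0xDC  /* ESC + ESC_END stands for a data byte 0xC0 */
#define SLIP_ESC_ESC 0xDD  /* ESC + ESC_ESC stands for a data byte 0xDB */

/* Encode len raw bytes into out and return the framed length.
   Worst case every byte is escaped, so out must hold 2*len + 2 bytes. */
static size_t frame(const uint8_t *raw, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = SLIP_END;              /* leading marker flushes line noise at the receiver */
    for (size_t i = 0; i < len; i++) {
        if (raw[i] == SLIP_END) {     /* data byte collides with the frame marker */
            out[n++] = SLIP_ESC;
            out[n++] = SLIP_ESC_END;
        } else if (raw[i] == SLIP_ESC) {
            out[n++] = SLIP_ESC;
            out[n++] = SLIP_ESC_ESC;
        } else {
            out[n++] = raw[i];        /* ordinary bytes are copied unchanged */
        }
    }
    out[n++] = SLIP_END;              /* the end-of-frame marker is now unique */
    return n;
}

int main(void)
{
    /* Example payload that happens to contain both special bytes. */
    uint8_t payload[] = { 0x01, 0xC0, 0x02, 0xDB, 0x03 };
    uint8_t framed[2 * sizeof payload + 2];
    size_t n = frame(payload, sizeof payload, framed);

    for (size_t i = 0; i < n; i++)
        printf("%02X ", framed[i]);
    printf("\n");   /* prints: C0 01 DB DC 02 DB DD 03 C0 */
    return 0;
}

Note the ordering rule stated above: compute the checksum or CRC over the raw bytes first, then escape raw data and CRC together, so the receiver can unescape and then verify.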
1,463
Prolog/Introduction. This section covers the installation of a prolog compiler, loading your first program, and querying it. It then explains how to use facts and variables in your programs and queries. Getting Started. Before anything can be done, a prolog compiler and a text editor need to be installed on your system. A text editor will allow you to write your prolog programs and the prolog compiler (also known as the interpreter) will allow you to execute them. "Prolog Compilers" The following prolog implementations are free (at least for personal or educational use, be sure to read the terms). Simply download one and install it according to the instructions on the website: The following implementations aren't free: "Text Editors" The programs that you will write are simple text files, to be read and written with any text editor. Some prolog implementations come with their own editor, but for those that don't here's a list of text editors. These provide the basic function that are useful for writing prolog programs, such as indentation, bracket matching and some can even be adjusted to highlight the syntax of your prolog code. First Steps. Once you've installed your prolog implementation, it's time to write the first program and load it into the interpreter (mainly to see if everything works). Fire up your text editor and create a text file with just the following line in it: human(john). Be precise, capitalization is important in prolog, as is the period. This will be your program (also known as the database or the knowledge base). Give it a nice name like prolog1.pl and save it. "Note: The extension pl isn't officially associated with prolog and can cause conflicts if you're also programming in Perl, which uses .pl as well. If this is a problem you can use pro or pr or anything you like just as well." Now start your prolog interpreter. Most prolog interpreters will show you a window with some startup information and then the line with a cursor behind it. There is usually a menu for loading files into the interpreter. If there isn't, you can type the following to load your file: consult('FILEPATH'). And press enter. Once again, be precise, no capitals and remember the dot. Replace FILEPATH with the name and directory of your file. For instance if your file is located in C:\My Documents\Prolog\prolog1.pl then use consult('c:/my documents/prolog/prolog1.pl'). or the shorthand ['c:/my documents/prolog/prolog1.pl']. Note that the slashes are the other way around, since the backslash (\) has special meaning in Prolog (and most other languages). If you are using a UNIX based system such as Linux, the commands may look something like this consult('/home/yourName/prolog/prolog1.pl'). ['/home/yourName/prolog/prolog1.pl']. Your interpreter will now hopefully tell you that the file is loaded correctly. If it doesn't, consult the help file or manual of your implementations on how to consult files. Also, you can tell prolog interpreter to load file automatically, if running it with the key codice_1, like this: prolog -s /home/yourName/prolog/prolog1.pl After some information you will see To see if everything is working, type human(john). (don't forget the period) and press Enter. Prolog will answer with true. Type human(Who). , press Enter and Prolog will answer Who = john. To exit prolog, type halt. Syntax, Facts and Queries. The line human(john). in the previous example was a prolog sentence in the form of a predicate. This type of sentence is called a fact. 
Predicates consist of one word of one or more characters, all lowercase, possibly followed by a number of terms. The following are examples of valid predicates: human(john) father(david, john) abc(def,ghi,jkl,m) tree p(a ,f ,d) The terms (the 'words' within parentheses) can take many forms, but for now we will stick to constants. These are words, again all lowercase. The first character of both a predicate and a constant needs to be a letter. Using predicates we can add facts to a program: human(john). human(suzie). human(eliza). man(david). man(john). woman(suzie). woman(eliza). parent(david, john). parent(john, eliza). parent(suzie, eliza). Note the period '.' behind each line to show that the line is over. This is very important, if you forget it, your interpreter will not understand the program. You should also be aware that the names chosen for the predicates and terms do not actually mean anything to the prolog interpreter. They're just chosen to show what meaning you have for the program. We could easily replace the word "human" with the word "spaceship" everywhere and the interpreter wouldn't know the difference. If we load the above program into the interpreter we can run a "query" on it. If you type human(john). Prolog will answer true. and if you type woman(john). Prolog will answer false. This also seems fairly obvious, but it's important to see it the right way. If you ask Prolog codice_2, it means you are asking Prolog if this statement is true. Clearly Prolog can't see from the statement whether it's true, so it consults your file. It checks all the lines in the program to see if anyone matches the statement and answers Yes if it finds one. If it doesn't, it answers codice_3 . Note that if you ask ?- human(david). Prolog will answer codice_3, because we have not added that fact to the database. This is important: if Prolog can't prove something from the program, it will consider it not true. This is known as the "closed world assumption". Variables. We'll update the program with human(david), so that all people in the database are human, and either a man or a woman What we have now is still not a very expressive language. We can gain a lot more expressiveness by using "variables" in our query. A variable is a word, just like terms and predicates, with the exception that it starts with an uppercase letter and can have both upper and lowercase characters after that. Consider the following query human(A). Now, the term of the predicate is a variable. Prolog will try to bind a term to the variable. In other words, you are asking Prolog what A needs to be for human(A) to be true. ?- human(A). Prolog will answer A = david ; Which is true, because the database contains the line human(david). If you press enter, Prolog will answer Yes and give you back your cursor. If you press semicolon codice_5 Prolog will show you the rest of the possibilities A = john ; A = suzie ; A = eliza. After eliza, there are no further possibilities. If you query Prolog with more than one variable it will show you all instantiations of the variables for which the query is true: When prolog is asked a query with a variable it will check all lines of the program, and attempt to "unify" each predicate with the query. This means that it will check if the query matches the predicate when the variables are instantiated a certain way. It can unify human(A) with human(john) by making A john, but it can't unify man(A) with human(john), because the predicates don't match. 
If we want to make it even more difficult for prolog we can use two predicates in our query, for instance: Now we are asking prolog for a human A who has a parent B. The comma means "and", indicating that both predicates need to be true, for the query to be true. To check this query, prolog will first find an instantiation to make the first predicate true--say it make A equal to john--and then it will try to make the second predicate true--with A equal to john. If it has found two instantiations for A and B that make both predicates true, it will return them to you. You can press Enter to end the program, or a semi-colon to see more options. Prolog may make a choice for A, to satisfy the first predicate that doesn't work with the second. Say it chooses A = suzie to satisfy human(A); no choice for B will satisfy parent(B, suzie), so prolog will give up its choice of suzie for A, and try another name. This is called backtracking. In the example above, prolog will first find human(david) in the program and unify A with david. To make the second predicate true, it needs to find an instantiation for parent(B, david). It can't find any, so it will look for a new instantiation of human(A). It tries the next option: A = john. Now it needs to instantiate parent(B, john). It finds B = david in the line parent(david, john) and reports back to you A = john B = david If you press semicolon it will try to find a new instantiation for the second predicate. If that fails it will try to find a new instantiation for the first predicate and so forth until it runs out of options. There is one special variable, called the "anonymous" variable, for which the underscore (_) character is used. When you use this character in a query, you basically say that you don't care how this variable is instantiated, i.e. you don't care which term it's bound to, as long as it's bound to something. If you ask Prolog Prolog will answer A = david; A = john; A = suzie; It will not tell you how it instantiates _. However if you ask Prolog This will not be true by default, Prolog needs to find an instantiation for all three anonymous variables in the database, such as abc(d,e,f). Since the predicate abc isn't in the database at all, the query fails. You can use the anonymous variable in your database as well. Placing human(_). in your database will mean that any term, whether it already exists or not, is human. So the queries and would be true with the above fact in the database. Here the anonymous variable is used to state a property of all objects, instead of just one. If we want to state that a specific group of objects has a certain property, we need rules. The next section deals with this. Examples. The following program describes the public transport systems of some cities: We can ask prolog if there is a city which has both a tram system and a subway: Exercises. (x) Find a Family Tree somewhere, or make one up (a real one will make it easier to check your answers). Implement part of the tree (around ten people) in a prolog program using the predicates woman/1, man/1, parent/2. The number behind the predicate describes how many arguments the predicate takes. So parent/2 describes a predicate like parent(john, mary). You can peruse w:Category:Family_trees for a suitable family tree. Write prolog queries for the following commands and questions. Don't worry if some people are returned more than once. We'll discover how to deal with this later on. 
Can you think of a way to display those women that do not have a father listed in the database? Can you describe what you would need to write such a query? The answers to selected exercises can be found here: Prolog/Introduction/Answers References. next: Rules
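The transport-systems database and query mentioned in the Examples section above were lost from this copy. As a hedged reconstruction of what such a program could look like, the city names and the predicate names tram/1 and subway/1 below are invented for illustration only:

tram(amsterdam).
tram(vienna).
subway(london).
subway(vienna).

Asking for a city with both a tram system and a subway is then a conjunction, just like the human/parent query above:

?- tram(X), subway(X).
X = vienna.

Prolog finds the binding by trying each tram fact in turn and backtracking until the subway goal also succeeds.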
2,770
Physics with Calculus/Mechanics/Velocity and Acceleration. Conservation of mass: The total mass in a closed system remains constant. Mass cannot be created or destroyed[*]. It can, however, change forms. Density is defined as mass divided by volume: formula_1 [*] Usually. There are exceptions in some extreme situations.
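The density formula itself appears above only as a formula_1 placeholder. Written out in LaTeX, with ρ for density, m for mass and V for volume, and with one litre of water added purely as an illustrative number:

\rho = \frac{m}{V}, \qquad \rho_{\text{water}} \approx \frac{1\,\mathrm{kg}}{10^{-3}\,\mathrm{m^3}} = 1000\,\mathrm{kg\,m^{-3}}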
80
Conplanet/Geography. Conworld : Conplanet : Geography As with most conplanet creation, what you will do here depends on the degree of realism you want. This page assumes that you are going for a high level of realism; diversions from this are at your own discretion. Oceans and Continents. On the Earth, the only real example we have of large bodies of water, ocean formation is tied closely with volcanism and tectonics. It appears that oceans are largely formed through water vapour expelled by volcanoes, as well as possibly from infalling comets. The exact shapes of the continents are seemingly random, but there do seem to be tendencies: Prevailing Currents and Winds. Having a good understanding of prevailing winds will also help you to model your planet's weather. Mountain Ranges. Mountains form on converging plate boundaries. Coastal Ranges are created by subducting oceanic plates thrusting up more magma below the crust. Volcanic peaks are formed by diverging or converging boundaries where magma is channelled through the crust. Age of mountains will have an effect on their appearance. For example: Sharp, angular summits will not exist in an ancient, eroded mountain chain, but will form in a newer, more active converging boundary. Rivers. Notable characteristics about rivers: That is why several important rivers start in the Himalayas and flow through India to the Pacific (the nearest ocean) rather than flowing through central Asia to the Arctic Ocean. (This is not exclusively true, though; the Mississippi, for example, starts from a rather nondescript lake in Minnesota and flows to the Gulf of Mexico, all the while going through relatively dry land. However, the Mississippi still travels downslope, and does not go to the relatively more distant Hudson Bay.) Common mistakes of conworld creators in placing realistic rivers are: Elevation. That the height of certain geographical features varies widely from those of others seems self-explanatory, but it is a feature of Congeography that many conworlders forget to include. Knowing the elevation of certain areas on a given conmap can help with mapping rivers (as mentioned above), modeling a place's weather, and can even indirectly affect the culture of that area (due to advantages or disadvantages in location).
Distributed Systems. Memory model. Every process in Unix has an address space that looks like this: global vars—text segment—heap—stack Remember the low memory is at the top, and the high memory is at the bottom. So when items are added to the stack, and it grows upwards, each element has a lower memory address. Unix processes. fork() is your friend. You should know what this does: main() int i, cpid; cpid = fork(); if (cpid < 0) exit(1); else if (cpid == 0) { /* this is the child process. */ for (i=0; i<1000; i++) printf("child: %d\n", i); exit(0); else { /* this is parent process */ for (i=0; i<1000; i++) printf("parent: %d\n", i); waitpid(cpid); return 0; The call to fork creates a new process with an entire new address space. That means a copy is made of everything: new global vars, new text code, new heap, and new stack. Even object dynamically allocated with malloc are copied. Remember that the pointers to those dynamically-allocated objects are set based on relative addressing, so even the new copied memory is allocated somewhere else, the pointers into the heap still work. process states. The four most popular UNIX process states are ready, waiting, running, and zombie. The first three are easy. Zombie processes are child processes that have completed and are waiting for their parents to clean them up. Usually parents clean up completed (zombie) child processes by calling wait or waitpid. If the parent process dies before the child process finishes, eventually the root process will come around and clean them up. Pipes. simple example. So, after fork(), if each process has its own memory space, how to set up communication? With pipes! Each process may have its own file descriptor table, but they still reference the same files. Example pipe code, where child reads and parent writes: "NOTE: feel free to add in all that error-checking goodness, but try to keep the point obvious" int p[2]; pipe(p); /* call by address. */ cpid = fork(); if (cpid < 0) exit(1); /* child process */ if (cpid == 0) { close(p[1]); /* child won't need to write, so close. */ read(p[0], buf, len); close(p[0]); _exit(0); /* read man 3 exit and man _exit to understand the difference. */ /* parent */ else { close(p[0]); /* parents can be so close-minded! */ write(p[1], buf, len); close(p[1]); waitpid(cpid); Study and understand the above code. Implementing a very simple shell with pipes. We've all done stuff like this at the command line: $ ls -l | sort | head But how does that work? In MS-DOS, this functionality was implemented sort of like this: ls -l > /tmpfile1 sort /tmpfile1 > /tmpfile2 head /tmpfile2 The three tasks were run sequentially, rather than concurrently. This is bad if the output of the first program is enormous, and this setup certainly can't handle an infinite stream of data. You can implement the ls -l | sort | head stuff in C. To keep it manageable, the following code implements "ls | wc -l" instead: int pipes[2]; pipe(pipes); int pid = fork(); if(pid < 0) /* Check for error */ fprintf(stderr, "fork unsuccessful."); exit(1); if(0 == pid) /* We are the child */ /* * We don't need the writing end of the pipe. This closes it. close(pipes[1]); /* * Duplicate our output pipe to file descriptor 0. * This means that anything that would normally be read from stdin * is now read from the pipe. dup2(pipes[0], 0); /* * We've duplicated the pipe, so there is no need to leave this copy * hanging around. close(pipes[0]); /* * Execute the "wc -l" command. This replaces our process. 
execlp("wc", "wc", "-l", (char*)0); else /* We must be the parent */ /* * We don't need the reading end of the pipe. This closes it. close(pipes[0]); /* * This duplicates the pipe into stdout. This means that standard * output is redirected into the pipe. dup2(pipes[1], 1); /* * Since we've duplicated this end of the pipe, we won't need it * anymore. close(pipes[1]); /* * We're done setting up. This replaces us with the "ls" part of the * command. execlp("ls", "ls", (char*)0); "TODO: talk more about how execv works." Implementing I/O redirection with pipes. Another classic: ls -l > out.txt How can you do? "TODO: finish" err? Threads. Threads share memory, processes don't. They both allow programs to do multiple things simultaneously (concurrently). Most of the time on Linux, when people talk about threads, they're really talking about POSIX threads, aka pthreads. Simple pthread example. Example pthread code: #include <stdio.h> #include <pthread.h> #include <unistd.h> int i; /* i is global, so it is visible to all functions. */ static void *foo() int k; for (k=0; k<10; k++) { sleep(1); printf("thread foo: %d.\n", i++); return NULL; static void *bar() int k; for (k=0; k<10; k++) { sleep(1); printf("thread bar: %d.\n", i++); return NULL; int main() { int x; pthread_t t1, t2; x = pthread_create(&t1, NULL, foo, NULL); if (x != 0) printf("pthread foo failed.\n"); x = pthread_create(&t2, NULL, bar, NULL); if (x != 0) printf("pthread bar failed.\n"); pthread_join(t1, NULL); pthread_join(t2, NULL); printf("all pthreads finished.\n"); return 0; This is a slice of the output: $ ./pthreadfun thread foo: 0. thread bar: 1. thread foo: 2. thread foo: 18. thread bar: 19. all pthreads finished. Hopefully the main point is jumping out at you. Foo and bar alternately increment up the global integer i. Exercise. Rewrite the above code to use fork() and create two processes rather than two threads. Then find what is the maximum value of i and explain the difference with this threaded version. Kernel threads vs. user threads. pthreads are kernel-space threads, meaning that the OS scheduler is aware of the multiple threads. Each thread has an entry in the OS schedule table. There are also user-level threads, where the kernel is unaware of the threads. The process must handle scheduling itself. User-level threads require two-level scheduling: OS scheduler handles all processes Process handles all threads. One user-level thread can block all the other threads for the process if the programmer isn't careful. If a user-level thread makes some blocking system call, then the OS will suspend that process until that system call returns. User threads have the advantage of allowing switching between threads in a process much more quickly than kernel-level threads. The process does the context-switching itself. Solaris hybrid threads. Sun Solaris version 2.3 offered hybrid threads which attempt to combine the speed of user-level threads with the advantages of kernel-level threads. The hybrid threads minimize the OS scheduling cost of switching between threads at the kernel level, but they also allow a thread to make a blocking system call without blocking the entire process. The system allowed for threads and lightweight processes. The OS scheduler only saw the lightweight processes. Each thread is attached to a lightweight process. Threads can share a process, so one application with multiple threads may appear to the OS scheduler as a single process. 
In a program with two threads both bound to a single process, if thread 1 makes a blocking system call, then the OS will block the process. Meanwhile, in order to allow thread 2 to still run, the OS will create new process, detach thread 2 from the first process, and then attach it to the new process. critical sections and mutexes. If threads are going to share vars, you need to be careful they don't both try to alter memory simultaneously. Consider this classic C code: i++; This looks like one operation (add one to i), but it translates to three operations in assembly code: lda i; load the value stored in var i into the eax register. inc; increment up the eax register. sta i; store the value in the eax register into the var i. "Read the C programming appendix in this book for more information on how to convert a C program to assembly using gcc." Since threads share memory, it is possible for one thread to start to do something, then get suspended by the OS scheduler. Imagine if two threads exist and i is a global integer, initially set to 99. Each thread wants to increment up i by one, so each thread will do the three assembly steps above. After we are finished, we expect to find i set to 101. First thread 1 starts: lda i; eax register holds 99. inc i; now, eax holds 100. Now, imagine at this point the OS scheduler suspends thread 1 and starts thread 2: lda i; load 99 from i into eax. This is bad!!! inc i; increment eax to 100. sta i; store 100 into i. And then, the OS allows thread 1 to finish: sta i; store eax into i. but eax only has 100! So now you see how two threads accessing common variables can be risky. The worst part about this bug is that it will be inconsistent and hard to reproduce. The i++ statement is called a critical section because we need to makes sure that only one thread has access to the variable in that section. One solution is to use some kind of lock, like this, in each thread: lock(ilock); i++; unlock(ilock); Then if thread1 gets suspended, and thread2 starts, it will block on the lock(ilock) call. To do this for real, we can use mutexes. A mutex is like a semaphore that only has values of 1 or 0. The threads in this code will lock and unlock a mutex before trying to increment up i, so you avoid the above problem. #include <stdio.h> #include <pthread.h> pthread_mutex_t ilock = PTHREAD_MUTEX_INITIALIZER; int i; static void *f1() pthread_mutex_lock(&ilock); i++; pthread_mutex_unlock(&ilock); return NULL; static void *f2() pthread_mutex_lock(&ilock); i++; pthread_mutex_unlock(&ilock); return NULL; int main() pthread_t t1, t2; int x; i = 99; x = pthread_create(&t1, NULL, f1, NULL); if (x != 0) printf("pthread foo failed.\n"); x = pthread_create(&t2, NULL, f2, NULL); if (x != 0) printf("pthread bar failed.\n"); pthread_join(t1, NULL); pthread_join(t2, NULL); printf("all pthreads finished.\n"); printf("i set to %d.\n", i); return 0; That's it! Another solution to this problem is to use a "increment" instruction that really is atomic. (The "lock" and "unlock" functions use just such an atomic instruction). Such atomic functions (and how they can be used to avoid data corruption and deadlocks) are discussed at . Producer and Consumer model. This hobgoblin shows up all the time in the real world. The chef makes bowls of soup for the diner. The chef has to wait for the diner to ask for soup, and the diner has to wait for the chef to announce the soup is ready. If you ain't careful, you can end up with a deadlock which is just as nasty as it sounds. 
A deadlock involves two processes (p1 and p2) and two resources (r1 and r2). Each process needs exclusive access to both resources to finish. The deadlock happens when p1 holds r1 and waits for r2 to be released, and p2 holds r2 and waits for r1. As you can see, neither process can continue. This pseudocode illustrates how to set up a producer/consumer system and not fear deadlocks: static soupbuffer buff; semaphore moreplease, soupisready; producerthread() lock(moreplease); wait until the kids ask for more. cooksoup(buff); refill the soup pot. unlock(soupisready); announce that soup is ready. consumerthread() lock(soupisready); wait until the soup is ready. eatsoup(buff); empty the soup pot. unlock(moreplease); ask for more soup. Shared memory. System V provides shared memory. One process can allocate some memory as shared memory, then other processes can attach to that memory, and these processes can communicate. one process can use shmget to allocate memory. then that process can use shmat to aim a pointer at that memory. another process can use shmat to attach a pointer to that same memory. shmdt will detach the pointer from shared memory. shmctl can be used to free the allocated memory. You can use codice_1 to check on any allocated shared memory, which is a nice way to check if your program freed all allocated memory. codice_2 will delete allocated shared memory. Signals. Signals are neat. When you hit Ctrl+C to kill a process, you're really sending SIGINT to that process. Similarly, Ctrl+Z sends SIGTSTP and `kill -9 "pid"` sends the 9th signal, SIGKILL. The full listing of available signals is in the seventh section of the manual pages (`man -s 7 signal`). signal handlers. When a process receives a signal, the corresponding function is invoked. In the case of SIGKILL, the function causes the code to exit. Many of these signals can be trapped and reassigned to a custom function as illustrated in the following code: #include <stdio.h> #include <signal.h> #include <unistd.h> void sighandler(void) printf("i am un-SIGINT-able!\n"); int main() signal(SIGINT, sighandler); perror("signal"); while (1) { printf("sleeping...\n"); sleep(1); return 0; Compile and run the program above and then try to kill it with Ctrl+c. Now, try to kill the program with `kill -s SIGKILL "pid"`. When Ctrl+c is pressed, the process is sent the SIGINT signal; however instead of terminating, the custom function sighandler is invoked. Since the SIGKILL signal is not trapped, the code exited normally. The signal() function will not let you write a handler for SIGKILL. In HP-UX, after the signal arrives, the signal handler is forgotten for that signal. So, often inside the signal handler, it re-registers it self as the handler: void sighandler(void) printf("i am un-SIGINT-able!\n"); signal(SIGINT, sighandler); In Linux, this is not necessary. sigalarm. The SIGALRM signal can be raised by the alarm() function, so it's a good way to generate a signal inside your program. The below code illustrates: signal(SIGALRM, onintr); alarm(3); while (1) ; /* run forever. */ void onintr() printf("3 second violation.\n"); alarm(3); This program will run until you kill it. Signals and setjmp/longjmp. This code calls the SIGALARM handler when the process receives the SIGALARM signal, and then inside the handler, jumps out of the handler without returning. jmp_buf env; main() setjmp(env); signal(SIGALRM, onintr); void onintr() printf("SIGALRM handler...\n"); longjmp(env); There's an issue that might not be obvious. 
After a process gets a signal, then it calls the signal handler. If another signal comes in while the first signal is still being processed, the process will ignore that second signal. The OS "blocks" the signal while the handler is running. When the handler finishes, as part of its return, the OS allows the process to get new signals. If we jump out of a function with longjmp, then the signal handler will never return, and the process will never hear any new signals of that type coming in. However, don't worry! You can manually unblock the signal and then jump out. This alternate handler shows how: void handler() sigset_t set; sigempty(&set); sigaddset(&set, SIGALRM); sigprocmask(SIG_UNBLOCK, &set, NULL); printf("3 second\n"); longjmp(env); You can use the functions sigsetjmp and siglongjmp to jump out of a signal handler and optionally reset the signal mask. The below code shows an example. You'll have to use Ctrl-C to end the program: #include <stdio.h> #include <signal.h> #include <setjmp.h> #include <unistd.h> sigjmp_buf env; void sh() printf("handler received signal.\n"); siglongjmp(env, 99); int main () int k; k = sigsetjmp(env, 1); if (k == 0) { printf("setting alarm for 4 seconds in the future.\n"); signal(SIGALRM, sh); alarm(4); while (1) { printf("sleeping...\n"); sleep(1); else { printf("k: %d.\n", k); alarm(3); printf("set alarm for another 3 seconds in the future.\n"); while (1) { printf("sleeping...\n"); sleep(1); return 0; Signals and pause. If you want to wait for an event, but don't want to resort to some kind of spin lock, you can use the codice_3 function to suspend a running process until a signal occurs. Example code: #include <signal.h> #include <unistd.h> #include <stdio.h> void sh() printf("Caught SIGALRM\n"); int main() signal(SIGALRM, sh); printf("Registered sh as a SIGALRM signal handler.\n" "BTW, sh is %d and &sh is %d.\n" , (int) sh, (int) &sh); alarm(3); printf("Alarm scheduled in three seconds. Calling pause()...\n"); printf("RAHUL IS IN WIKIPEDIA WEBSITE AT CHITKARA UNIVERSITY NEAR CHANDIGARH"); pause(); printf("pause() must have returned.\n"); return 0; Also notice that the second print statement casts the function sh to an integer, and also casts the address of sh as an integer. Here's the output of the program: $ ./pausefun Registered sh as a SIGALRM signal handler. BTW, sh is 134513700 and &sh is 134513700. Alarm scheduled in three seconds. Calling pause()... Caught SIGALRM pause() must have returned. Notice that sh after being cast to an integer is the same as the address of sh. That 134513700 is the memory address in the text section of the sh function. Although it may seem a little strange at first, from the OS point of view, functions are really just another kind of data. Message passing. message passing is distinct from communication via pipes because the receiver can select a particular message from the message queue. With pipes, the receiver just grabs some number of bytes off the front. The type arg allows priority scheduling. If the 4th arg of msgrcv is > 0, then system will find the first message with the type equal to that specified 4th arg. If type is < 0, then system will find the first message with the lowest type where type is below the absolute value of type. If we have these messages: Name:Type A:400 B:200 C:300 D:200 E:100 Then type -250 would return B. The following two programs show message passing. 
mpi1.c: #include <sys/types.h> #include <sys/ipc.h> #include <sys/msg.h> #include <stdio.h> #include <string.h> #include <unistd.h> #define KEY1 9870 #define KEY2 7890 #define PERM 0666 typedef struct mymsgbuf int type; char msg[64]; } mtype; int main() int msg1, msg2, i; mtype m1, m2; msg1 = msgget(KEY1, PERM | IPC_CREAT); msg2 = msgget(KEY2, PERM | IPC_CREAT); msgrcv(msg2, &m2, 64, 0, 0); printf("mpi1 received msg: \"%s\" from message queue %d.\n", m2.msg, msg2); m1.type = 20; strcpy(m1.msg, "I doubt"); printf("mpi1 about to send \"%s\" to message queue %d.\n", m1.msg, msg1); i = msgsnd(msg1, &m1, 64, 0); printf("mpi1 sent \"%s\" to message queue %d with return code %d.\n", m1.msg, msg1, i); sleep(1); msgctl(msg1, IPC_RMID, (struct msqid_ds *) 0); msgctl(msg2, IPC_RMID, (struct msqid_ds *) 0); return 0; mpi2.c: #define KEY1 9870 #define KEY2 7890 #define PERM 0666 typedef struct mymsgbuf int type; char msg[64]; } mtype; int main() int msg1, msg2, i; mtype m1, m2; m1.type = 10; strcpy(m1.msg, "Indians will win..."); /* attach to message queues. */ msg1 = msgget(KEY1, PERM); msg2 = msgget(KEY2, PERM); if (msg1 == -1 || msg2 == -1) perror("mpi2 msgget"); /* send message to mpi1. */ printf("mpi2 about to send message \"%s\" to message queue %d.\n", m1.msg, msg2); msgsnd(msg2, &m1, 64, 0); /* send message to mpi2. */ strcpy(m2.msg, "..."); msgrcv(msg2, &m2, 64, 0, 0); i = msgrcv(msg1, &m2, 64, 0, 0); if (i == -1) perror("mpi2 msgrcv"); printf("mpi2 received msg: \"%s\" from message queue %d with return code %d.\n", m2.msg, msg1, i); /* cleanup. */ msgctl(msg1, IPC_RMID, (struct msqid_ds *) 0); msgctl(msg2, IPC_RMID, (struct msqid_ds *) 0); return 0; Now compile mpi1.c into mpi1 and compile mpi2.c as mpi2: $ gcc -o mpi1 mpi1.c $ gcc -o mpi2 mpi2.c Now run mpi1 in background and then run mpi2: $ ./mpi1 & [1] 29151 $ ./mpi2 mpi2 about to send message "Indians will win..." to message queue 2195457. mpi1 received msg: "Indians will win..." from message queue 2195457. mpi1 about to send "I doubt" to message queue 2162688. mpi1 sent "I doubt" to message queue 2162688 with return code 0. mpi2 received msg: "I doubt" from message queue 2162688 with return code 64. [1]+ Done ./mpi1 All about the stack. This section describes what happens in the stack and registers when one function calls another function and passes a few parameters. Imagine we have this C code: void foo(x, y, z) int i, j; int a, b, c; a = 4; b = 5; c = 6; foo(a, b, c); Behind the scenes, first c gets pushed, then b, then a, then the return address of the calling function. This is what the stack looks like (the stuff at the top was the most recently pushed object): -8 j -4 i 0 ebp +4 return address +8 a +12 b +16 c segfaults revealed! We've all seen this lovely error: $ ./try Segmentation fault Most of the time, you find out something like is the problem: char ss[3]; strcpy(ss, "this is a way too long string!\n"); After this command: char ss[3]; the stack can fit three characters into one 4-byte word. -4 ; this is the space (4 bytes, 1 word) allocated for ss[3]. 0 ebp +4 r.a. Now watch what happens when the strcpy() command fills in the stack: -4 't', 'h', 'i' ,'s' ; we can fit the first 4 chars here, but the rest spill over. 0 ' ', 'i', 's', ' a' ; ACK! we just overwrote the old base pointer! +4 ' ', 'w', 'a', 'y', ; now we're doomed; when this function calls it will try to hop to ; the return address stored here, but that has been overwritten. So hopefully it is more clear what a segfault is. 
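If you want to watch this happen, the fragment above can be wrapped into a tiny, deliberately broken program. Modern gcc inserts stack-protector checks by default, so you may need -fno-stack-protector to see the raw overwrite rather than a tidier abort message; either way the program should not survive.

/* smash.c -- deliberately overruns a 3-byte stack buffer.
 * Build: gcc -fno-stack-protector -o smash smash.c
 * Running it typically dies with a segmentation fault, because the
 * saved frame pointer and return address get clobbered. */
#include <string.h>

int main()
{
    char ss[3];
    strcpy(ss, "this is a way too long string!\n"); /* writes far past ss[2] */
    return 0; /* the return uses the (now garbage) saved return address */
}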
Your program is trying to hop to a memory address outside the process memory, and the OS puts the smack down. Networking. OSI reference. 7 layers: In Unix, the first three (application, presentation, and session) are often bundled together. Imagine that a network looks like this: A---Router1---Router2---B. If a process (application) on A wants to communicate with a process on B, this is sort of what happens: The transport layer adds A's port and B's RPC server port. The network layer adds A's IP address and B's IP address. Any communication from A must go through the two routers first. The data link layer adds A's mac address and router 1's mac address. The physical layer converts the frame into the physical signal. When the message is received by router 1, the frame is read in and sent to the bottom of the OSI model. Router 1's mac address is removed and replaced with Router 2's mac address. The UNIX file system. Inodes. Each directory is actually a file. That file lists the names of files and subdirectories in that directory. It associates names to inodes. Each inode has the following data: Addressing. Since each inode has 10 direct address pointers, if a file is small enough to fit on 10 data blocks or less, then the inode can use direct addressing. If we assume that we use 4 bytes to specify the disk address, then we can have 232 different addresses. If we set the data block size to 4 kb, then one data block can hold 1000 addresses. So, with single indirect addressing, we can point to a file that uses 1000 data blocks of data. Therefore, the maximum allowed file size for single indirect addressing (assuming 4kb data blocks and 4byte addresses) is 4 megabytes (1000 data blocks * 4 kb per data block). Easy formula: data block size / disk address = max file size. Of course, in double or triple (or quadruple, quintuple, etc.) addressing, you just gotta take the analysis out further. There are advantages and disadvantages of small vs. large data blocks. Lots of small files and big data blocks causes low utilization. For example, your codice_4 file may only have 300 bytes of data, but it has to use at least one data block, so if your data block is 4kb in size, well, that's a lot of wasted space. However, small blocks lead to big files being scattered out across lots of data blocks, and so the hard drive reader has to skip all over the place gathering up the data. Large data blocks reduce fragmentation. As the number of blocks per file increases, disk IO cost increases. Linking. Each directory has a table that maps file names to inode numbers. A file can have any number of names associated with it. Each inode keeps track of its link count. The command codice_5 edits the directory and removes the entry that maps foo.txt to a particular inode. If that inode has zero links, then it is deleted. You might want to read the man page for ln at this point to understand the difference between soft and hard links. RAID. Compared to a monolithic system, a distributed system can have a much lower chance that the system will fail today—even though the distributed system has a much higher chance that one part will fail today. High-availability distributed systems use techniques inspired by RAID (redundant array of inexpensive disks) to tolerate a failure in any one part, without loss of data or functionality. For more details, see The set-user-ID bit. The set-user-ID bit (sometimes shortened to setuid or simply suid) alters how the OS handles permissions. 
For example, the codice_6 command allows a user to change her login shell, like from bash to tcsh. The login shell for each user is stored in /etc/passwd, but users don't have write access. So how can a user run chsh and change that file? The answer is with suid bits. Look at the permissions on chsh: $ ls -l `which chsh` -rwsr-xr-x 1 root root 28088 2004-11-02 16:51 /usr/bin/chsh So, what is that s in there for? That, my friend, is the set-user-ID bit, which means that this program runs as if it were executed by root. A program with the setuid bit set (or the similar setgid bit) will be run with the privileges of the program's owning user or group respectively. Remote Procedure Calls. Remote procedure calls (RPC) are a way hiding all the details of running a process on a far away remote machine. If A wants to make a remote procedure call on host B, then A needs B's RPC server port. An RPC frame includes: The command codice_7 will list what RPC services are running on the machine. Generally developing RPC program involves eight steps: rpcgen -C rdb.x will generate 4 output files: codice_9 gcc -o rdb rdb.c rdb_clnt.c rdb_xdr.c gcc -o rdb_svc rdb_svc_proc.c rdb_svc.c rdb_xdr.c Distributed Algorithms. Properties of distributed algorithms A simple example of how non-global clocks cause problems: Synchronization. Physical clocks agree with "real time" like Universal Coordinated Time. Logical clocks don't worry about being accurate to some global standard. Instead, all machines in a system try to agree with each other. Keeping a local physical clock synchronized is challenging because even if the machine can contact the authoritative time server, there is some propagation delay. How often does a machine need to request an update from a time server? Assume that the machine has a clock formula_1 that increments at rate formula_2. The UTC clock formula_3 increments at rate formula_4. If formula_5 then the local physical clock C is perfect and doesn't need to be updated. If formula_6 then the local physical clock C runs at a faster rate. If formula_7 then the local physical clock C runs at a slower rate then the time server. formula_8 represents the maximum drift rate specified by a manufacturer. formula_9 formula_10 represents the maximum tolerable error. every formula_11 period, a machine must request an update. As the error tolerance formula_10 increases, update frequency falls. As drift rate formula_8 goes up, the update frequency goes up also. Appendix A: C language overview. setjmp and longjump. Use these to do what C++, python, Java, etc. does with throwing exceptions. First call setjmp to set a location to jump to. Then other functions can call longjump to exit the current function and how to the location specified in setjmp. functions with variable-length arguments. In other words, how does printf(...) work? conditional compilation. One example: main() #ifdef DEBUG printf("this is a debug statement!\n"); #endif $ gcc -DDEBUG prog.c; ./a.out this is a debug statement! $gcc prog.c; ./a.out In the second compilation, the printf block ain't included. Another similar trick is used to prevent the same header file from being included repeatedly. It is common for one C program to have multiple .c files which include multiple .h files. You can put this code at the top of your .h file to prevent the same .h file from being included redundantly: #ifndef _FOO_H #define _FOO_H 1 /* function foo is a worn-out cliche. */ int foo(); #endif make and makefiles. 
Four type of statements are allowed in Makefiles: macros, dependency rules, commands, and comments. Here's a trivial example makefile: CC = /usr/bin/gcc CFLAGS = -Wall -pedantic -g #saves me the trouble of typing all those flags. proxy: proxy.c proxy.h $(CC) $(CFLAGS) proxy proxy.c debug-proxy: proxy.c proxy.h $(CC) $(CFLAGS) -DDEBUG -o proxy proxy.c clean: /bin/rm -f *~ httpd a.out httpd.log proxy proxy.log CACHE.* I can run any statement by typing make and the name. Or by default, make will run the first rule it hits if it gets no arg. $ make # runs make proxy $ make clean #runs /bin/rm -f ... Macros are accessible after being declared by either $(CFLAGS) or ${CFLAGS}. $CFLAGS is not valid. If the macro is one letter, like A = a.out Then $A is valid. But you should stay away from that anyway, because lots of the single-char macros already have meaning. Each tab underneath a dependency rule starts a new shell, so if you want to change shell stuff, you gotta do it all on one line: bogus: echo $$PWD; cd ..; echo $$PWD; echo $$PWD; The output: $ make bogus echo $PWD; cd ..; echo $PWD /home/student/wiwilson/cis620 /home/student/wiwilson echo $pwd BTW, if you want to suppress the default echoing of the commands, put an @ at the beginning: bored: echo "meh" @echo "snah" And here's the output: $ make bored echo "meh" meh snah How one make command can recursively call make in subdirectories: SRCDIR = src1 src2 OBJ = src1/f1.o src2/f2.o ${SRCDIR}: /tmp cd $@; make This confused me at first. The $@ symbol refers to the target for the rule, in this case, the ${SRCDIR}. You can use makedepend inside a makefile to resolve all that hairy .h stuff automatically: CFILES = main.c foo.c bar.c depend: makedepend $(CFILES) # don't edit below here. "TODO: show example before and after Makefile." If you have lots and lots of files and you want to compile them all into objects, this rule works: .c.o: $(CC) -c $< Converting C to assembly. gcc -S try.c This will convert your program to assembly code, using GNU syntax. gcc will create a file called try.s gcc -S -masm=intel try.c This will convert your code to assembly code using intel syntax. This may only work on intel CPUs. You can convert try.s to an executable like so: gcc try.s And you can even compile the executable for use with gdb: gcc -g try.s And then you can step through the underlying assembly code in gdb. Hurray! Appendix B: Solutions to Exercises. 4.1.1 POSIX Threads. The task for this exercise was to rewrite the example code to use fork() rather than threads. You should have something similar to this: #include <stdio.h> #include <unistd.h> #include <sys/types.h> int main() /* * The name of our process. This lets us tell the difference when we * do output. const char* name; /* * Our counter variable. Because this is all in one function, it need not * be global. int i; i = 0; pid_t pid = fork(); if(pid == 0) name = "Child"; else name = "Parent"; /* Our output loop. */ int j; for(j = 0; j < 10; j++) sleep(1); printf("%s process: %d\n", name, i); i++; return 0; You were also asked in what way the output was different, and why. In the version that made use of POSIX Threads, the counter was incremented twice each second: once by each thread. When fork() was used, it was only incremented once. The reason for this is that when fork() is called, the child process writes to its own copy of the counter. When threads are used, however, both threads share their memory space. 
This means that they are both writing to the same copy of the counter, and it is incremented twice. Contributors. "Please, give yourself credit if you've made any contributions!" Started this textbook in Fall 2004, while enrolled in a graduate comparative operating systems interfaces course at Cleveland State University.
Applicable Mathematics/Systems of Equations. A good deal of real world problems can be represented by various equations. Often, we will have more than one equation for a given problem. =Substitution= Substitution uses letters, such as x, y, or z, as representations of unknown values. These letters are used in both equations and expressions as tools to solve many different types of problems. In some cases, the value of the letter is known. If so, by using the substitution method, a numerical value replaces the letter given. Then, after the letter is replaced by a number, the expression or equation is simplified. Introductory Examples. People & Feet. In a room of people we know there are twice as many feet as people, and we can represent this with the equation formula_1, where formula_2 represents the number of feet, and formula_3 represents the number of people. Knowing there are 20 people, then we can make another equation formula_4. Listing out the two equations we have: So we know there are 40 feet. Houses & Floorspace. Other situations can get more complex though. Suppose that your neighbor's house is 1.5 times as large as yours, but if you don't count your neighbor's 50 square unit basement, they are the same size. How many square units are the respective houses? Let formula_5 be the area of your entire floor, and formula_6 the area of your neighbor's floor. The problem can represented by the two equations. Here, you have two equations as before, but this time they both have two variables, while in the last example, one equation had one variable and the other had two. However, you can still use substitution though you just have to substitute with the equation's expression instead of a constant. Now we substitute formula_6 in the second equation for the right-hand side of the first equation: Now we can substitute the area of your floor into the first equation to get the area of your neighbours floor: Systems of Linear Equations. Above we covered two real world examples (albeit simplified) that systems of equations are useful for solving. Equations composed of two or more linear functions are called Linear equations. Sets of these linear equations are called System of linear equations. Simply put, linear equations can only be solved if the number of Variables is equal to or less than the number of equations provided. The most common way of solving systems of equations is to use substitution, as shown above. However, we can represent the equations using matrices, allowing us to see patterns easier and perform operations more easily. Lets start with a set of 3 equations: The matrix on the left represents the coefficients of the variables, and is called a coefficient matrix. On the right the right-hand side of the equation is included in the matrix, giving us what is called an augmented matrix. Solving the above equation would look like the table on the left below normally, whereas the matrix solution would look like the table on the right.
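To make the matrix notation concrete, the two-equation house example from above can be put into augmented-matrix form. Writing y for the area of your floor and n for your neighbor's (these letters are chosen just for this sketch), the two equations are

n - 1.5y = 0
n - y = 50

and the corresponding augmented matrix, before and after subtracting row 1 from row 2, is

\left[\begin{array}{rr|r} 1 & -1.5 & 0 \\ 1 & -1 & 50 \end{array}\right]
\longrightarrow
\left[\begin{array}{rr|r} 1 & -1.5 & 0 \\ 0 & 0.5 & 50 \end{array}\right]

The second row now reads 0.5y = 50, so y = 100 and n = 1.5(100) = 150, the same answer that substitution gives. The same idea scales up to systems of three or more equations.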
Beginning Mathematics/Logic and Deductive Reasoning. Logic has a long history, and is one of the central tenets of western thought. Going back to the ancient philosophers, logic has been the attempt to devise a system for ascertaining truth via thought. Logic falls under the category of deductive reasoning, meaning it is used to establish truth based upon already established truths, called postulates or axioms. The oldest and most influential axiom system would be from Euclid's Elements, the classic geometry text, which ranks as one of the most important books of all time. Euclid starts off his first book with 23 definitions and 5 postulates. The postulates basically describe what can be done with an unmarked ruler and a compass. For instance, the first postulate says that a line can be drawn between two points. The second says that lines can be extended, and the third says that circles can be drawn with a center and a given radius.
General Astronomy/The Celestial Sphere. If you look out from an empty field into a dark sky, you will get the impression that you are standing on a flat plate, enclosed by a giant dome. Depth perception fails us for the distant objects we see in the sky. This creates the appearance that all of the stars have the same distance. The stars appear to move together across the sky during the night, rising in the east and setting in the west, as if they are affixed to the inside of a dome. Because of this, many ancient civilizations believed that a dome really did enclose the Earth. Only a few centuries ago astronomers came to realize that the stars are actually very far away, scattered throughout the Milky Way Galaxy, rather than attached to the inside of a vast sphere. The old idea remains useful, however. The concept of the celestial sphere provides a simple way of thinking about the appearance of the stars from Earth without the complication of a more realistic model of the universe. Working with the celestial sphere offers a convenient way of describing what we see from Earth. When we refer to the celestial sphere, we are imagining that everything we see on the sky is set on the inside of a huge spherical shell that surrounds the Earth. We will use the reference points of the celestial sphere as the basis for several coordinate systems used to place celestial locations with respect to one another and to us. The celestial sphere is an imaginary hollow globe that encloses the Earth. The sphere has no defined size. It can be taken to be infinite (or at least really big), with an infinitesimal Earth at the center. The observer is always taken to be at the center of the celestial sphere, even though the observer isn't at the center of the Earth. Our particular position among the stars gives us a particular view. Brighter stars appear closer; stars in nearly the same direction appear nearby each other, even if they are separated by great distances. Our first and most basic look out into the universe is completely stripped of any depth perception. The celestial sphere can be seen from either of two perspectives. In one perspective, the celestial sphere itself remains still while the Earth turns inside it. In the other perspective, the Earth stands still and the celestial sphere rotates once per day. To an observer on Earth, these two perspectives appear the same. As we think about how we would expect to perceive the rotation of the Earth, we can use this second perspective to guide us. Everything we see in the sky, we see as though projected onto the celestial sphere. The stars in the constellation Orion, for example, are at a variety of distances, but the differences are imperceptible to us on Earth. Orion's pattern would disappear if we could view it from any other angle or if we could perceive the depth, because the stars would project differently. Because depth perception is lost, measurements of size are much more difficult. The Sun and the Moon look about the same size in the sky, even though the Sun is really much larger. The Sun appears to be the same size as the moon because the Sun is much farther away simply because the Sun is both 400 times larger in diameter and 400 times farther away than the Moon. Although we can't easily measure the "physical" sizes of celestial objects, we can measure their "apparent" sizes. We do this by measuring the angle an object subtends in the sky. The Sun and the Moon, for example, subtend an angular diameter of half a degree. 
Most objects in the sky are smaller than this, so it is often convenient to use a smaller measure of angle. For this purpose, astronomers use arc minutes and arc seconds. There are sixty arc minutes in a degree, and sixty arc seconds in an arc minute. Angles this small are near or beyond the limits of ordinary human vision, but they become useful when using a telescope to make observations. For casual stargazing, observers think about much larger angles. You can easily measure these angles when stargazing by using your hand as your ruler. From arm's length, your index finger has a width of about one degree, your palm measures about ten degrees across, and your full finger-span, including your thumb, is about 25°. This can be useful for estimating the position of a star in the sky, or for gauging the angular separation of two stars. While the apparent movement of a star across the sky each night, with the celestial sphere, is great, the measurement of an object's movement across the Celestial Sphere as the object drifts through space, is called proper motion, and is measured in arc seconds per year. To begin thinking about the view of the sky from Earth, we will identify a few points of reference that are fixed to the ground and of importance to astronomers. Some of these are widely known from common experience. To any observer, regardless of location, these markers stay in the same positions relative to the observer. The zenith is always directly overhead, the horizon is always level, and so on. Observers standing at different places on Earth will have a different view of the sky. An observer in Singapore might see the Sun at the zenith while another observer in New York would not see the Sun at all. These reference points change with the location of the observer. There are also reference points that are fixed in the sky. These fixed reference points don't move with respect to the stars, but different observers see them in different positions. They are the basis for the fixed coordinate systems that we discuss later. For now, we will identify only the two most useful of these — the celestial poles and the celestial equator. The celestial equator is an extension of the Earth's equator onto the celestial sphere. If you stand on Earth's equator, the celestial equator will always be directly overhead and pass through the zenith. It will run from the East point up to the zenith and down again to the West point. Anywhere you stand on Earth, the celestial equator will intersect the East and West points on the horizon. The nearer you are to the equator, the nearer the celestial equator come to the zenith. At the North Pole or the South Pole, the celestial equator lines up with the horizon. Like the celestial equator, the celestial poles are an extension of the Earth's pole onto the celestial sphere. The North Pole extends out into space to create the North Celestial Pole. Likewise the South Pole creates the South Celestial Pole. In the Northern Hemisphere, only the North Celestial Pole is visible because the South Celestial Pole is below the horizon. In the Southern Hemisphere, only the South Celestial Pole is visible. At the equator, the North Celestial and South Celestial Poles would lie on the horizon where the meridian intersects the horizon. Polaris is called the "North Star." It can be found at the last star on the "handle" of the Small Dipper. The last two stars of the "cup" of the Big Dipper are called the Guardians (or "Pointers"), and point to Polaris in the sky. 
Polaris is special because the Earth's North Pole points almost exactly towards it. This means that Polaris will always appear to be due north to any observer, and it will always stay in the same position on the sky. Often, beginning stargazers assume that Polaris must be a very bright or prominent star. This is not really the case. Polaris is only remarkable because it is almost exactly in line with Earth's axis of rotation. Because of this, Polaris always remains at nearly the same place in the sky. For example, Shakespeare made reference to Polaris in the play "Julius Caesar": Though it must be pointed out that Shakespeare actually got it wrong. At the time Shakespeare wrote Julius Caesar Polaris was indeed the pole star but in Julius Ceasar's time Polaris was not the pole star. The fact that Polaris always stays in the same position due north has given it much fame. It also makes Polaris a useful reference point for navigation — Using geometry, it is easy to show that the angle Polaris or the celestial pole makes with the horizon is equal to the observer's latitude. In the diagram, the angle formula_1 is the observer's latitude. The pole and the equator are at right angles, so or formula_3 Since the angles in a triangle add to 180°, we know that When we combine these two equations, we have formula_5. The angles formula_6 and formula_7 are alternate interior angles, so and which means that the angle between the pole and the horizon is the same as the observer's latitude. This fact was once used by navigators at sea, who could easily find their latitude by measuring the position of Polaris. Like many things in astronomy, the celestial sphere can be very difficult to visualize because of its three dimensional geometry. A visit to a planetarium or a session under the night sky can be very helpful to you in developing a conceptual understanding of the celestial sphere. In the absence of the opportunity for these, it can be helpful to try to draw diagrams such as the one at the beginning of this section for yourself. To begin drawing a celestial sphere such as the one above, you only need to know the latitude of the observer. Then imagine that the spot where the observer is standing is the "top of the world"; draw circle for the earth, and draw an observer standing at the top. Now draw a much larger circle around that; this represents the celestial sphere. Since our observer is always on top of the Earth, the features on the celestial sphere that are defined relative to the ground will always be in the same position on the sphere. The zenith is the point directly above the observer's head, at the top of the celestial sphere. The next important reference is the horizon. The horizon will be horizontal on the diagram. Remember that the celestial sphere has no specific size relative to the Earth, regardless of how you've drawn it. Draw the horizon across the middle of the celestial sphere, so that it's center is the same as the center of Earth. Markers such as the horizon are always idealized, so it doesn't matter whether your observer's view of the sky is actually cut off at the position marked by the horizon. The next reference points we'd like to place are the North Celestial Pole and the South Celestial Pole. Think about what the orientation of the pole should be given the observer's latitude. If the observer is at the equator, the pole should go horizontally through the Earth. If the observer is at one of the poles, the pole should go through the Earth vertically. 
Extend the Earth's poles out to the celestial sphere and mark the intersections as the North Celestial Pole and the South Celestial Pole. If we're in the northern hemisphere, the North Celestial Pole will be above the northernmost point on the horizon, and the South Celestial Pole will be on the opposite side of the celestial sphere, below the horizon. If we're in the southern hemisphere, the situation is reversed. Remember to check that the angle the horizon makes with the pole is about the same as the observer's latitude. For any given latitude, one can build an appropriate celestial sphere. First, consider the sky in relation to the earth. Take the north and south poles and extend them into the sky; these become the north and south celestial poles. The Earth's equator can be projected outward to form the celestial equator. We'll get something that looks like the picture above. When you're done, you should have a celestial sphere very like the one at the top of this section. A celestial sphere forms the basis for the application of many coordinate systems. For example, the horizon and the celestial meridian together form the reference circles for giving the position of stars in terms of altitude and azimuth, making it easier for one to find them on the night sky. The celestial sphere is also a natural system for describing the motion of the sun. In order to explore these concepts, however, it is necessary to understand just how the celestial sphere changes for an observer at a given latitude. As we consider the daily rotation of the Earth, we'll see that your perception of the daily motion depends very much on your latitude. As you look at the sky, your mind will naturally identify obvious patterns. The Big Dipper and Orion are two very prominent groupings of stars, and others stand out all over the celestial sphere. These asterisms are guideposts to the night sky. You can use them to keep your bearings when you look at the sky. The appearance of the night sky has remained much the same for millennia. Many of the ancient civilizations across the globe invented stories about the sky. Often, the groups of stars are called constellations. Constellations have a very long history in astronomy, dating back thousands of years. Early in the twentieth century, a list of constellations was formally established by the International Astronomical Union, a widely recognized body of astronomers. The IAU identified constellations that would be used in astronomy and defined specific boundaries to unambiguously establish which constellations each star belonged to. It's easy to learn a few of the most prominent constellations so that you can find your way around the night sky. Beginning with a few easy-to-find landmarks you can find the rest by using familiar stars as guideposts. Another useful guide in the sky is the ecliptic. The ecliptic is an imaginary line in the sky that the sun draws. The ecliptic is even with the plane of the Earth's orbit around the sun; thus, all of the main planets and the moon should be found relatively close or on the ecliptic, because the solar system is mostly flat. Also, along the ecliptic are the 12 constellations of the zodiac. Thus, by finding some of the main zodiac constellations in the night sky, one can determine if certain objects they see may or may not be planets by whether or not they lie on the ecliptic.
General Astronomy/Coordinate Systems. Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system. On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way. The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth. The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension. A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects. Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the earth. Lines of latitude are called parallels. Lines of longitude aren't completely straight (they run from the exact point of the north pole to the exact point of the south pole) so they are called meridians. 0 degrees latitude is the Earth's middle, called the equator. 0 degrees longitude was tricky because there really is no middle of the earth vertically. It was finally agreed that the observatory in Greenwich, U.K. would be 0 degrees longitude due to its significant role in scientific discoveries and creating latitude and longitude. 0 degrees longitude is called the prime meridian. Latitude and longitude are measured in degrees. One degree is about 69 miles. There are 60 minutes (') in a degree and 60 seconds (") in a minute. These tiny units make GPS's (Global Positioning Systems) much more exact. There are a few main lines of latitude:the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator and it marks the temperate zone from the Antarctic zone. The Arctic Circle is an exact mirror in the north. The Tropic of Cancer separates the tropics from the temperate zone. It is 23.5 degrees north of the equator. It is mirrored in the south by the Tropic of Capricorn. Horizontal coordinate system. One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude and azimuth, thus called the Alt-Az or horizontal coordinate system. 
The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere. In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north. So east has az=90°, south has az=180°, west has az=270° and north has az=360° (or 0°). An object's altitude and azimuth change as the earth rotates. Equatorial coordinate system. The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination. Ecliptic coordinate system. The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator. The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point to ecliptic east. Ecliptic latitude is generally indicated by formula_1 , whereas ecliptic longitude is usually indicated by formula_2 . Galactic coordinate system. As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location. Our galaxy defines another useful coordinate system — the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust. Transformations between coordinate systems. One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. 
By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. The law of cosines is stated thus: To transform from horizontal to equatorial coordinates, the relevant formulas are as follows: where formula_6 is the right ascension, formula_7 is the declination, formula_8 is the local sidereal time, formula_9 is the altitude, formula_10 is the azimuth, and formula_11 is the observer's latitude. Using the same symbols and formulas, one can also derive formulas to transform from equatorial to horizontal coordinates: Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using the following formulae: where formula_6 is the right ascension, formula_7 is the declination, formula_1 is the ecliptic latitude, formula_2 is the ecliptic longitude, and formula_20 is the tilt of Earth's axis relative to the ecliptic plane. Again, using the same formulas and symbols, new formulas for transforming ecliptic to equatorial coordinate systems can be found:
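Written out explicitly, the standard forms of these relations are sketched below in LaTeX notation. Treat this as a reference sketch rather than the only valid form: it assumes azimuth A measured eastward from due north (as above) and the hour angle H = LST − α; other sign conventions appear in the literature. Here α is right ascension, δ declination, a altitude, φ the observer's latitude, β and λ the ecliptic latitude and longitude, and ε the obliquity of the ecliptic (about 23°26'). In the first line a, b and c denote the sides of a spherical triangle and C the angle opposite side c.

Spherical law of cosines:
\cos c = \cos a \cos b + \sin a \sin b \cos C

Horizontal to equatorial:
\sin\delta = \sin a \,\sin\phi + \cos a \,\cos\phi \,\cos A
\cos H = \dfrac{\sin a - \sin\phi \,\sin\delta}{\cos\phi \,\cos\delta}, \qquad \alpha = \mathrm{LST} - H

Equatorial to horizontal:
\sin a = \sin\phi \,\sin\delta + \cos\phi \,\cos\delta \,\cos H
\cos A = \dfrac{\sin\delta - \sin\phi \,\sin a}{\cos\phi \,\cos a}

Equatorial to ecliptic:
\sin\beta = \sin\delta \,\cos\varepsilon - \cos\delta \,\sin\varepsilon \,\sin\alpha
\tan\lambda = \dfrac{\sin\alpha \,\cos\varepsilon + \tan\delta \,\sin\varepsilon}{\cos\alpha}

Ecliptic to equatorial:
\sin\delta = \sin\beta \,\cos\varepsilon + \cos\beta \,\sin\varepsilon \,\sin\lambda
\tan\alpha = \dfrac{\sin\lambda \,\cos\varepsilon - \tan\beta \,\sin\varepsilon}{\cos\lambda}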
General Astronomy/Types of Galaxies. Estimates of the number of galaxies in the universe range from 10 billion to over 100 billion. According to Hubble's Law, distant galaxies are red-shifted, and the farther away a galaxy is, the greater its redshift; this means they are moving away from us. Young galaxies tend to be bluer, while old galaxies are redder. As spiral galaxies mature, the central bulge becomes red while the arms remain bluish.

Morphology and Classification. The Hubble Sequence. The most common form of classification for galaxies is based on a system which categorises them by their visible structure. This is known as the Hubble Sequence, and was developed by Edwin Hubble in the 1920s. Galaxies are organized in a form resembling a tuning fork, often called the Hubble Tuning Fork Diagram. It is typically drawn with elliptical galaxies on the left, lenticular galaxies in the middle, and two branches of spiral galaxies on the right: one for unbarred spirals, and one for barred spirals.

Elliptical galaxies, also known as "early-type" galaxies, have an ellipsoidal form, with a fairly even distribution of stars throughout. They are mostly featureless and have no visible disk. Examples of ellipticals include M87, whose black hole was the first to be imaged in high resolution, and ESO 383-76, which is one of the largest galaxies ever discovered. They are denoted by the letter "E", with the number giving the degree of elongation: "E0" galaxies are nearly spherical, while "E7" are greatly elongated.

Lenticular galaxies appear to have a disk-like structure with a central spherical "bulge" projecting from it, but they do not show any spiral structure. They were originally introduced as a theoretical intermediate class between ellipticals and spirals before being confirmed by observations. An example of a lenticular galaxy is NGC 2787. These are given the class "S0".

Spiral galaxies, also known as "late-type" galaxies, have a central "bulge" and an outlying "disk"; the disk is notable for having spiral "arms" within it, centered on the bulge. "Sa" galaxies have very tightly wound arms, while "Sc" galaxies are very loose spirals. Barred spiral galaxies have a similar sort of spiral structure, but instead of emanating from the bulge, the arms project out from the ends of a "bar" running through the bulge, like ribbons on either end of a baton. Again, "SBa" to "SBc" refer to how tightly wound these arms are. Examples of spiral galaxies include: the Milky Way, our home galaxy; Andromeda (M31), the nearest large galaxy to us; and the Pinwheel Galaxy (M101).

Finally, there are irregular galaxies, which show no clearly discernible or regular shape. Some examples of these include the Large and Small Magellanic Clouds, which can be seen from the Earth's southern hemisphere. These are given the class "Irr".

Links to Galaxy Evolution. Hubble based his classification on photographs of the galaxies through the telescopes of the time. He originally believed that elliptical galaxies were an early form which might later evolve into spirals; our current understanding suggests that the situation is roughly the opposite. More modern observations have given us a much fuller picture of these galaxy types. From this, astronomers have constructed a theory of galaxy evolution which suggests that ellipticals are, in fact, the result of collisions between spiral and/or irregular galaxies, which strip out much of the gas and dust and randomize the orbits of the stars.
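The numeral in an elliptical galaxy's class can be illustrated with a short sketch. By the usual convention the class is En with n = 10 × (1 - b/a), where a and b are the apparent major and minor axes; the axis ratios in the example calls below are made up purely for illustration.

def hubble_elliptical_class(major_axis, minor_axis):
    # n = 10 * (1 - b/a), truncated to an integer; E0 is round, E7 is highly elongated
    n = int(10 * (1 - minor_axis / major_axis))
    return "E%d" % min(n, 7)

print(hubble_elliptical_class(1.0, 1.0))   # E0: nearly spherical
print(hubble_elliptical_class(1.0, 0.3))   # E7: greatly elongated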
General Astronomy/Yearly Motions. Why do we have seasons? A little thought will suggest that it can't have much to do with the Earth's distance from the Sun, as that would affect the Southern and Northern Hemispheres at the same time. (In fact, the Earth is slightly nearer to the Sun around December than at other times of the year.) Why, then, are there seasons?

Every year the Earth completes one orbit of the Sun. We observe the effect as the change of the seasons and the movement of the constellations. Over the course of a year, the Sun moves through a great circle on the celestial sphere, tracing out the same path year after year. This path is called the ecliptic. The ecliptic is not only the path of the Sun in the sky, it also marks the plane of the Earth's orbit around the Sun. The planets orbit the Sun in different planes, but all near the ecliptic. The axis of the Earth's rotation is tilted by 23½° with respect to the plane of the ecliptic, and globes are typically built with a correspondingly inclined rotation axis.

The 23½° North latitude is marked as the Tropic of Cancer and the 23½° South latitude is marked as the Tropic of Capricorn. In the Northern Hemisphere, the Sun passes directly overhead along the Tropic of Cancer only once a year, between June 20 and 22. That day is the summer solstice in the Northern Hemisphere. In the Southern Hemisphere, the Sun passes directly overhead along the Tropic of Capricorn only once a year, between December 20 and 23. Anywhere between the Tropic of Cancer and the Tropic of Capricorn, the Sun passes directly overhead at least twice during the year, but the Sun never passes overhead for people living outside the tropics. Within the tropics, the Sun's position in the sky changes over the course of a year, beginning in the southern sky about December 21, moving to the northern sky in mid-year, and ending the year back in the southern sky.

The tilt of Earth's rotation axis causes a 23½° tilt of the great circle of the ecliptic with respect to the Earth's equator. The ecliptic and the equator intersect at two points, but are otherwise separated by up to 23½°. When the Sun lies at one of the intersections, it is directly overhead somewhere on the equator. This occurs at an equinox, and the points on the sky where the ecliptic intersects the equator are also called equinoxes. Once every year, the Sun passes through the equator going north. This happens in late March, at the "vernal" or "spring" equinox. The "autumnal" equinox occurs when the Sun passes through the equator in late September. On the equinox days, day and night are equally long. This is the origin of the name equinox, which is from Latin for "equal night." On the day of an equinox, the Sun rises due east and sets due west. It doesn't climb to directly overhead, though, except for observers on the Equator. The equinoxes are the only days of the year that have twelve hours of daylight and twelve hours of dark.

After the vernal equinox, moving into northern summertime, the Sun begins rising in the northeast and setting in the northwest. Days in the Northern Hemisphere become longer, while days in the Southern Hemisphere become shorter. The points at which the Sun is at its greatest distance from the equator are called the solstices. The solstices mark the longest and shortest day of the year: the longest day of the year is the summer solstice and the shortest day is the winter solstice.
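The Sun's north-south position through the year can be estimated with a common rule-of-thumb formula. The sketch below is only an approximation (it ignores the slight ellipticity of Earth's orbit and assumes an obliquity of 23.44 degrees), but it reproduces the solstice and equinox behaviour described above.

import math

def solar_declination_deg(day_of_year):
    # Approximate declination of the Sun; day_of_year = 1 on January 1.
    # Near -23.44 deg around Dec 21 and near +23.44 deg around Jun 21.
    return -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)

print(round(solar_declination_deg(172), 1))   # near the June solstice: about +23.4
print(round(solar_declination_deg(355), 1))   # near the December solstice: about -23.4
print(round(solar_declination_deg(80), 1))    # near the March equinox: close to 0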
In the Northern Hemisphere, the summer solstice occurs when the Sun is farthest north, while the winter solstice occurs at the Sun's southernmost point. In the Southern Hemisphere, the solstices are reversed.

Viewed from space, we see that the Earth's tilt changes the exposure of different parts of the Earth to the Sun. Observers in the Northern Hemisphere see the Sun at its lowest position in the southern sky about December 21. They see it this way because the Southern Hemisphere is tilted towards the Sun and the Northern Hemisphere is tilted away. About June 22, the situation is reversed: the Northern Hemisphere is tilted toward the Sun, and the Sun reaches its highest point in the sky at solar noon. For an observer in the Southern Hemisphere, the Sun appears at its lowest point in the northern sky about June 22, and at its highest point about December 21.

One effect of this phenomenon is that during the months of Northern Hemisphere summer, the North Pole receives sunlight twenty-four hours a day. The Sun remains visible through much of the autumn, passing below the horizon at the autumnal equinox. As winter sets in at the North Pole, the Sun is not seen for six months, while that portion of the Earth is tilted away from the Sun. As one moves from either pole toward the Earth's equator, this effect becomes less severe. The nearer one is to the equator, the less difference there is between the number of daylight hours and the number of night hours. At the equator, there is practically no difference between the length of day and night throughout the year.

Clearly, the annual motion of the Earth around the Sun is the cause of Earth's seasons. What effect gives rise to this seasonal change is less obvious. At first glance, one might think that winter occurs when the Earth is farther from the Sun. Once we realize that the Northern and Southern Hemispheres have winter at different times of year, we see that this can't be right. Also, the Earth's orbit is very nearly circular; the change in the Earth's orbital distance is much too small to have a noticeable effect on Earth's climate.

Certainly the length of time each day during which sunlight falls on a particular location has a great deal to do with the seasonal changes in temperature. However, another effect, less obvious but more influential, is the angle at which the sunlight hits a region. At the equator, there is little difference throughout the year, as the Sun varies by only 23.5 degrees on either side of the vertical. The length of a ray's path through the atmosphere on Dec 21 at solar noon is increased by a factor of only 1.1 compared with a direct vertical path, and the reduction of the sunlight is small. The more direct radiation gives the maximum amount of heat and energy to the earth where it falls, and therefore these areas receive the most warmth. Away from the equator, however, the Earth's tilt means that sunlight is not received so directly, and a greater amount of the Sun's energy is blocked by the longer path it takes through the atmosphere. At 50 degrees north latitude, the path the Sun's rays travel through the atmosphere on Dec 21, at solar noon, is increased by a factor of 3.5 compared with a direct vertical path. In general, the rays come in at an angle that depends on the time of day, the latitude of the region, and the position of Earth in its orbit.

The constellations in the ecliptic, the zodiac, have a long history in the tradition of astrology.
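The path-length factors quoted above (about 1.1 at the equator and about 3.5 at 50 degrees north on Dec 21) can be checked with a small sketch. In a simple flat-atmosphere approximation the path length grows as 1/sin(altitude), and the Sun's noon altitude is 90 degrees minus the difference between the latitude and the solar declination; the -23.5 degree December declination below is taken from the text, and the helper name is just an illustration.

import math

def noon_path_factor(latitude_deg, declination_deg):
    # Noon altitude of the Sun, then the relative path length through the atmosphere
    altitude = 90.0 - abs(latitude_deg - declination_deg)
    return 1.0 / math.sin(math.radians(altitude))

print(round(noon_path_factor(0.0, -23.5), 2))    # equator on Dec 21: about 1.09
print(round(noon_path_factor(50.0, -23.5), 2))   # 50 deg N on Dec 21: about 3.52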
In most newspapers, you can read a (completely unscientific) prediction of your future or some personal advice, specific to your birthday. Each entry is associated with one of the constellations of the zodiac and a range of birth dates. In the tradition of astrology, the constellation the Sun occupied on your birthday, your "sign," reveals information about your personality and your future. Interestingly, the dates given for each constellation in the newspaper don't match the Sun's position in the sky for those dates. There is a mismatch of a little more than a month between the dates in the newspaper and the real position of the Sun. The mismatch appears because the dates corresponding to each sign were set thousands of years ago. Over the course of thousands of years, the Earth "wobbles" on its axis, causing the positions of the stars in the sky relative to the calendar to shift. This wobble is caused by the gravitational pull of the Sun and Moon on the Earth's equatorial bulge, and is called precession. It affects the positions of all the constellations with respect to the equinoxes and the pole.

The precession of the Earth is like the movement of a top. If you spin a top with the axis tilted, the axis will slowly rotate as the top spins. Likewise, the Earth's axis remains tilted at 23½°, but the orientation of this tilt changes over the course of thousands of years. Since precession changes the direction in which Earth's pole points, it also changes which star is the North Star, if any. Earlier, we quoted Shakespeare, who referenced Polaris in "Julius Caesar", describing it as the northern star. Strictly, this would be incorrect: Polaris was not "fixed" at the pole in Julius Caesar's time, because Earth's axis then pointed several degrees away from Polaris. Precession is a slow drift, and a difficult motion to detect. The motion of the stars from precession only becomes noticeable to the unaided eye after many, many years of careful observation, although it becomes noticeable much more quickly through a telescope. The Greek astronomer Hipparchus was the first to measure the precession, by comparing his own observations to observations collected a century and a half before.

Precession also changes the point in the Earth's orbit at which the solstices and the equinoxes occur. As the Earth's axis slowly turns, the point in the orbit at which it is tipped most directly towards the Sun changes, and so the timing of the seasons changes. If a calendar didn't account for this, the seasons would drift as the axis precessed. Eventually, the Northern Hemisphere would be cold in July and warm in January, and the Southern Hemisphere would have warm July weather and cold January weather. The calendar takes the extra motion of precession into account by using the tropical year as its basis. The sidereal year is the time it takes for the Earth to make one full orbit around the Sun with respect to the stars; during a sidereal year, the Sun moves fully around the sky and back into the same position with respect to the stars. In a tropical year, the Sun goes from the vernal equinox, around the sky, and back to the vernal equinox again. During this time, the equinox has shifted slightly in its position, so a tropical year is a bit shorter than a sidereal year, and in one calendar year the Earth actually completes slightly less than a full orbit with respect to the stars.

It's easy to notice the progression of the year if you take careful notice of the sky. Next time you see sunrise or sunset, take note of whether the Sun is setting due west or just north or south of west.
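The month-long mismatch in the newspaper zodiac dates, and the small difference between the tropical and sidereal years, both follow from the precession rate. The sketch below assumes a precession period of roughly 25,800 years and an age of about 2,000 years for the traditional sign dates; both figures are round numbers, not values taken from the text.

PRECESSION_PERIOD_YEARS = 25800.0      # assumed full wobble period
SIDEREAL_YEAR_DAYS = 365.2564
TROPICAL_YEAR_DAYS = 365.2422

# Equinox drift: degrees per year, and total drift over roughly 2000 years
drift_deg_per_year = 360.0 / PRECESSION_PERIOD_YEARS
print(round(drift_deg_per_year * 3600, 1))   # about 50 arcseconds per year
print(round(drift_deg_per_year * 2000, 1))   # about 28 degrees, roughly one zodiac sign

# Difference between sidereal and tropical years, in minutes
print(round((SIDEREAL_YEAR_DAYS - TROPICAL_YEAR_DAYS) * 24 * 60, 1))   # about 20 minutes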
Many ancient cultures watched the motion of the Sun carefully and over long periods of time. Using simple techniques and tools, they were able to measure periods like the length of a year very accurately. Ancient people who took notice of celestial motion would have found that the summer solstice occurred roughly every 365 days. They would also have noticed that the solstice was delayed by an extra day every four years. This is the reason for the leap year in the modern calendar. The delay occurs because the length of the year is a little more than 365 days, closer to 365¼ days long. By taking some simple observations over a period of a few years, it is possible to measure the length of a year to surprising accuracy using this technique.

Solar calendars have been used throughout history. The ancient Babylonians thought the year had only 360 days, and made their calendar accordingly. The Islamic calendar is lunar, and its year is about 11 days shorter than the solar year. The Hebrew calendar is lunisolar. Our modern calendar is handed down to us from the Ancient Roman civilization. The calendar took its first mature form as the Julian calendar, almost exactly the same as the one used today. It had 365 days in a year, with a 366-day leap year every four years. In the Julian calendar, years divisible by 4 — such as 1992, 1996, and 2008 — are leap years. This gave the Julian calendar an average of 365¼ days per year, which is very close to the true 365.2422-day length of a tropical year.

Although the drift of the Julian calendar is slow, the error had accumulated enough by the sixteenth century that the Catholic Church became concerned about its effect on the date of the celebration of Easter. The Italian chronologer Aloisius Lilius devised modifications to the Julian calendar to correct the difference, and Pope Gregory XIII instituted the new calendar, now named the Gregorian calendar, in the year 1582. The Gregorian calendar is identical to the Julian calendar except that century years are leap years only if they are divisible by 400. Thus the years 1600, 2000, and 2400 are leap years in the Gregorian calendar, but 1800, 1900, and 2100 are not. This produces a year of average length 365.2425 days, much closer to the correct value than the Julian calendar. The Gregorian calendar accumulates only about 3 days of error over 10,000 years. A short numerical check of these leap-year rules appears after the discussion questions below.

Discussion questions. 1) On the date of the summer solstice, the Sun is overhead on the Tropic of Cancer, and on the date of the winter solstice the Sun is overhead on the Tropic of Capricorn. Draw a quick sketch that shows the relative positions of the Sun and the Earth on those dates. 2) On the date when the Sun is overhead on the Tropic of Capricorn, the Sun is actually located in the constellation of Sagittarius. So why did the Greeks name the Tropic of Capricorn after Capricorn instead of calling it the Tropic of Sagittarius?
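As promised above, here is a minimal sketch of the two leap-year rules. It simply encodes the divisibility tests described in the text and confirms the average year lengths of 365.25 days (Julian) and 365.2425 days (Gregorian).

def is_julian_leap(year):
    # Julian rule: every year divisible by 4 is a leap year
    return year % 4 == 0

def is_gregorian_leap(year):
    # Gregorian rule: century years are leap years only if divisible by 400
    if year % 100 == 0:
        return year % 400 == 0
    return year % 4 == 0

for y in (1600, 1800, 1900, 2000, 2100, 2400):
    print(y, is_gregorian_leap(y))        # True only for 1600, 2000, and 2400

# Average year length over a full 400-year Gregorian cycle
days = sum(366 if is_gregorian_leap(y) else 365 for y in range(2000, 2400))
print(days / 400.0)                       # 365.2425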
General Astronomy/Phases of the Moon. Like the stars and planets, the Moon doesn't stay fixed in the sky but slowly moves as the Earth rotates and as the Moon moves through its orbit about the Earth. To someone taking a casual glance at the Moon, it seems as fixed as the stars. But observation of either the Moon or the stars over a period of several hours will reveal their diurnal (daily) motion across the sky. The Moon rises and sets each day. An observer who watches the Moon over the course of many days will notice the Moon moving not only with the stars, but among them. Every month, the Moon completes one fewer pass across the sky than the stars have completed. We see this because of the Moon's orbit about the Earth. As the Moon progresses through its orbit, its rising and setting times change: each day, the Moon rises and sets roughly fifty minutes later than the day before.

The Moon takes about 27 days to rotate once on its axis, so any place on the surface of the Moon experiences about two weeks of sunlight, followed by about two weeks of darkness. Surface temperatures on the Moon range from roughly -150 C at night to over 100 C during the day. If you were standing on the surface of the Moon during daylight it would be blazing hot, and when the Sun goes down the temperature eventually drops by some 250 degrees, because the Moon has no atmosphere to hold in heat. Furthermore, there are craters around the North and South Poles of the Moon which never see sunlight; these dark places always stay extremely cold. Nearby mountain peaks, by contrast, sit in nearly continuous sunlight and stay comparatively warm.

The "dividing line" between the light and dark halves of the globe is called the terminator, as it terminates the area of darkness (and also that of daylight). Typically, one half of the Moon will be lit up by the Sun, while the half facing away from the Sun remains dark. (The only exception occurs during a lunar eclipse, when the Earth blocks the light falling on the lit side of the Moon.) The part illuminated by the Sun is not, it should be emphasized, always the same portion of the Moon's surface! Like the Earth, the Moon turns on its axis, exposing different areas at different times. In combination with the orbital revolution of the Moon around the Earth, this phenomenon creates the phases of the Moon as seen from Earth. The phrase "dark side of the Moon" arose before the age of artificial satellites, when the back side of the Moon could not be observed; hence the idea that one side was unknown, or "dark".

The tilt of the Moon's spin axis is only 1.54 degrees, and as a result lunar seasons are barely noticeable in most locations on the Moon. However, at the North and South Poles, the height of the Sun above the horizon varies by more than 3 degrees over the course of the year, which affects the extent of the sunlit regions and the surface temperatures at the poles. The coldest areas are located in doubly shadowed regions: small craters that lie within the permanently shadowed regions of larger craters. Temperatures are as low as 35 K (-238 C or -397 F) in these areas, even at noon on the warmest day of the year.

One half of the Moon is always illuminated, but the fraction of the illuminated half that we see, that is, the Moon's phase, depends directly on the relative positions of the Earth, Moon, and Sun. Simply put, it's a matter of how much of the daylight side of the Moon we can see from our current viewing angle.
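The roughly fifty-minute daily delay in moonrise mentioned above can be estimated in a couple of lines. Relative to the Sun, the Moon drifts eastward through 360 degrees per synodic month, and the sky takes 24 hours to turn through 360 degrees relative to the Sun; the 29.53-day synodic month used here is the value quoted at the end of this chapter.

SYNODIC_MONTH_DAYS = 29.53

# Eastward drift of the Moon relative to the Sun, in degrees per day
drift_per_day = 360.0 / SYNODIC_MONTH_DAYS

# Extra time the sky needs to "catch up" with the Moon each day, in minutes
delay_minutes = drift_per_day / 360.0 * 24 * 60
print(round(delay_minutes))    # about 49 minutes, i.e. roughly fifty minutes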
The phase we see depends on how much of the side facing toward us is illuminated at any given time. The sketch below illustrates the phases of the Moon for various Earth-Moon-Sun positions (the Sun is presumed to be off the diagram to the right); next to each "Moon" is a black-and-white sketch of the phase as it would be seen from Earth when the Moon is in that position.

When the Moon is between the Earth and the Sun, the sunlit side of the Moon faces completely away from us, and therefore we have the dark "New Moon". When the Moon reaches the other side of the Earth, the sunlit side faces fully toward us, and we have the "Full Moon". As the Moon moves from New to Full and the sunlit side we see grows, we say the Moon is waxing; as we see less of it, in the decline from Full to New Moon, we say it is waning. Midway between the Full Moon and New Moon, half of the sunlit side of the Moon is visible from the Earth. Because we then see half of the illuminated half (one quarter of the Moon's whole surface), this is referred to as a "quarter Moon". When the Moon is waxing and reaches this position, it is called the "first quarter Moon"; when waning, the "third quarter Moon." When less than a quarter Moon is visible, it is referred to as a "crescent Moon", waxing or waning as appropriate. When more than a quarter Moon is visible, it is referred to as a "gibbous Moon", again waxing or waning.

The Moon's orbit and rotation speed are just such that the Moon always shows the same side to Earth, aside from only a slight "wobble." The pattern of markings on the side facing Earth is very familiar in history and culture. Western society has long imagined a face in the markings — the "Man in the Moon." Other cultures have seen a woman, a rabbit, a frog or other creatures. The Moon always shows this same face to Earth because its rotation is "locked" with its orbit, for reasons we will see later, when we discuss gravity. More precisely, the time it takes for the Moon to complete a trip in its orbit is the same as the time it takes for the Moon to rotate once around its axis. Because we see the Moon moving around us, it appears as though the Moon isn't turning at all. If you stood on the Moon and looked up at Earth in the sky, you would see that it never rises, never sets, and hardly moves in the sky at all. Imagine, for example, standing at the middle of the face of the Moon that we see; from there, the Earth would always remain straight overhead. If you stood at the edge of the face we see from Earth — the "limb" — you would always see the Earth on your horizon.

Up to now, we have considered the time for the Moon to complete one orbit around the Earth to be the same as the time for it to pass once through its series of phases, but this is not quite right. The Moon's phase at a particular point in its orbit changes as the Earth goes around the Sun. Once the Earth has gone halfway around the Sun, the position in the Moon's orbit corresponding to a given phase has also moved halfway around, since the Sun is on the opposite side. Thus, it takes a little longer for the Moon to go through its phases than it does to complete one orbit about the Earth. Suppose a Full Moon marks the beginning of both the period of the orbit of the Moon about the Earth and the period of the orbit of the Earth about the Sun. At the time of Full Moon, the Sun, Earth, and Moon are aligned. Once the Moon returns to that position in its orbit, the Earth has moved a little way around the Sun, so the Moon is no longer aligned with the Earth and the Sun.
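The relationship between the Moon's position in its orbit and its phase can be put into a rough formula. If ψ is the Moon's angular distance from the Sun as seen from Earth (0 degrees at New Moon, 180 degrees at Full Moon), the illuminated fraction of the disk we see is approximately (1 - cos ψ)/2. This is only a sketch: it treats the Sun as infinitely far away and ignores the small corrections needed for precise work.

import math

def illuminated_fraction(elongation_deg):
    # Fraction of the Moon's disk that appears lit, for a given Sun-Earth-Moon angle
    return (1 - math.cos(math.radians(elongation_deg))) / 2

print(illuminated_fraction(0))      # 0.0 : New Moon
print(illuminated_fraction(90))     # 0.5 : first or third quarter
print(illuminated_fraction(180))    # 1.0 : Full Moon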
It takes about two more days for the Moon to move back into alignment with the Earth-Sun line; this longer period is the synodic month. The time for the Moon to complete one orbit with respect to the stars, called a sidereal month, is about 27 days and 8 hours. The time for it to move through its full cycle of phases, a synodic month, is about 29 days and 12 hours.
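The two-day difference between the sidereal and synodic months follows from the Earth's own motion around the Sun, and can be checked with the usual relation 1/synodic = 1/sidereal - 1/year. The numerical values below are commonly quoted figures assumed for the sketch rather than values taken from the text.

SIDEREAL_MONTH_DAYS = 27.3217
SIDEREAL_YEAR_DAYS = 365.2564

# The Moon must make up the angle the Earth has moved around the Sun,
# so its Sun-relative (synodic) rate is the difference of the two rates.
synodic_month = 1.0 / (1.0 / SIDEREAL_MONTH_DAYS - 1.0 / SIDEREAL_YEAR_DAYS)
print(round(synodic_month, 2))    # about 29.53 days, i.e. 29 days and roughly 12-13 hours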