Sin Chao, Jonathan!. Sin Chao, Jonathan! is a story that teaches the Masa Tang language via the "direct" or "natural" method, introducing new vocabulary and grammar over the course of the story. It is a translation of "Salute, Jonathan!", a course that teaches Interlingue (Occidental). Masa Tang is a language created by Michael Wirth, who also created Ekumenski. Masa Tang draws inspiration from the languages of Southeast Asia while also incorporating vocabulary from languages around the world.

Perek wan (Perek 1). Man jeng nai chyang. Man sha buk yom. Man sem chyang. Man jeng nai chyang ba? Yep, i jeng nai chyang. Man jeng nai…man ba? Mai, i mai jeng nai man. I jeng nai chyang. Man jeng nai treng ba? Mai, i mai jeng nai treng. I jeng nai chyang. Man jeng nai chyang. Man sha buk yom ba? Yep, i sha buk yom. Buk yom sha man ba? Mai, buk yom mai sha man. Buk yom mai sha. Man sha. Man sha buk yom. Chyang sem man ba? Mai, chyang mai sem man. Chyang mai sem. Man sem. Man sem chyang ba? Yep, man sem chyang. Man jeng wea? I jeng nai chyang. Man sha wot? I sha buk yom. Man sem wot? I sem chyang. I jeng nai chyang, en i sha buk yom, en i sem chyang. Man "jeng" nai buk yom ba? Mai, i "sha" nai buk yom. Man sem man ba? Mai, i mai sem man; i sem chyang. Man sang tom. Man sang bong, en man teng kop swang. I ting. I ting re chyang. I ting: “Chyang sang wot? Chyang sang bong ba? Chyang sang tom ba?” Man sha chyang ba? Mai, i mai sha chyang; chyang sang tom. I sha buk yom; buk yom mai sang tom. Buk yom ting re man ba? Mai, buk yom mai ting. Man ting. I ting re buk yom, en ting re chyang. En i sha nai buk yom. I sha re chyang. I sha: “Chyang sang bong, en chyang sang tom.” I ting: “Chyang sang bong”; i ting et chyang sang bong. I ting: “Chyang sang tom”; i ting et chyang sang tom. I ting re chyang, en i ting re buk yom. Buk yom sang wot? Buk yom sang wea man sha; i sha nai buk yom. Nai buk yom, man sha re chyang.
Nai buk yom, man mai sha re treng; man jeng nai chyang, mai nai treng. I mai ting re treng; i ting re chyang, chyang Myunik. Chyang sang wot? I sang Myunik. Myunik sang wea? I sang wea man jeng. Man sang wea? I sang nai Myunik. Yep, Myunik sang chyang tom, en chyang bong. Man ting et Myunik sang chyang bong, en i ting et i sang chyang tom. Man teng kop swang. I sei: “Hai, Myunik!” I sang man bong! Man jeng en ting: “Treng sang wea?” I sem…i sem treng! I ting: “Treng!” Nau i mai ting re buk yom en mai ting re chyang; i ting re treng! Perek dwa (Perek 2). Nau man sang nai treng. I mai sang nai Myunik; i sang nai treng. I fai byajeng. I ting: “Nau mi fai byajeng de Myunik na Byena. I sang byajeng bong. Mi mak byajeng.” I ting re Myunik. I ting: “Nau mi sang nai treng, men onteng mi ya sang nai Myunik. En nau mi sha buk yom nai treng, men onteng mi ya sha buk yom nai Myunik. En nau mi ting nai treng, men onteng mi ya ting nai Myunik. Onteng mi ya ting re Myunik nai Myunik, en nau mi ting re Byena nai treng. Nau mi sang nai treng, mai nai Byena. Men mi ting en sha re Byena.” Nau man ting nai Myunik ba? Mai, i mai ting nai Myunik. I ting nai treng. Onteng i ya ting nai Myunik. I sei: “Hai, treng!” Man sang nai treng, en i fai byajeng na chyang. Chyang mai sang Myunik; Myunik sang chyang onteng. Chyang sang Byena; Byena sang chyang kyo. Man ting re Myunik en Byena. I ting: “Myunik ya sang chyang onteng, en Myunik ya sang bong. Nau i sang kyo, en mi sang nai treng; treng sang bong. Byena lo sang bong ba?” Man ting re Myunik: Myunik ya sang chyang onteng. I ting nai treng: kyo i sang nai treng. En i ting re Byena: Byena lo sang chyang dilap. En i ting: “Myunik ya sang tom. Treng sang tom. Byena lo sang tom?” En i ting: “Nai Myunik mi ya sha nai buk yom. Nai treng mi sha nai buk yom. Nai Byena mi lo sha in buk yom ba? Yep, dilap nai Byena mi lo sha nai buk yom. Mi mak buk yom.” Man ting mya (i ting mya = i ting en ting en ting), en i sha mya. Yep, i sang man kung kop swang. 
Man kung kop swang sha mya, en ting mya. I sang Jonathan; Jonathan sang man kung kop swang. I sha: “Mi sang Jonathan. Mi sang nai treng. Onteng mi ya jeng nai Myunik; dilap mi lo jeng nai Byena.” I ting, en sha: “Treng…i sang bong, men hen. I mai sang nyu; i sang hen. Nai Myunik ma treng sang hen? Yep, ma treng nang Myunik sang hen. Men ma treng nang Myunik sang bong, en mi mak ma treng nang Myunik. Onteng mi ya mak treng nai Myunik, en kyo mi mak treng nau, en dilap mi lo mak treng nai Byena. Mi mak treng!” Jonathan sha: “Myunik sang chyang bong en chyang hen, en Byena sang chyang bong en chyang hen. Myunik en Byena mai sang nyu, men sang bong. Myunik en Byena sang ma chyang hen, men bong. Ma chyang mai sang nyu, men bong. Mi mak chyang!” Jonathan ting, et yom dwa nang byajeng sang bong. I sei: “Kyo ya sang yom dwa bong nang byajeng. Mi mak byajeng!” Perek sam (Perek 3). Jonathan jeng nai Byena: chyang Byena. Jonathan ting et Byena sang bong, en et Byena sang moi. Jonathan mai ting et Byena sang lelik; i ting et Byena sang moi. I sei: “Byena sang chyang moi! Mi lo sha re i!” En i sha nai buk yom re Byena. Nai buk yom i sha: “Yom sam mi sang chok bong! Nai yom dwa mi ya jeng nai Myunik. Men nau mi jeng nai Byena: Byena mai sang Myunik. Myunik en Byena sang dwa chyang; Myunik mai sang Byena en Byena mai sang Myunik. Mi jeng nai Byena en chyang sang chok mai. Mi mak Byena; chyang mai sang lelik. Byena sang chok mai! Men mi teng mungdoi.” Wot? Jonathan teng mungdoi? Wot mungdoi? Nau i mai sem buk yom; i sem chyang en ting. I ting mya re mungdoi. Mungdoi nang Jonathan sang et i mak Byena, men i mai teng tem. I ting: “Hm. Nau ji sang pyet (7). Nai ji syet treng sai. Ji ten (10) mweng ji syet (7) sang sam (3) ji. Sam ji mai sang tem mya fo chyang moi! Mi mai teng tem. Teng tem sang bong, men mi mai teng i! Sha buk yom sang bong, men mi mai teng tem fo sha i! Wot fai nai Byena?” I ting: “Mi mai teng tem mya. Wot fai - sha buk yom, o nyam, o sem chyang? 
Mi lo fai wot?” I ting pyu, en sei: “Mi teng pik ting bong! Wan sunggan…mi ting. Treng mi sai nai ji ten (10). Teng treng pyu sen, nai ji ten wan (11), o ji ten dwa (12), ten sam (13), ten kwat (14) o ten pyet (15) ba?” I sem…yep! Teng treng yang sai nai ji ten pyet. Nau Jonathan sang bong jai. Jonathan sei: “Ten pyet (15) mweng syet (7) sang ot (8). Nau mi teng tem ot ji! Mi lo fai wot?” I sei: “Mi sa! Mi lo nyam bitek. En mi lo dop olut. Wan sunggan…mai, mi lo dop dwa olut, o sam olut. Pik ting bong!” Nau Jonathan dop olut nai Byena, en nyam bitek. I sang bong jai. I sei: “Mi sang bong jai taung mya! Mi mak chyang Byena. Mi mak taung mya byajeng!”
National Etiquette Differences in Europe/Introduction. Etiquette in Europe is not uniform. Even the regions of Europe do not have common manners. For example, a Dane will prefer direct speech while a Finn will tend to prevaricate. Even within a single country there may be different customs, especially where there are different linguistic groups, as in Switzerland with its French, German and Italian speakers. Age and social context may determine the level and details of the customs which are followed. The age issue is clearly observable in countries that have passed through some historical upheaval, such as a war, a revolution or a change in political systems. In the former satellites of the ex-USSR, for example, there is a huge generational divide between those who grew up in the Communist era and those who did not; the same is true of those who lived under Fascist regimes. Such deep social changes have an impact on what is deemed appropriate behavior in a society. European etiquette globally. Many customs regarding good behavior have been exported to places with cultural traditions based in Europe, including America, Oceania, South Africa and so on. Therefore, much of this article is limited to the discussion of etiquette which is peculiar to only a particular part of Europe. Generalizations. While Europe contains a wide variety of social traditions, it is also (excluding Russia) relatively compact, well-traveled and urbanized compared to many other continents or cultural areas. As such, many expectations regarding etiquette are shared across Europe. Avoid stereotypes and generalizations: they are likely to cause offense in the country you are visiting and show your own country in a negative light. Generalizations are rarely accurate; not all British people drink tea with biscuits at least once a day, just as not all Americans chew spitting tobacco and wear cowboy hats. There are cultural variations everywhere, so never make an assumption: asking is the safest thing to do.
Consideration. Etiquette begins with some sensitivity to the perceptions and feelings of others and the intention not to offend. Failing to thank and compliment a host, using a mobile phone in a theater, taking the last bit of a dish without offering it to others and many other examples of bad manners fall into this category.
Making Websites with Flask/Creating a web app. Now that you have learned the basics of Flask website development, it is time for the final project. We will be making a web application in which someone can upload their name, age and favourite food, and every other user can view this information. Creating the homepage. We will now create the homepage for the website so that when someone connects, they will know what this is. Here is the code:

from flask import *

app = Flask(__name__)

@app.route('/')
def home():
    # You can modify this HTML how you see fit
    return """
    <!DOCTYPE html>
    <html>
      <head>
        <title>Learn about other users</title>
      </head>
      <body>
        <h1>Welcome to my Web App!</h1>
        <a href="/signup">Click here to create an account!</a>
        <p style="font-family: sans-serif;">This is a cool website to learn about fellow users.</p>
      </body>
    </html>
    """

if __name__ == "__main__":
    app.run(debug=True)

As you can see, it links to a sign-up page that does not exist yet. We will now address this issue. Making the sign up page. For someone to use your website, they must create an account. But first, we must create a list of all accounts. We will structure it like this: codice_1. In the next chapter we will code this page, and more.
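Since the chapter's codice_1 placeholder for the account list is lost here, the sketch below shows one plausible way such a structure could look. The field names and helper functions are assumptions for illustration, not the book's actual code:

```python
# Hypothetical account store for the final project. Each account is a
# dict holding the three fields the chapter mentions: name, age and
# favourite food. A plain list keeps every registered account.
accounts = []

def sign_up(name, age, favourite_food):
    """Append a new account record and return it."""
    account = {"name": name, "age": age, "favourite_food": favourite_food}
    accounts.append(account)
    return account

def describe(account):
    """Render one account roughly as the info page might display it."""
    return f"{account['name']} ({account['age']}) likes {account['favourite_food']}"

sign_up("Alice", 30, "pizza")
print(describe(accounts[0]))  # Alice (30) likes pizza
```

A sign-up route would then only need to call `sign_up(...)` with the submitted form values, and the viewing page would loop over `accounts`.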
Meitei Culture/Law & Governance. The traditional law and governance of Meitei civilisation was predominantly monarchical in nature. In 429 CE, during the reign of King Naophangba, a proto-Constitution was established in Kangleipak. The law was expanded through several amendments until it was finalised in 1110 CE as a written Constitution of the Meitei kingdom. Since it was enacted by King Loiyumba (1074-1122 CE), it became known in later centuries as the "Loiyumpa Silyel".
Meitei Culture/Architecture. The traditional Meitei art of designing and constructing buildings, specifically in the realms of Kangleipak, is known as "Meitei architecture" (also "Meetei architecture", "Manipuri architecture" or "Kanglei architecture"). Most of the buildings are semi-concrete or wooden, but heavily intricate in design and detail. The best-known examples stand in the form of temples, gates and public houses, besides royal palaces and royal court buildings.
Meitei Culture/Economy. The economy of Meitei civilisation relies heavily on agricultural farmland established in the fertile plains of the Imphal river and its tributaries, in the Imphal Valley of Kangleipak. Another important economic resource is Loktak lake, located in the southwestern part of the Imphal valley, which provides thousands of Meitei fishermen with their livelihoods; fish is one of the two major staple foods of the Meitei people, the other being rice (paddy). The Ima Keithel (literally "Mothers' Market" in Meitei), located in the heart of Imphal city, is a commercial hub for Meitei women. No man is allowed to trade in the marketplace as a shopkeeper, and it is said to be the world's only market run entirely by women. This centuries-old market still operates today, giving Meitei womenfolk the opportunity to hold a lion's share of the income-generating commercial activities in Meitei society.
Meitei Culture/Sex & Gender. The two primary genders in traditional Meitei culture are male and female, derived from the concepts of Father Sky (Sky God) and Mother Earth (Earth Goddess). Meitei society is predominantly male-dominated in nature. However, women are given great liberty in comparison to other cultures. One classical example is the "Ima Keithel" (meaning "Mothers' Market"), which is exclusively administered by Meitei womenfolk and in which no man is allowed to sell anything. This major commercial hub, controlled by Meitei women, is located in the metropolis of Imphal. The culture of treating women with respect and honour derives from the worship of goddesses and divine ladies in the traditional Meitei religion (known as "Sanamahism" or "Lainingthouism" in modern days). The third gender is given due recognition in traditional Meitei society, though its occupations are limited. A third-gender person is traditionally most likely to become a priest, in the service of the gods. This does not mean that priestly work is done only by them: besides the male priests ("maiba" or "amaiba" in Meitei) and the female priests ("maibi" or "amaibi" in Meitei), there is a special category of transgender priests known as "nupa maibi" (literally "male priestess" or "male nun" in Meitei, showing the characteristics of both male and female). The second most likely traditional occupation for a transgender person, other than religious work, is acting in the traditional Meitei courtyard theatre, called "Shumang Kumhei" (also known as "Shumang Kummei" or "Shumang Leela"). Again, this does not mean that all Shumang Kumhei actors are transgender, as many are heterosexual straight males, and in special cases actresses also perform. These two traditional occupations still prevail in present times, though their scope of work has expanded to a much greater extent.
In the modern era, Meitei transgender people are best known for dominating the fields of costume design, beauty parlours and make-up. This has created a popular stereotype (or misconception) among the masses that such fashion-related jobs should be done only by transgender people.
Samoan/Greeting.
Talofa (lava) - Greetings
Malo (lava) - Greetings
Malo le soifua - Good health to you
Oa mai oe? - How are you?
Manuia, faafetai - Good, thanks. Ae a oe? - What about you?
Manuia foi faafetai - Good also, thank you.
Fa - Bye
Manuia le aso - Have a good day
Meitei Culture/Literature. Meitei literature has both written and oral forms. All Meitei-language literary works produced before the 18th century CE were written in the traditional Meetei Mayek writing system. During and after the 18th century CE, the Bengali script replaced the Meitei script for writing the Meitei language. However, in the later part of the 20th century CE, the Meitei script rose once again to replace the Bengali script. With these changes of writing system, word usage in Meitei literature also changed, owing to acculturation. Meitei script enthusiasts usually love to write linguistically pure Meitei, free from foreign, borrowed or corrupted words. Bengali script enthusiasts, on the other hand, usually love to mix Meitei writing with words borrowed or derived from the Bengali language and Bengali's ancestor language, Sanskrit. Thus, in the case of Meitei culture, the writing system matters to the language of literary works. Meitei people believe that "language" is their mother and "script" is their father; if the Bengali or Latin script is used to write their language, they believe it is equivalent to their mother being under the control of another man who is not their father. This notion leads the masses to revive the traditional Meetei Mayek writing system. Meitei oral literature (orature) is mainly composed of folktales and folk songs. As oral retellings may vary from one narrator to another, folktales and folk songs usually exist in multiple variations. The possibility of variation is, by contrast, very limited in written literature.
Samoan/Alphabet.
Vowels. Aa Ee Ii Oo Uu
Vowels with a Macron. Āā Ēē Īī Ōō Ūū
Consonants. Ff Gg Ll Mm Nn Pp Ss Tt Vv
Borrowed Letters. Hh Kk Rr
Meitei Culture/Non-Meiteis. The concept of someone being a non-Meitei (alias non-Meetei) is very complex if they are indigenous to Kangleipak, because the Meitei-language term "Meitei" (or "Meetei") means "an amalgamation of people". The group of people identified as "Meitei people" in today's world is, in most cases, defined by their mother language, the so-called Meitei language. However, anyone who follows the traditional Meitei religion (known as "Sanamahism" or "Lainingthouism" in present times), irrespective of their mother language, may also be identified as a Meitei. Likewise, irrespective of mother language and religion, any person who has a Meitei family name (or surname) or ancestral roots may be identified as a Meitei. Therefore, any person who fits none of the above conditions is classified as a non-Meitei.
Meitei Culture/Slavery. Slavery in Meitei civilization was not permanent. If any slave wanted to quit, they were free to do so without coercion, though on certain terms and conditions (usually a contractual period) set by their masters or mistresses. Marriages of slaves to their masters or mistresses usually did not happen in traditional Meitei society.
Electricity and magnetism/Light. The wave equation. In the absence of charges and currents, Maxwell's equations are \(\vec{\nabla}\cdot\vec{E}=0\), \(\vec{\nabla}\cdot\vec{B}=0\), \(\vec{\nabla}\times\vec{E}=-\frac{\partial\vec{B}}{\partial t}\) and \(\vec{\nabla}\times\vec{B}=\mu_0\varepsilon_0\frac{\partial\vec{E}}{\partial t}\). We can deduce: \(\vec{\nabla}\times(\vec{\nabla}\times\vec{E})=-\frac{\partial}{\partial t}(\vec{\nabla}\times\vec{B})=-\mu_0\varepsilon_0\frac{\partial^2\vec{E}}{\partial t^2}\). Or \(\vec{\nabla}\times(\vec{\nabla}\times\vec{E})=\vec{\nabla}(\vec{\nabla}\cdot\vec{E})-\nabla^2\vec{E}\) (the proof of this equality, from the definitions of the gradient \(\vec{\nabla}\), the divergence \(\vec{\nabla}\cdot\), the curl \(\vec{\nabla}\times\) and the Laplacian \(\nabla^2\), is given at the end of this chapter). \(\vec{\nabla}\cdot\vec{E}=0\), and therefore \(\nabla^2\vec{E}=\mu_0\varepsilon_0\frac{\partial^2\vec{E}}{\partial t^2}\), so \(\frac{\partial^2\vec{E}}{\partial t^2}=c^2\nabla^2\vec{E}\), where \(c=\frac{1}{\sqrt{\mu_0\varepsilon_0}}\). \(\frac{\partial^2\vec{E}}{\partial t^2}=c^2\nabla^2\vec{E}\) is the wave equation. Its solutions are waves that propagate at the speed \(c\) of light, or superpositions of such waves. Plane waves. A wave that propagates throughout space is a function \(u\) of 4 variables, x, y, z and t: 3 space coordinates and one time coordinate. \(u(x,y,z,t)\) is the field at the point \((x,y,z)\) at time \(t\). To understand a plane wave, we can think of a mille-feuille whose leaves can slide over each other. A wave can then propagate in the direction perpendicular to the leaves. If a moving leaf carries its two neighbors and is carried by them, a movement on one side of the mille-feuille can propagate to the other side. This propagation movement is a plane wave. If the direction of propagation is the x axis, the leaves are vertical planes and the movement of a point does not depend on its position on the leaf, it only depends on its x abscissa. A plane wave can therefore be represented by a function \(u(x,t)\) which depends only on a spatial coordinate x and time t. It is a one-dimensional wave, because it depends only on one spatial coordinate. If, moreover, the leaves always move in the same vertical direction, their movement can be measured by a single number, their movement in the vertical direction. The plane wave can then be represented by a function \(u(x,t)\) whose value is a single real number. If the motion were more complicated, \(u\) would be a vector. Let \(f\) be a real function of a single real variable. \(f(x)\) is a real number that depends only on the real number \(x\).
\(f\) can represent any curved line in a plane that never goes back, extending infinitely from left to right. This way, for each \(x\), there is a unique point on the line whose coordinate is \(x\). The distance from this point to the horizontal axis is \(f(x)\). Let \(u\) be the real function of two real variables defined by \(u(x,t)=f(x-ct)\). \(u\) represents a plane wave which propagates without distortion at speed \(c\). The sign of \(c\) determines the direction of propagation. The mille-feuille is deformed by the propagation of the wave, but the shape \(f\) of this deformation does not change. It remains the same throughout its journey. Such a wave can therefore transmit a message from one point to another, as far away as we want. \(c\) is the signal propagation speed. An electromagnetic wave is a wave of an electric force \(\vec{E}\) and a magnetic force \(\vec{B}\) which propagates at the speed \(c\) of light. The wave \(u\) is a solution of the wave equation \(\frac{\partial^2 u}{\partial t^2}=c^2\nabla^2 u\). Proof: as \(u\) does not depend on y or z, \(\frac{\partial^2 u}{\partial y^2}=\frac{\partial^2 u}{\partial z^2}=0\), so \(\nabla^2 u=\frac{\partial^2 u}{\partial x^2}\). Now \(\frac{\partial u}{\partial t}=-cf'(x-ct)\), so \(\frac{\partial^2 u}{\partial t^2}=c^2 f''(x-ct)\). We can show in the same way that \(\frac{\partial^2 u}{\partial x^2}=f''(x-ct)\). So \(\frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2}\) for all x and all t. Are electromagnetic waves mechanical? A mechanical wave is a wave that propagates by setting masses in motion. Maxwell believed that electromagnetic waves were similar to mechanical waves that propagate in a material medium, which he called the ether, and which was supposed to fill the entire Universe, since light propagates everywhere, and because mechanical waves like sound do not propagate in a vacuum. When a wave is mechanical, its material support does not propagate. The moving masses oscillate around an equilibrium position. In a homogeneous and isotropic medium, the speed of wave propagation is the same in all directions. It is the speed of the wave relative to its material support, which we measure when we are at rest in relation to this support.
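The claim that any travelling profile f(x - ct) solves the one-dimensional wave equation can be checked numerically. This sketch is an illustration added here, not part of the original chapter; the Gaussian profile, the speed and the sample point are arbitrary choices:

```python
import math

# Check that u(x, t) = f(x - c t) satisfies d2u/dt2 = c^2 * d2u/dx2,
# using central finite differences for the second derivatives.
c = 2.0
f = lambda s: math.exp(-s * s)      # any smooth profile works
u = lambda x, t: f(x - c * t)

h = 1e-3                            # finite-difference step
x0, t0 = 0.7, 0.3                   # arbitrary sample point
d2u_dt2 = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
d2u_dx2 = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

print(abs(d2u_dt2 - c**2 * d2u_dx2))  # close to zero
```

The residual is only finite-difference error of order h²; shrinking h (within floating-point limits) shrinks it further.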
If we are moving relative to the material support, the speed of the waves is not the same in all directions. If light propagated in the ether, we would have to observe this dependence of its speed on its direction, because we cannot always be at rest with respect to the ether. We tried to observe it, but we never succeeded. The theory of relativity posits that the speed of light is always the same for all observers, regardless of their motion. This is absurd from a classical point of view because the speeds measured by observers in relative motion are always different. Einstein showed that there is however no contradiction, provided that we admit that the simultaneity of events depends on the movement of the person observing them. According to Einstein, time is not absolute, because the simultaneity of events is not absolute, but relative to the observer. If light propagated in the ether, the measurement of its speed would depend on the movement of the observer relative to the ether. So the theory of relativity dictates that the ether does not exist. Light is a wave without material support. Electromagnetic waves are waves of electric and magnetic force that propagate in a vacuum. They are not mechanical waves because they can propagate without putting masses in motion. Monochromatic waves. A wave \(u\) is monochromatic with frequency \(f\) if and only if for all \(x\) and all \(t\), \(u(x,t)=A(x)\cos(\omega t+\varphi(x))\), where \(\omega=2\pi f\) is the angular frequency, or the pulsation. A monochromatic sound wave is pure sound. The higher its frequency, the more high-pitched it is. A monochromatic light wave is a pure color, one of the colors of the rainbow, but extremely bright, as if we could see the rainbow in front of the night sky. Low frequency light is red. If we increase the frequency, we reach violet, the highest frequency, passing through orange, yellow, green and blue, in that order. Beyond violet, we find ultraviolet, X-rays and \(\gamma\) rays.
Below red, we find infrared, microwaves and radio waves, the frequency of which can be as low as we want. White light is a superposition of monochromatic lights. A rainbow and a prism separate the components of white light and thus reveal its spectrum, that is to say its composition in monochromatic lights. A monochromatic plane wave of pulsation \(\omega\) which propagates at speed \(c\) is represented by a function \(u\) such that \(u(x,t)=A\cos(\omega t-kx+\varphi)\), where \(A\) is a constant, \(k=\frac{2\pi}{\lambda}\), and \(\lambda\) is the wavelength. The definitions of \(c\), \(\lambda\), \(u\), \(\omega\) and \(k\) lead to \(c=\frac{\omega}{k}\). The superposition of waves. If \(u\) and \(v\) are two wave solutions of the wave equation \(\frac{\partial^2 u}{\partial t^2}=c^2\nabla^2 u\), then \(u+v\) is also a wave solution of the wave equation. Proof: the derivative of a sum is the sum of the derivatives, so \(\frac{\partial^2(u+v)}{\partial t^2}=\frac{\partial^2 u}{\partial t^2}+\frac{\partial^2 v}{\partial t^2}\) and \(\nabla^2(u+v)=\nabla^2 u+\nabla^2 v\). Photons are particles of light. Electromagnetic waves represent their movements. Photons do not collide. An illuminated object can emit light without being hindered by the light illuminating it, because light going in one direction is not hindered by light going in the other direction. Photons can pass through each other without knowing each other, as if each could pass through the other. The superposition of waves explains this mutual indifference of photons. Two waves propagating in opposite directions are superposed at the point where they meet, and this superposition does not affect their propagation. The superposition of waves is a very general principle with which most phenomena are explained, in particular the decomposition of white light, because it is a superposition of monochromatic waves. Standing waves. A wave \(u\) is stationary if and only if for all \(x\) and all \(t\), \(u(x,t)=g(x)h(t)\). Such a wave does not propagate. The shape defined by \(g\) vibrates in place but does not move.
The vibrations of a string stretched at its ends and of the surface of a drum are standing sound waves. The vibrations of air enclosed in a cavity, such as inside a guitar, are also standing sound waves. Let \(u\) and \(v\) be two identical monochromatic plane waves, except that they propagate in opposite directions: \(u(x,t)=A\cos(\omega t-kx)\) and \(v(x,t)=A\cos(\omega t+kx)\). \(u+v\) is a standing wave. Proof: \(\cos a+\cos b=2\cos\frac{a+b}{2}\cos\frac{a-b}{2}\). So \(u+v=2A\cos(\omega t)\cos(kx)\). If light is trapped between two parallel mirrors, both perpendicular to its direction of propagation, it is reflected on the two mirrors and therefore propagates at the same time in two opposite directions. It thus produces a standing wave. Light trapped between two mirrors inside a laser is a standing wave. The reflection of light. Sound waves are pressure waves. All bodies, solid, liquid or gaseous, can vibrate. When they vibrate, they cause neighboring bodies to vibrate. This vibration is sound. When it is transmitted through the air to an ear, it vibrates its eardrum, which is like the surface of a very small drum. This vibration is transformed by neurons into electrical signals which propagate to the brain. Sound is also a density wave, because the pressure of bodies depends on their density. Sound is also a wave of velocity, because in the absence of movement, the density of bodies is constant. Echo is the reflection of sound off a cliff or wall, in the same way that light is reflected by a mirror. A soft wall does not reflect sound. The harder the wall, the better it reverberates sound. The reverberation of sound is its reflection, its bouncing off the hard walls. A very hard wall does not vibrate, or almost not. Its velocity field is zero. The same goes for the air in contact with it. A wave on the surface of the water is reflected because it must remain horizontal, perpendicular to the wall which reflects it. So its slope is equal to zero.
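The trigonometric identity used in the standing-wave proof can be verified numerically. This sketch is added for illustration; the amplitude, pulsation and wavenumber are arbitrary values:

```python
import math

# Two identical monochromatic plane waves travelling in opposite
# directions sum to the standing wave 2 A cos(w t) cos(k x).
A, w, k = 1.5, 3.0, 2.0
u = lambda x, t: A * math.cos(w * t - k * x)   # wave going right
v = lambda x, t: A * math.cos(w * t + k * x)   # wave going left
standing = lambda x, t: 2 * A * math.cos(w * t) * math.cos(k * x)

# largest deviation over a few sample points
worst = max(abs(u(x, t) + v(x, t) - standing(x, t))
            for x in [0.0, 0.3, 1.1] for t in [0.0, 0.5, 2.0])
print(worst)  # ~0, by cos a + cos b = 2 cos((a+b)/2) cos((a-b)/2)
```

The sum never moves: its nodes, where cos(kx) = 0, stay fixed while the rest of the profile oscillates in place.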
If a mirror is metallic, the electric field parallel to its surface is zero, or almost zero, because the electric charges are mobile: they are constantly moving so as to cancel the electric forces that move them. A metal mirror behaves towards light like a hard wall towards sound waves, because it cancels the electric field parallel to its surface. Consider a plane wave \(u\) which propagates without deformation towards the right (increasing x), emitted at a point located at \(x=0\) in the direction of a reflecting wall located at \(x=L\). \(u\) is determined by its movement at \(x=0\), which can be the emitting source of the wave: \(u(x,t)=u(0,t-\frac{x}{c})\), because \(u(x,t)=f(x-ct)\). We seek a solution \(v\) of the wave equation such that \(v(L,t)=0\) for all t, since the field must cancel on the reflective wall: \(v(x,t)=u(0,t-\frac{x}{c})-u(0,t-\frac{2L-x}{c})\). By definition of \(v\), \(v(L,t)=0\) for all \(t\). \(v\) is the superposition of two waves that are symmetrical to each other. One is the reflection of the opposite of the other relative to the reflecting wall. The two waves \(u(0,t-\frac{x}{c})\) and \(-u(0,t-\frac{2L-x}{c})\) propagate in opposite directions. The wave \(-u(0,t-\frac{2L-x}{c})\) could be emitted by a point at \(x=2L\) whose movement, \(-u(0,t)\), is opposite to that of the initial emitter. Everything happens as if the reflecting wall produced a wave emitted by a body exactly symmetrical to the body emitting the initial wave. It's the mirror effect. We see in a mirror as if the bodies in front of it were present behind it. Electromagnetism therefore explains why metal surfaces are always shiny and reflective. Rough surfaces also reflect light, unless they are black and perfectly absorbent. But they do not have the effect of a mirror. The eye and the formation of images. For an image to form, it is enough for each point on the image plane to receive the light emitted by a single point on the object. The wider the light source that illuminates a point in the image, the blurrier the image.
We can see an image on a white wall in a dark room if we let the light pass through a small hole made through the shutters. We can thus see the sunny opposite facade projected upside down on a wall or on a white sheet. Each point on the wall receives light from a single point on the facade if the hole is very small. If the hole is large, the image is blurry. A curved interface between two transparent materials has the property of making the light passing through it converge, or diverge. Images form at the back of the eye, upside down, because light from one point on an object converges on a single point at the back of the eye. We explain the propagation of light in transparent materials with Maxwell's equations. The formation of images is therefore also a consequence of Maxwell's equations. The refraction of light. Refraction of light explains why a stick appears broken by the surface of water and why lenses can cause light waves to converge or diverge. Refraction is explained by the difference in the speed of propagation of light between two transparent materials. Light is always slower in transparent materials than in a vacuum, where it travels at 300,000 km/s. In the air this slowdown is very slight, but in water its speed is about 225,000 km/s, and in a diamond it goes down to 125,000 km/s. When the direction of propagation of light is perpendicular to the surface between the two materials it passes through, it is not modified. But the light is all the more deviated as its initial direction of propagation deviates from the perpendicular to the surface. If we have our heads underwater and look at the edge of the pool, beings seem to be further away than they really are. Conversely, if we are in the air, and if we look at beings underwater, they appear to be closer than they actually are.
This is why sticks appear to be broken by the water surface. We see in this photograph that the brush is broken by the surface of the water and that it is enlarged by the curved surface of the glass, as if this surface were a magnifying glass. Light always obeys Fermat's principle: the path followed by light is always the one that takes the least time among all possible paths. A lifeguard is on a beach. If the person she has to save from drowning is in front of her, she chooses the shortest path, which is in a straight line, perpendicular to the line of the beach. If the future rescued person is not in front of her, she must not go in a straight line, because she runs faster on the beach than she swims in the water. She must therefore choose a broken line as her trajectory, first running on the beach to be almost in front of the future rescued person, then swimming. Light does the same when it passes from air into water. Intelligence is to choose the best among the possibilities, or at least a satisfactory one. Light always chooses the quickest path, because there is no time to lose. The slowing of light in water and other transparent materials is explained with the Maxwell and Lorentz equations. When light passes through a material, its charges begin to move and themselves become light-emitting sources. The superposition of the incident wave and the induced waves is the cause of the reduction in the speed of the resulting wave, and therefore of the refraction of light. Interference. One of the most astonishing consequences of the principle of superposition is that light plus light can equal darkness. The energy of the light we observe is proportional to \(\vec{E}^2\), the scalar square of the electric field it propagates. If two light sources produce equal and opposite electric fields, their superposition produces a zero field, without energy, therefore an absence of light, darkness. In Young's slit experiment, two slits allow light to pass through.
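The lifeguard's reasoning can be sketched numerically: minimising the travel time over the crossing point on the shoreline reproduces Snell's law of refraction, sin θ₁ / v₁ = sin θ₂ / v₂. All speeds and positions below are invented for the illustration:

```python
import math

# Least-time path from the sand (speed v1) to a swimmer in the water
# (speed v2), crossing the shoreline y = 0 at abscissa x.
v1, v2 = 7.0, 2.0            # run fast on sand, swim slowly
A = (0.0, 30.0)              # lifeguard: 30 m from the shoreline
B = (40.0, -10.0)            # swimmer: 10 m out, 40 m along the shore

def travel_time(x):
    run = math.hypot(x - A[0], A[1]) / v1
    swim = math.hypot(B[0] - x, B[1]) / v2
    return run + swim

# travel_time is convex in x, so a simple ternary search finds the minimum
lo, hi = 0.0, 40.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# angles measured from the perpendicular to the shoreline
sin1 = (x - A[0]) / math.hypot(x - A[0], A[1])
sin2 = (B[0] - x) / math.hypot(B[0] - x, B[1])
print(sin1 / v1 - sin2 / v2)  # ~0 at the least-time crossing point
```

The optimal crossing lies well past the straight-line intersection: the lifeguard runs extra distance on the fast medium to shorten the slow swim, exactly as light bends toward the normal when entering a slower medium.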
If we observe the light on a screen which receives it, we see an alternation of dark and bright fringes, but the dark parts are illuminated by the two slits like the bright parts: Young understood in 1803 that waves in opposite directions cancel each other out while they add up when they have the same direction: The polarization of light. Sunglasses sometimes have polarized lenses: Light is linearly polarized when it has a direction perpendicular to its direction of propagation. A polarizer is a filter that stops light polarized in one direction and lets it pass if it is polarized in the perpendicular direction. Circular polarization is a superposition of two linearly polarized waves: When light is circularly polarized, it has a direction of rotation around its direction of propagation. For quantum physics, the polarization of light is the spin of photons. Photons have spin means that they have rotational inertia, like spinning tops. Rotational inertia is what makes a body maintain the same axis and the same speed of rotation. It is what keeps the moving bicycles in balance. Stationary bicycles do not have this balance, because their wheels do not turn. Light from the Sun or from incandescent light-emitting materials is not polarized. But the light in the sky is polarized. The light obtained by reflection on a mirror, on water or on a glass can also be polarized. To see if the light is polarized, simply look at it through a polarizing glass that is rotated around an axis perpendicular to its surface: These two photographs were taken with a polarizer filter that was rotated 90° between the left and right images. When we put a material under stress, it generally behaves like a polarizer. 
This polarization effect reveals the stress: If we place a crystal between two crossed polarizers that we rotate, we can obtain very beautiful effects, because birefringent crystals behave like polarizers: Maxwell's equations show that the direction of linearly polarized light is the direction of the electric field formula_37. formula_37 and formula_38 are always perpendicular to the direction of propagation of an electromagnetic wave. Proof for a plane wave: the partial derivatives with respect to y and z are zero, since the field depends neither on y nor on z. According to Maxwell's fourth equation, the component formula_110 in the direction of propagation of the electric field formula_37 is such that formula_112. Since formula_110 cannot vary over time, it cannot propagate a wave. It is therefore zero for a propagating wave. The same argument holds for formula_114 from Maxwell's third equation. Let there be light. The Maxwell and Lorentz equations predict the existence of light and all its properties: its propagation, its colors, its standing waves, its reflection, why it forms images, its refraction, its interference and its polarization. They make it possible to study most of the properties of matter (except its radioactivity, which is of nuclear origin, and its gravity) and its interactions with light. The electromagnetic field has an autonomous existence. Once accelerated electrical charges produce light, it propagates on its own, and the charges that produced it can no longer stop it. By giving Maxwell's equations (or Coulomb's law and the relativistic geometry of space-time) God gave the laws which make that light can exist, that we can see it and that we can see the world thanks to it. God said “Let there be light” giving the laws of electromagnetism, the equations of Maxwell and Lorentz. The nabla operator. 
The gradient of a scalar field, the divergence and the rotational of a vector field are the three fundamental operators with which physicists do most of their calculations, particularly in electrodynamics and fluid dynamics. All three can be written with the nabla formula_115 operator: formula_116 formula_117 formula_118 formula_119 formula_120 formula_121 can therefore be written: formula_122 Proof of formula_122: formula_124 formula_125 formula_126 The calculation is similar for the other two components.
4,697
Kitchen Remodel/Cabinet installation trouble shooting. In this chapter, I will write about some problems that we expectedly or unexpectedly faced during our kitchen installation, and how we solved them. A too low stretch of ceiling. An expected cause of trouble that we had to cope with during our cabinet installation was the low ceiling above the refrigerator/oven row of cabinets and in the pantry space. It has a height (concrete floor to ceiling) of 83" (221 cm). The cabinets are, without legs, 80" (203 cm), and the legs require at least 3½" (9 cm). The European version of those legs would have saved us, since they require only a minimum of 2¾" (7 cm), but those are not available in the U.S. As I already mentioned in the previous chapter, we solved this problem by cutting the legs down to the length that we needed. For additional support, we also used brackets for island support and a great many screws that went into the walls, into the ceiling and from one cabinet into another. <br clear=all> A slanting stretch of ceiling. I designed the space that is shown in the first image below to accommodate a row of high cabinets, including our refrigerator, the oven and the microwave oven. Originally, I had planned to align those elements with the front edge of the dropped ceiling. Then we noticed that this dropped ceiling isn't level but slopes down to the right by half an inch. There was no way aligning it with the top edge of the cabinets. We fixed that problem by moving the entire cabinet block one inch forward into the kitchen space and covering its vertical edges with two strips of cover panel. The ceiling is still slanted, but the cabinets cover it now. A not wide enough space. Some of the distress that the installation of our kitchen cabinets gave as came from my incomplete knowledge of Ikea's 153° hinges. For the "extra", a pair of 18" high cabinets, we had designed a a special recess with walls at both sides. 
That recess was not too narrow to hold the cabinets – they went into their places without a problem –, but after we mounted the first door, we noticed that it wouldn't open wide enough to allow the drawers which lay behind this door to open. The solution was not the most elegant, but for our purposes it is absolutely sufficient: Instead of mounting those doors to the exterior sides of the two cabinets, we used the interior sides. Since these are wide opening hinges, the access to the cabinets' interior is virtually not impaired. The only downside is that the two doors cannot be opened at the same time. Another not wide enough space. It never rains but it pours. The most severe problem that we had to deal with, arose during the installation of our pantry row of cabinets. Those were supposed to fit between two walls: one that was already there before the remodel and another one we had newly built just for this purpose. The row includes 3 high cabinets of 30"+30"+18", which add up to 78". We directed our contractor to position the new wall accordingly and to also add a little extra margin for safety. We got a width of 78¼". Guess what, that was not enough. What we had not considered was: My first idea was to tear the half-inch-drywall down on the right-hand wall and to replace it with the thinnest drywall that was available. But even that would not have been enough. Since we were not inclined to modify "both" walls, just to reap the needed fraction of an inch, we decided not to use ¼" drywall, but "hardboard", which is only 1/8" thick. This did the trick, it works fine and isn't even noticeable. One possible alternative would have been to shave the cabinet to size. That we didn't want to do because it might have compromised the functionality of the cabinet's door.
905
Steam Deck. This is a guide and manual for Valve Steam Deck that is a gaming handheld as well as a personal computer. There is a wikibook title for History of video games/Platforms/Steam Deck On Windows 10: Setup Steam. If you don't use SteamOS 3 Arch Linux and have Windows 10; If the game does not correctly use Steam deck controller, you can install Steam and enable it through adding Windows games as non Steam game into Steam library , if there are still problems you could install Steamdeck Windows Controller Driver (SWICD) that should help there is a guide too steamcommunity [https://steaminput.wiki/en/home ] Losing battery while shutdown. 1.Shutdown your Steam Deck 2.Hold down the volume up (+) button and press the power button to boot into BIOS 3.Open BIOS 4.Open Setup Utility , Open Power 5.Select Battery Storage mode 6.Select Yes 7.Your device will boot up in battery storage mode
237
Polish/Instrumental case. Narzędnik (Instrumental). The instrumental case in Polish is used in the following situations: The instrumental case answers the questions "with who?" ("z kim?") and "with what?" ("z czym?"). Singular. The singular form of the instrumental case is formed by adding specific endings to the nominative form. Noun Declension. See Polish/Hard and soft consonants Plural. In the plural form, nouns in Polish are declined based on gender, either as virile ("masculine personal") or nonvirile ("masculine animate", "masculine inanimate", "feminine", and "neuter").
158
Italian/Grammar/Interrogatives. Interrogatives in Italian Grammar. Interrogatives, or "parole interrogative," are used to ask questions in Italian. They are essential tools for obtaining information and are typically placed at the beginning of a sentence. Here are the main interrogatives in Italian: 1. **Chi?** - Who? 2. **Che cosa? / Cosa? / Che?** - What? 3. **Quando?** - When? 4. **Dove?** - Where? 5. **Perché?** - Why? 6. **Come?** - How? 7. **Quanto/a/i/e?** - How much/many? Let's look at each one in more detail: - **Chi? (Who?)**: This interrogative is used to ask about people. For example, "Chi è quella persona?" (Who is that person?). - **Che cosa? / Cosa? / Che? (What?)**: These interrogatives are interchangeable and used to inquire about things or activities. For example, "Che cosa stai facendo?" (What are you doing?). - **Quando? (When?)**: This interrogative is used to ask about time. For example, "Quando arriva il treno?" (When does the train arrive?). - **Dove? (Where?)**: This interrogative is used to ask about location. For example, "Dove abiti?" (Where do you live?). - **Perché? (Why?)**: This interrogative is used to ask for explanations or reasons. For example, "Perché sei triste?" (Why are you sad?). - **Come? (How?)**: This interrogative is used to ask about the manner in which something is done. For example, "Come stai?" (How are you?). - **Quanto/a/i/e? (How much/many?)**: This interrogative agrees in gender and number with the noun it refers to. It's used to ask about quantity. For example, "Quanto zucchero vuoi nel caffè?" (How much sugar do you want in the coffee?). Remember, the use of intonation is crucial when asking questions in Italian. Even without these interrogatives, you can turn a statement into a question by raising your intonation at the end of the sentence.
534
Infrastructure Past, Present, and Future Casebook/Skyline Drive. This page is for a case study on the Shenandoah National Park scenic byway, Skyline Drive, created by Johnathan Selmer, Jay Shuey, and Guillermo Padilla. It is part of the GOVT 490-003 (Synthesis Seminar for Policy & Government) / CEIE 499-002 (Special Topics in Civil Engineering) class offered at George Mason University taught by Jonathon Gifford. Summary. The idea of Skyline Drive was first suggested in 1924. In a report from the Southern Appalachian National Park Commission to Secretary of the Interior Hubert Work recommending the establishment of a national park in this area, it was pointed out: Under the joint supervision of the Bureau of Public Roads and the National Park Service, construction of Skyline Drive began in 1931. By September 15, 1934, the first section of the Drive, 34 miles long, was opened for travel. This made available an extensive region of the Blue Ridge in which was located the vast central portion of the proposed Shenandoah National Park extending from Thornton Gap to Swift Run Gap. Within a year more than one-half million visitors were attracted to this portion of the park. Today, Skyline Drive has grown and now runs 105 miles north and south along the crest of the Blue Ridge Mountains and continues being the only public road through the Park – attracting over 1.2-million travelers annually. Institutional Arrangements - Oversight and Maintenance. Key Actors and Institutions involved with the development and maintenance of the Skyline Drive include: Narrative of the Case. In the late 19th century, widespread exploitation of natural resources, particularly in the Western regions, prompted growing concerns regarding wasteful practices and the need for conservation measures. President Theodore Roosevelt emerged as a leading advocate for conservation, making calls for federal oversight of resources and the protection of wilderness areas. 
Collaborating with influential figures such as John Muir, founder of the Sierra Club, advocated preservation of natural resources from use, while Gifford Pinchot, a forester, called instead for conservation, the proper use of natural resources. Together, environmentalist advocacy of different types led to the establishment of the National Park Service by Congress in 1916, and the preservation of areas including Yosemite and Yellowstone. In addition, the Roosevelt administration implemented significant policies, notably the Newlands Act of 1902 and the establishment of the National Conservation Commission in 1909. The proposal for a ridge road along the Blue Ridge mountains in Virginia was initially embraced as part of a new National Park plan in 1924. However, it sparked intense controversy within the conservation community. Benton MacKaye, a key figure in conservation, opposed the road, fearing it would disrupt the wilderness. On the other hand, Myron Avery, known for his leadership in trail construction, supported the road's inclusion in the Skyline Drive project. Their clash highlighted differing views on wilderness preservation versus accessibility. Despite MacKaye's objections, the road was built, deepening the divide between preservationists and those advocating for broader public access to nature. The conflict underscored the complexities involved in balancing conservation objectives with societal interests. While MacKaye emphasized preserving the untouched wilderness of the Appalachian region, Avery prioritized practical trail construction and public engagement. Their disagreement left a lasting impact on the history of conservation in the United States, serving as a significant chapter in the evolving narrative of wilderness preservation. Policy Issues. 
Environmental Impact & Wildlife Management: The way Shenandoah National Park has approached their environmental and wildlife management has evolved over time to address challenges from increased human activities, including as they pertain to Skyline Drive. A scenic byway stretching 105 miles through the park, Skyline Drive aimed to provide visitors with stunning views of the Blue Ridge Mountains from the comfort of their vehicles. However, this monumental undertaking altered the landscape and ecosystems and created long-lasting consequences for the park's natural environment. The construction phase of Skyline Drive involved extensive land clearing, grading, and paving, which resulted in the destruction and fragmentation of local habitats . Habitat fragmentation impedes the movement of wildlife populations, inhibits gene flow between isolated habitats, and increases the vulnerability of species to extinction. Forested areas were cleared to make way for the roadway meanwhile excavations and the construction of bridges and retaining walls altered the park's natural drainage and rate of soil erosion . Skyline Drive's ongoing use as a popular tourist attraction and recreational thoroughfare has continued to impact Shenandoah National Park's environment and wildlife. Influxes of traffic along the roadway introduced air and noise pollution, which disrupted wildlife behavior, and posed risks to pedestrian safety. Regular road maintenance, such as asphalt resurfacing and roadside vegetation management, continue to act as a catalyst of environmental degradation. Invasive species along road corridors pose a threat to native plant communities and exacerbates competition for resources. Despite these environmental challenges, legislation such as the Clean Air Act, enacted in 1970, have provided regulatory frameworks for environmental protection in Shenandoah National Park. 
The Endangered Species Act of 1973 required the protection of at-risk species like the Shenandoah Salamander . Recent technological advancements allow park officials to evaluate management effectiveness and monitor wildlife, habitat, and ecosystem health more closely. The SWAS-VTSSS (Stream Water and Sediment Chemistry, Virginia Tech School of Forestry and Wildlife Sciences) monitoring program, initiated in 1979, plays a crucial role in evaluating water quality and ecological conditions in mountain streams affected by Skyline Drive and other anthropogenic activities . The program contributes to evidence-based and adaptive management practices in Shenandoah National Park by collecting comprehensive data on stream water chemistry, discharge rates, and ecological responses. The National Park Conservation Association's (NPCA) "Polluted Parks Report" underscores the ongoing challenges posed by air pollution in Shenandoah National Park . Despite its designation as one of only 49 Class I air areas managed by the National Park Service, the park continues to experience significant air quality concerns stemming from external sources of pollution . Efforts to address air quality concerns involve sophisticated monitoring systems, regulatory compliance, and collaborative initiatives to reduce pollution levels and preserve the park's natural resources. Land Acquisition & Eminent Domain: The policy of land acquisition and eminent domain for the creation of Shenandoah National Park was a multifaceted and contentious process that unfolded over several years in the 1930s. Discussions about the park's creation began in 1924, but it wasn't until February 1, 1934, that the federal government under Arno Cammerer, director of the National Park Service, announced that the government would not accept land for the park from the state of Virginia until all residents had left the area. 
In 1928, the Virginia legislature passed a condemnation law which allowed the state to acquire land for park via eminent domain. However, the law faced opposition from landowners who felt undervalued by the state's appraisal process. By 1933, landowners owning about 20,000 acres of land had contested the appraised values, leading to appeal hearings and delays in the acquisition process. The blanket condemnation law also faced legal challenges, most notably in the case of Robert H. Via, who sued the state on constitutional grounds citing the Equal Protection Clause of the 14th Amendment. Although Via's appeal was ultimately rejected by the Supreme Court in November 1935, his legal battle slowed the land acquisition process. Furthermore, an estimated 268 families living in Shenandoah at the time had no legal claim to the land they had inhabited for generations. The total number of families affected by the removals and resettlement efforts exceeded 500. The "buy an acre" campaign was another significant facet of the park's creation. This campaign aimed to raise funds for land acquisition through public donations. Led by the Shenandoah National Park Association, park backers initiated a campaign aimed at persuading Virginians from around the state to contribute to the land fund. With a slogan advocating that Virginians "Buy an Acre" for $6.00, the fundraising drive raised nearly $1.2 million dollars. Approximately $1 million of funding came from state appropriations at the urging of Governor Byrd. Park enthusiasts also tried to secure donations from noted philanthropists. Carson had hoped to raise $2 million dollars from these notable figures, but only won a small percentage of that amount. 
With celebrity philanthropists largely absent from the list of supporters and with the onset of the Depression in 1930 sharply curtailing other fundraising efforts, park supporters had raised only slightly more than half of the estimated $4 million dollars needed to purchase the 321,000 acres. Consequently, Carson once again prevailed upon Congress to reduce the park's size. In 1932, Congress made its final acreage reduction, drastically reducing the minimum acreage needed for the park to be established to 160,000 acres, less than one-third the original congressional authorization mandated. The removals began in earnest after the federal government officially accepted title to 176,429.8 acres of land from Virginia on December 26, 1935. By early 1938, fewer than four years after Cammerer's removal order, between 500 and 600 families had permanently left their homes in the park. The removals were often met with resistance, resulting even with some families needing to be forcibly evicted by local law enforcement. Initiatives were undertaken to assist displaced families in resettling. One such initiative was the Federal Homestead Corporation (FHC), which initially aimed to establish homesteads for former residents. This initiative was stalled due to its extensive bureaucratic process and legal issues. The project was later revived under the Resettlement Administration (RA) in 1937. Efforts of the RA resulted in the construction of homesteads in locations across Page, Greene, Madison, Rappahannock, and Rockingham county and were estimated to cost $6,000 per homestead. Of the more than 500 families affected by removals, only 170 families qualified for and were placed in these homesteads. However, the homesteads included mortgages and monthly bills to which many of these discplaced families were unaccustomed to. Within two decades, none of the original mountain families remained within their resettlement homesteads. Key Lessons and Takeaways. 
In the broader context of national parks, large government projects such as Skyline Drive are an example of planners and policymakers utilizing the government's sole authority to acquire private property for public use through eminent domain. Eminent domain has always been and will remain a heated topic for debate within the United State’s legal system, with the pushback citing constitutional arguments for protections against the deprivation of life, liberty, and property, juxtaposed with the government’s authority to violate those said protections with reasonable cause and just compensation. Skyline Drive in Shenandoah National Park serves as a stark reminder of the delicate balance between human development and environmental preservation in national parks. The construction and use of this scenic byway demonstrates the lasting effects that human activities can have on natural landscapes. The National Park Service has committed itself to spreading awareness of related environmental issues and promoting more sustainable practices to preserve America’s most treasured landscapes for generations to come. Discussion Questions. 1. How do you think the construction of Skyline Drive reflects the broader tension between preserving wilderness areas and making them accessible to the public? 2. How do you view the ethical and legal implications of eminent domain in the context of conservation efforts and public infrastructure projects? 3. How might compensation initiatives for eminent domain take into account the cultural and historical connections of individuals who lack a legal claim to the land they have called home for generations? Should those individuals be compensated? 4. Scenic roads through national parks offer a chance to experience nature up close. However, they also become arenas for tension among pedestrians, cyclists, and motorists, all vying for use of these spaces. 
How can policymakers balance the enjoyment of these roads for different users while ensuring safety and preservation of the natural environment? 5. Considering the current political climate, do you believe a project like Skyline Drive could be undertaken today? Would a venture of this nature, balancing preservation (protection against use) and conservation (proper use of natural resources), even be considered?
2,969
Chess Opening Theory/1. g4/1...e5/2. f3/2...Qh4. 2...Qh4#. White has played a very poor move and allowed Black the quickest possible checkmate in the opening game of chess. Mate in two moves. The mate almost never occurs in practice but is commonly known among chess players because it is the fastest possible.
89
Theory of Formal Languages, Automata, and Computation/Preface. Why study the theory of automata, languages, and computation (ALC)? It is not an area of high-profile research, and it’s not regarded as an applied area either, though some of the material is bread and butter, nuts and bolts computing – so pervasive that its invisible to many. That is, it’s a foundational course. Happily, I think, we study ALC because its engaging and fun for those interested in computation. In its abstractions you will find exercises in pure computational theory and computational thinking. Like a humanities class that you take as a computing student, if you are strongly inclined only towards the vocational thrusts of computing, and you still take a course on this material, I suggest that you take on faith that some-to-much of what you learn will come in handy later. I say “take on faith” because the links between theory and application probably won’t be as overt in this text as you would see in an algorithms text for example. There are, however, conceptual links between ALC and algorithms of course, programming languages, and artificial intelligence, which this book addresses. I took “this class” in 1979 from a PhD student, Dov Harel, and I thought it was beautiful – I thought that the Pumping Lemma was a metaphor for life. The field hasn’t changed much (foundations don’t change much, or perhaps only rarely), though the onslaught of AI on the computational fields, and on other publics, has led to new analyses of how deep learning frameworks, which are dominant in AI and machine learning right now, fit into classical computational theory. While the book touches upon the recent work on AI, as well as classical material from AI, it is largely conventional in its treatment of the theory of formal languages, grammars, automata, and computation. 
Maybe you’ll have a reaction to ALC something like mine (as well as other reactions perhaps) – it’s adoration of computing as a field that offers greater wisdom, not one that simply manifests as mastery of the latest gadgetry, but stems largely from an appreciation of computing’s earliest core abstractions. The goal of the book is to "streamline" treatment of ALC to what I generally cover in a one semester course, with an organization that I prefer, with added treatments of applications too, including artificial intelligence (AI). My knowledge of this area resulted from undergraduate learning and from teaching undergraduates in the area, using primarily the texts of Hopcroft and Ullman, though I bring personal insights to the treatment, as well as treatment of the relationship of ALC to AI and machine learning. The exercises are often taken from other sources, and are cited as such. I add other exercises as well. My original exercises uniquely include some that require students to interact with AIs to discuss material, and to vet these interactions. Instructors like me, who are not researchers in the area, but love it nonetheless, might recognize my interest in talking to AIs about this material -- who else would I be able to talk to about it in the normal course of my day!?! The interactions with large language models of AI are an opportunity for students to discuss ALC -- its methods, concepts, and philosophies -- in ways that they might never have done before. These AIs change the way that humans can now interact with computers, and they have the potential to rob us of something that is (or was) one of the unique skill sets of computer scientists -- the ability to communicate with the world's stupidest entity capable of response! I say "rob us" because the computers are seemingly less stupid by the day. Computer scientists are, or should be, among the world's best communicators in my opinion. 
The mode of communication, that is programming languages, are no longer necessarily needed, but at least in the foreseeable future some of the principles of good communication, such as the danger of making assumptions about what your conversation partner knows or believes, are still germane. The exercises on conversations with AIs on ALC material ask students to analyze, debug, expand, specialize, and otherwise interrogate their discussions.
913
Theory of Formal Languages, Automata, and Computation/Authors. Douglas H. Fisher, with original and editorial contributions from Kyle Moore and Jesse Roberts.
34
Theory of Formal Languages, Automata, and Computation/Introduction. This textbook is fundamentally on finite representations of infinite languages, where the representations are amenable to computational analysis and characterization. That's it in a nutshell. The rest of this book fleshes out this nutshell in considerable detail, with pointers to still more detailed treatments. Languages. A formal language is a set of strings over some alphabet. A language, call it L, can be finite or infinite. The alphabet, which is often denoted as Σ, is a finite set of primitive symbols, such as Σ = {a,…,z, A,….,Z} or Σ = {0,….,9}. Each string is a sequence (i.e., ordered) of symbols from the alphabet. Just as sets generally can be represented intensionally or extensionally, languages can too. An extensional representation is an explicit listing of the strings in the language, and is only relevant to the representation of finite languages. For example, the finite language of binary representations of natural numbers less than 4 is L = {0, 1, 10, 11} or alternatively it could be {00, 01, 10, 11}, over the alphabet Σ = {0, 1}. In contrast, intensional representations of languages are finite descriptions of the set of strings in the language. These finite-length descriptions are used to describe infinite, as well as finite languages. The natural language description “binary representations of natural numbers less than 4” is an intensional representation of a language, which can also be written as L = {w | w is a binary representation of an integer less than 4}. The phrase “binary representation” suggests the relevant alphabet of {0, 1}. There are other intensional representations of finite sets above. What are they? While I used Σ = {a,…,z, A,….,Z} or Σ = {0,….,9} as representing alphabets, I could also say L = {0,….,9}, for example, or L = “the single digits in common arithmetic, base 10”. 
Intensional descriptions are also absolutely needed to finitely represent infinite languages. Consider L = {w | w is a binary representation of an integer greater than 3}, or perhaps “L is the set of binary representations of natural numbers greater than 3”, with Σ = {0, 1}. An interesting observation about intensional descriptions is that there are implicit (typically unstated) inferences that are required to interpret them. There are inferences required of extensional representations too, but I would say that the inferences in the latter case are much closer to the mechanical level that are unambiguous. Sometimes the intensional description, upon further thought, is insufficient for unambiguous understanding. Fortunately, in the intensional description of the digits, most of us will know how to interpret the ‘…’ because of our prior training, but others will not know how to interpret that. Even for the best trained among us, the description of the binary representations of natural numbers greater than 3 may, upon reflection, seem incomplete. Perhaps we should add a phrase, as in L = {w | w is a binary representation of an integer greater than 3, where w is represented by the least number of binary digits necessary} = {100, 101, 110, 111, 1000, 1001, …}. Why? Even here we might want to say w has no leading 0s. To my knowledge, no course on formal languages will seek to formalize the nature of implicit inferences in the general case, though this would be required to fully understand the complexity of language representation and comprehension, and is undoubtedly of importance to pure mathematics. We will touch on some of these issues, particularly when we talk about the relationship between AI and formal languages. As already mentioned, an intensional description (+ the inference mechanism, which may be a computer program) is a finite representation of what is often an infinite language. 
In fact, much of this course is about two classes of finite representations of languages – grammars and automata (or machines) – that are both amenable to mechanical, unambiguous, automated inference. Roughly speaking, a grammar specifies how to generate strings in a language, and an automaton specifies how to recognize strings in a language. This distinction between generators and recognizers of languages is not as sharp as suggested here, but it’s a good starting point. Proof. A traditional goal in automata and formal language courses is to practice skills of deductive proof, which may influence and benefit thought for a lifetime. The word ‘proof’ is often misused, as when it is used to state that there is no proof of this or that natural or social phenomenon (e.g., no proof that climate change is human made). Such areas are not appropriate for deductive proof, but as we will see when discussing AI, the logical mechanisms for carrying out deductive reasoning can be augmented with probabilities and utilities, and be reinterpreted, to become the skeletal structure for other important forms of reasoning: "possibilistic reasoning" (i.e., what is possible?), "probabilistic reasoning" (i.e., what is probable?), "evidential reasoning" (i.e., what is probable given evidence/data/observations?), and "rational reasoning" (i.e., what are the expected consequences of actions, which includes probabilities but also utilities/costs of outcomes?). Returning now to deductive proof, Hopcroft, Motwani, and Ullman (2007) state a formal characterization of proof, but they continue with a more pragmatic view. Contrast this with the breakout box on page 12 of the same text, which acknowledges that "persuasion" of an audience member is very relevant, though this will depend on what knowledge that audience member has. It goes on to say, in effect, that proof should persuade an audience (one that will include knowledgeable members too) and not just arbitrary individuals. 
Another point is that proof (which I will characterize as "deductive" and Polya calls "demonstrative" and "finished", and which includes all forms of proof that we discuss) is the result of search for a proof through intelligent, knowledgeable guessing -- that is, mathematics "in the making". Polya makes clear that the "guessing" behind the search for a proof should not be arbitrary but should follow principles of "plausible" reasoning. Textbooks often present "finished" proofs, but embedded in the discussion there will sometimes be insights into the search and rules of thumb for easing and directing the search for the finished product. As an example, consider Goldbach’s Conjecture – that every even number greater than 4 is the sum of two prime numbers. But how do I come up with this statement to begin with? How did Goldbach come up with Goldbach’s conjecture? I do so through what Polya also includes as part of plausible reasoning – I notice that every even number that I can think of is the sum of two primes. This is an empirical exercise of identifying a pattern based on data. After identifying a plausible pattern, I can attempt a proof of the statement (e.g., if N is even and greater than 4, then there exist (∃) N-k and k that are both prime). No one, by the way, has ever proven Goldbach’s Conjecture. It is so easily stated but as yet unproved as true or false. That’s one of the attractions of number theory – easily comprehended and difficult to prove hypotheses, at least in many cases. A classic program of early AI and machine learning is known as AM, standing for "Automated Mathematician", by Douglas Lenat. Rather than proving theorems, which was an early focus of AI research predating AM, Lenat's system searches for empirically supported patterns in number theory which then are presented by the system as candidate theorems. AM illustrates the tasks of mathematics in the making, which are integral to computational theory as well as mathematics. 
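The empirical, pattern-noticing exercise described above can be mechanized. The following sketch (function names are mine) checks the conjectured pattern over a finite sample of even numbers; this is plausible-reasoning evidence, not a proof:

```python
def is_prime(n):
    # Trial division up to the square root; an always-halting recognizer.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n):
    # Does there exist a k such that k and n - k are both prime?
    return any(is_prime(k) and is_prime(n - k) for k in range(2, n // 2 + 1))

# Empirical check of Goldbach's pattern for even numbers greater than 4, up to 200.
assert all(is_sum_of_two_primes(n) for n in range(6, 200, 2))
```

No finite sample settles the conjecture, of course; the sample only makes the pattern plausible enough to attempt a proof.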
Once a candidate theorem is identified, a search for a proof of it (or not) can begin. Generally, finding a valid proof is a much more difficult problem than validating a given proof. An interesting hybrid problem of finding and validating arises when grading a proof, where ostensibly the task is to validate or invalidate the "proof" submitted by a researcher, students included, but this will sometimes require a focused search that evaluates "how far" the submission is from a valid proof (e.g., for the purpose of assigning a grade). This grading of a proof attempt is a good example of an AI task, rather than ALC territory "per se". Proof by Induction. You may have covered proof by induction in a course on discrete structures, and if so, our treatment here is review. If we want to prove a statement "St" of all natural numbers, then we explicitly show base cases -- that "St"(0) is true and perhaps other base cases -- and then we show the truth of an inductive step, that ‘if "St"("k") is true then "St"("k+1") is true’ for all "k" ≥ 0, or beyond whatever other base cases were shown. The inductive step suggests the importance of proofs by induction in formal languages, where strings have lengths, and the truth of statements about strings of length "k+1" is demonstrated by building on the assumption of truth for strings of length "k" (or lengths less than and up to "k"). Figures InductionProof1, InductionProof2, and InductionProof3 give sample proofs, two arithmetic and the third involving strings. The deductive validity of proof by induction rests on the axioms of number theory -- one in particular. But to review, the (Peano) axioms of number theory are as follows. 1. 0 is a natural number. 2. For every natural number x, x = x. That is, equality is reflexive. 3. For all natural numbers x and y, if x = y, then y = x. That is, equality is symmetric. 4. For all natural numbers x, y and z, if x = y and y = z, then x = z. That is, equality is transitive. 5. 
For all a and b, if b is a natural number and a = b, then a is also a natural number. That is, the natural numbers are closed under equality. 6. For every natural number n, the successor of n, n+1, is a natural number. That is, the natural numbers are closed under the successor function. 7. For all natural numbers "m" and "n", "m" = "n" if and only if the successor of "m" is the successor of "n", "m+1" = "n+1". 8. For every natural number "n", "n+1" = 0 is false. That is, there is no natural number whose successor is 0. 9. If K is a set such that: 0 is in K, and for every natural number n, n being in K implies that n+1 is in K, then K contains every natural number. It is the 9th axiom that justifies proof by induction. It says precisely what we have been illustrating above, which is that if a statement is true of n (i.e., "St"("n") is true; "n" is in the set of natural numbers for which "St" is true), and "St"("n") ⇒ "St"("n+1"), then the statement, "St", is true of every natural number. The 9th axiom, instantiated for particular statements, "St", is itself a finite representation of the countably infinite set of natural numbers. Proof by Contradiction. There are other forms of proof too, like proof by contradiction. Suppose you want to prove proposition "p"; then the equivalent statement ¬"p" ⇒ "false" (or simply that ¬"p" is false) may be more amenable to proof. For example, prove “If x is greater than 2 (p1) and x is prime (p2), then x is odd (q)” by contradiction. 
Abbreviate this as proving (p1 ⋀ p2) ⇒ q by contradiction, so negate the statement, resulting in ¬((p1 ⋀ p2) ⇒ q), which is equivalent to (equivalence represented as ≡)

¬((p1 ⋀ p2) ⇒ q)
≡ ¬(¬(p1 ⋀ p2) ⋁ q)
≡ ¬((¬p1 ⋁ ¬p2) ⋁ q)
≡ ¬(¬p1 ⋁ ¬p2) ⋀ ¬q
≡ (¬¬p1 ⋀ ¬¬p2) ⋀ ¬q
≡ (p1 ⋀ p2) ⋀ ¬q
≡ p1 ⋀ p2 ⋀ ¬q      // ¬(x is odd) ≡ (x is even)

Does (x is greater than 2) and (x is prime) and (x is even) create a contradiction? If x is even then x = 2y, for some y > 1, so x is composite by definition, contradicting x is prime (p2). Here is another example of proof by contradiction. This example is due to Hopcroft, Motwani, and Ullman (2007). Let S be a finite subset of an infinite set U. Let T be the complement of S with respect to U. Prove by contradiction that T is infinite. Note that |U| and |S| represent the cardinality, or size, of sets U and S. Prove

“IF S ⊂ U (p1)
and there is no integer n s.t. |U| = n (p2)
and there is an integer m s.t. |S| = m (p3)
and S ⋃ T = U (p4)
and S ∩ T = {} (p5)
THEN there is no integer p s.t. |T| = p (q)”

by contradiction. Similar to the prior example, negate the entire expression and simplify, getting p1 ⋀ p2 ⋀ p3 ⋀ p4 ⋀ p5 ⋀ ¬q.

¬q ≡ there is an integer p s.t. |T| = p
p4 and p5 ⇒ |U| = |S| + |T| = m + p, contradicting p2 (that there is no integer n s.t. |U| = n)

Diagonalization. Special cases of proof by contradiction include proof by counterexample and proof by diagonalization. This latter strategy is used to prove exclusion of items from an infinite set of like items. 
In diagonalization one defines a matrix in which the rows, of which there may be a countably infinite number, represent members of a set. Each row is defined by the variables given by each of the columns of the matrix. One defines the matrix and uses the diagonal of the matrix to construct a hypothetical candidate row, but it is a candidate row that cannot belong to the set of rows in the matrix by virtue of the way the candidate row was constructed. Cantor’s proof that the real numbers are not countably infinite introduced the diagonal argument, and assumes a matrix where rows correspond to the (countably infinite) natural numbers (i.e., 0, 1, 2, 3, …) and the columns correspond to digits that are to the right of the decimal point, as in 0.x1x2x3… (e.g., 0.3271…). We can then take the diagonal of this matrix and create a candidate row by taking each digit xj,j in the diagonal and changing it, say to (xj,j+1) mod 10. This candidate sequence cannot match any row, by definition, since at least one digit in it, the one that was changed along the diagonal, is different from the corresponding digit in any row. So, there are syntactically valid sequences of digits to the right of the decimal point that cannot correspond to any row in the countably infinite set of rows. This is illustrated in Figure Diagonalization. When this strategy was introduced in the late 19th century there was some controversy around it because of the novelty of creating an infinite-length counterexample, and Wittgenstein was reported to have called the diagonal argument “hocus pocus”, but it has now been long accepted as mathematically valid. We will use it later in proving that some languages are not members of certain language classes. Proof with Construction. The most common form of "proof" or demonstration that you will see in this textbook involves construction. 
For example, if we want to show that two descriptions, X and Y, define the same language, we show that we can construct/translate Y from X: X ⇒ Y, and vice versa, Y ⇒ X. But we can't construct just any old thing and expect that to be a sufficient demonstration of equivalence. You have to be convinced that the translation or construction from one representation to the other is not buggy and is otherwise valid. If we want to be formally rigorous we might therefore follow construction with a proof, say by induction or contradiction, of the construction's validity. We will typically only allude to or sketch the proof that the construction is valid, if that, spending most of our time explaining the construction and its validity. Automated Proof. In AI, automated proof has a rich history, and we take time to talk about AI proof as a way of demonstrating important principles of thinking. An important insight that is rarely taught in lecture is that to prove something requires that one search for a valid proof. When was the last time that you saw a lecturer prove something live and previously unseen by the lecturer? This search can take a long time and it may not be successful at all. Rather, the search for a proof (or the search for a correct computer program for a given specification) typically happens beforehand, perhaps from scratch or by looking up the results of someone else’s search. Demonstrating the validity of an existing proof, which is what is typically done in lecture, is generally much easier than finding a valid proof through search! This distinction between validating and finding will relate to our study of "P" and "NP" problems, respectively. You may have seen P and NP elsewhere, perhaps in an algorithms course, but if not we address them in Properties of Language Classes. Our talk of AI proof will make explicit the importance of search, or what Polya includes as part of plausible reasoning, for theory "in the making". 
If you know a statement that you want to prove, then it often makes sense to reason backwards from that statement to a known set of axioms by inverting deductively valid steps (so that when read forward the steps are valid). "Backwards reasoning" is an important strategy in AI and in thinking generally. An early, if not the first, AI paper on this as a thought strategy was Shachter and Heckerman (1987), but a search on Google for backwards reasoning or the like turns up numerous business articles and talks in the last decade. I am reminded of an old adage in motorcycle circles -- "when you are making a turn look to where you want to be", which I am sure is a metaphor for much that you will read in business and elsewhere. A more technical example is that if I am planning a route to a particular destination, I might start with the destination on a road map and work backwards along the roads leading to it, selecting those that are in the direction of my current location. As a matter of interest, automated route planners might work in both directions, from destination to start location, and vice versa, in search of intersections in the search "frontiers". Automated proof (aka theorem proving) is grounded in formal logic. You saw propositions (e.g., 'p's and 'q's), as used in propositional logic, under Proof by Contradiction above. Two basic operations of inference in propositional logic are "modus ponens" and "resolution". The modus ponens inference rule says that if you know 'p' (i.e., proposition p is true) and you know 'p ⇒ q' (i.e., p implies q) then you can deductively conclude proposition 'q' (i.e., q is true). You now know both p and q, i.e., (p ⋀ q). Modus ponens is the best known deductive inference rule. The resolution rule relies on the equivalence between implication and disjunction, (p ⇒ q) ≡ (¬p ⋁ q). If you know 'p' and you know '(¬p ⋁ q)' then you know q of course. 
It's as if the p and the ¬p cancel each other, and if you think of it logically you can see why: p ⋀ (¬p ⋁ q) ≡ (p ⋀ ¬p) ⋁ (p ⋀ q) ≡ false ⋁ (p ⋀ q), which is again (p ⋀ q). Modus ponens and resolution are equivalent rules of inference at this simple level, but they offer different perspectives on deductive reasoning, and we will see them again when discussing computational theory, and again in the last chapter on applications, where we will also talk about automated theorem proving in both propositional and first-order logic. Problems, Procedures, and Algorithms. Before getting to the various types of grammars and automata to finitely represent languages, let's briefly preview some theory of computation with concepts that we already know – procedures and algorithms. A "procedure" is a finite sequence of steps that can be carried out automatically, that is, by a computer. An "algorithm" is a procedure that is guaranteed to halt on all inputs. Decidability and Undecidability. Procedures, be they algorithms or not, can be written to recognize languages. The question of whether a string "w" is a member of a language "L" is a yes/no question, and a procedure for answering the question is a finite representation of "L" if and only if (a) the procedure says 'yes' for every "w" in "L" (and halts), and (b) the procedure never says 'yes' for any "w" that is not in "L". If in addition the procedure says 'no' for every "w" not in "L" (and halts) then the procedure is an algorithm. Notice that these conditions collectively allow for a procedure that is a finite representation of "L" but that is not an algorithm. Such a non-algorithm procedure says 'yes' and halts if and only if "w" is in "L", but there exist "w" that are not in "L" for which the procedure will not answer 'no' and will not halt. 
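Such a non-algorithm procedure can be illustrated concretely. The sketch below is my own example, using the perfect squares purely for illustration: it halts with 'yes' on members but, as deliberately written, loops forever on non-members, because it never exploits the fact that n*n eventually exceeds w.

```python
def accepts_square(w):
    """Semi-decision sketch: halts and returns True iff w is a perfect square.
    On non-members it loops forever -- it never answers 'no'."""
    n = 0
    while True:
        if n * n == w:
            return True   # 'yes' cases always halt
        n += 1            # blind enumeration; no 'no' answer is ever given
```

An algorithm for the same language would simply add a test such as `n * n > w` to halt with 'no'; the point is that conditions (a) and (b) above are met even without it.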
If for language "L" there is no algorithm for correctly answering the yes/no question of membership in the language, then membership in "L" is said to be "undecidable". If an algorithm does exist for correctly answering the question of membership in "L" for all possible input strings, members and non-members, then membership in "L" is "decidable". Presumably most readers can write a procedure that recognizes whether an input integer is a "prime number" or not, outputting yes or no, respectively. If written correctly the procedure would certainly halt in all cases, so it's an algorithm and it recognizes the language of prime numbers. An algorithm can also be written to determine whether an integer is a "perfect number" or not. A perfect number is a number that is the sum of all its divisors (excepting itself, but including 1). So, membership in the language of prime numbers is decidable, as is membership in the language of perfect numbers. Decision Problems. Formally, "decision" "problems" refer to yes/no questions. The question of membership in a language is one type of decision problem, for which we have given examples of testing for primality and testing for "perfection". Let's consider two other decision problems. You can write an algorithm for answering whether there is a prime number that is greater than a number that you supply as input. Since the prime numbers are known to be infinite, it's a simple algorithm indeed -- just answer 'yes' regardless of input! But for the sake of illustration, assume that in addition to yes, you wanted to give the smallest prime that is greater than the number that is passed as input. You can write an algorithm for doing this that could use the algorithm for membership testing for primes as a subroutine. 
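These membership algorithms, and a next-prime algorithm that uses one of them as a subroutine, can be sketched as follows (the names and the trial-division approach are mine, for illustration):

```python
def is_prime(n):
    """Membership algorithm for the language of primes: always halts."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_perfect(n):
    """Membership algorithm for perfect numbers: n equals the sum of its
    divisors excepting itself but including 1."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def next_prime(n):
    """Smallest prime greater than n; halts because the primes are infinite."""
    k = n + 1
    while not is_prime(k):
        k += 1
    return k
```

Note that `next_prime` is an algorithm only by virtue of knowledge (the infinitude of the primes) that lives outside the code, a point taken up below.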
You can also write a procedure for answering the question of whether there is a perfect number greater than a number that is passed as input, but it is not known whether the perfect numbers are infinite, and so it's not known whether this procedure is an algorithm or not – it's not known whether it will halt on all inputs. In the case of an input that is greater than the largest currently known perfect number, the procedure will halt with a yes if a larger perfect number is eventually found, but again, the procedure will not halt if there is no larger perfect number. This decision problem of next perfect number is not known to be decidable or undecidable, but as in all examples of undecidable decision problems thus far, the 'undecidability aspect' of a question lies in the handling of 'no' cases. Notice that the knowledge that the primes are infinite and the uncertainty on whether the perfect numbers are (or are not) infinite lies outside the procedures for next-prime and next-perfect, respectively. We can write the procedures for each next-type problem in very much the same way, and the fact that one is an algorithm and one is not known to be an algorithm is not revealed in the procedures themselves. As other examples of decision problems, let's consider inspirations from Goldbach's conjecture. Consider the language of even numbers greater than 4. Call it "L"Even. Consider the language, "L"EvenSum, of even numbers greater than 4 that are the sum of two primes. Now, consider the language, "L"EvenNotSum, of even numbers greater than 4 that are "not" the sum of two primes. One decision problem is whether "L"EvenSum = "L"Even. A second decision problem is whether "L"EvenNotSum = ∅, i.e., the empty language. Intuitively, these two questions seem similar, and perhaps equivalent. It's left as an exercise to show that the two questions are indeed equivalent. 
Given that Goldbach's Conjecture is still a conjecture, it is not known whether "L"EvenSum = "L"Even and "L"EvenNotSum = ∅ are decidable or undecidable. Not coincidentally, it is not apparent how to write a procedure for answering the questions! Such a procedure would constitute the main part in a proof with construction on the decidability or undecidability of these decision problems. "L"EvenSum = "L"Even is just one example of a decision problem that tests the equality of two languages. Likewise, "L"EvenNotSum = ∅ is just one example of a decision problem testing for the emptiness of a language. In the chapter on Properties and Language Classes we will study decision problems in the context of classes (aka sets) of languages rather than specific languages only. For example, is the question of "L" = ∅ decidable or undecidable for all languages "L" in a given class of languages? Interestingly, decision problems themselves define languages -- languages of "languages"! Let "L"meta∅ be the set (aka language) of recognition procedures for languages that are empty. Convince yourself that this makes sense. A recognition procedure for a language is after all a finite sequence of steps, which is a finite string in some programming language. Is "L"meta∅ = ∅? Is membership in "L"meta∅ decidable? Does a recognition procedure for "L"meta∅ even exist that can correctly answer in all of the yes cases and no cases? Languages that cannot be Finitely Represented. If no recognition procedure for "L"meta∅ exists (i.e., no procedure exists that can recognize all, and only, members of "L"meta∅), then recognition for "L"meta∅ would be undecidable in an even stronger sense than other undecidable decision problems that we've discussed thus far, in which the 'undecidability aspect' of the problem rested with the 'no' cases only. 
Intuitively, examples of languages that cannot be finitely represented at all will be those for which there is no basis by which a recognizer can be constructed. A language in which the members are selected randomly would be an intuitive illustrative example: LR = {w | w in Σ* and random(w) == true} could be formed by enumerating the strings over alphabet Σ in some systematic order and randomly deciding in each case whether a string, "w", is in the language or not. The reason that a recognizer can't be built for this language assumes that random("w") is applied in the recognizer with no memory of its application in the creation of the language. And we can't simply remember the members, since there are an infinite number. In contrast, if random("w") were a pseudorandom process then such procedures are repeatable and we could implement a recognizer in that case, but if we have a truly random process then no recognizer can be built to say yes correctly in all cases. Random languages, as above, are posed to give you an example of languages that are not finitely representable that might be less mind numbing than "L"meta∅. We will return to these questions when we talk more on the theory of computation. Generating Languages. Recognition algorithms like those for prime and perfect numbers can be used as subroutines for generating languages as well. The language of prime numbers can be generated by initializing the set of "strings" to {2}, and embedding the recognition procedure in a loop that iterates over increasingly large odd integers, starting with 3, testing each for primeness, and adding only those numbers that are found to be prime. Note that because the primes are known to be infinite this generation procedure won’t terminate, so it's not an algorithm, but of course it's not a yes/no question either. 
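The generation procedure just described can be sketched directly; this rendering (my own) uses a Python generator so the unterminating process can be paused and resumed:

```python
from itertools import islice

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def generate_primes():
    """Generate the language of primes: initialize with 2, then embed the
    recognizer in a loop over increasingly large odd integers."""
    yield 2
    n = 3
    while True:
        if is_prime(n):
            yield n
        n += 2

# Pause the non-terminating process after six strings and examine the output.
assert list(islice(generate_primes(), 6)) == [2, 3, 5, 7, 11, 13]
```

Pausing and resuming in this way is exactly the "anytime" usage discussed below.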
We could create a non-equivalent algorithm from the generator by using a counter that artificially stops the iterative process after a specified number of iterations. This would not be a generator of the infinite language any more, but only a generator of the strings in the language up to a prescribed length, which would be a finite language. We can use the same strategy of systematically generating larger numbers, not odd only in this case, and using the perfect number recognition algorithm on each. This would be a generator of the perfect numbers. This too would run forever unless it was constrained by a maximum count on iterations. A procedure that is designed to "run forever" is called an "anytime algorithm" because at any time it can be paused and the output to that point can be examined before continuing to play the algorithm forward. Anyone who has experienced a system crash knows that an anytime algorithm is best regarded as a theoretical construct, though even in the case of a crash, computer systems generally save their state and can be restarted from that saved state, so an anytime algorithm could proceed indefinitely, if not forever. There are ways to convert a generation procedure for a language to one that generates one string, "w", of a language, "L", on each run of the algorithm. An exercise asks you to sketch an approach to do so, which will guarantee that each string in the language will be generated with non-zero probability. Exercises, Projects, and Discussions. Induction Exercise 1: Prove the equality given in Figure InductionExercise1 by induction. Induction Exercise 2: (Due to Hopcroft and Ullman (1979)). Briefly explain what is wrong with the following inductive “proof” that all elements in any set must be identical. The basis is that for sets with one element the statement is trivially true. Assume the statement is true for sets with "n-1" elements, and consider a set S with "n" elements. Let "a" be an element of S. 
Let S = S1 ⋃ S2, where S1 and S2 each have "n-1" elements, and each contains "a". By the inductive hypothesis (assumption) all elements in S1 are identical to "a" and all elements in S2 are identical to "a". Thus all elements in S are identical to "a". Induction Exercise 3: Prove by induction that the number of leaves in a non-empty "full" binary tree is exactly one greater than the number of internal nodes. Proof Discussion 1: Have a discussion with an AI large language model on the history and philosophy of "proof", potentially including the role of plausible reasoning, infinity, and/or other issues that you are particularly interested in. Include your prompts and the responses, and/or ask the LLM to summarize the discussion in X words or less (e.g., X = 500). Decision Problem Exercise 1: Show that "L"EvenSum = "L"Even and "L"EvenNotSum = ∅ are equivalent decision problems. Decision Problems Exercise 2: There are ways to convert a generation procedure for a language to one that generates one string, "w", of a language, "L", on each run of the algorithm. Sketch an approach to do so, which will guarantee that each string in the language will be generated with non-zero probability. Your "one-shot" version of generate can use the recognition procedure for "L" as a subroutine. Proof Project 1: Learn about AI proof assistants and use these throughout the semester to address problems that you are assigned, and unassigned. Report the results and experiences in a proof portfolio. https://www.nytimes.com/2023/07/02/science/ai-mathematics-machine-learning.html
Theory of Formal Languages, Automata, and Computation/Automata and the Chomsky Hierarchy. Automata (or Machines) specify how the strings in a language can be recognized. Automata are another form of finite representations of formal, typically infinite, languages. We describe four broad categories of automata and corresponding categories of languages that the automata categories represent. The close correspondence between generators and recognizers should not surprise us, since we have already seen, in the form of programming language parsers, how a generator (grammar) can be adapted to a recognizer. But interestingly, and I think reassuringly, even though the developments of automata and grammars were largely independent, the language classes defined by the major types of automata correspond exactly to the language classes of the Chomsky Hierarchy as defined by grammar classes! In contrast to our treatment of grammars, which proceeded from the broadest class (Type 0 or unrestricted) to the most constrained class (Type 3 or regular), when presenting classes of automata for language classes we work in the opposite direction, from the simplest and most constrained automata (Type 3, regular), known as "finite automata" (FAs), to the computationally most powerful theoretical automata, which are "Turing machines" for Type 0, unrestricted languages. The most important reason for this change in direction is that FAs can be regarded as standalone computational devices, but they are also integral to all the automata types that we will be discussing. Finite Automata: Recognizers of Regular Languages. As we will see soon, finite state automata or simply finite automata are recognizers of regular languages. 
A finite automaton (FA), M, over an alphabet Σ, is a system ("Q", Σ, δ, q0, "A"), where "Q" is a finite, nonempty set of states; Σ is a finite input alphabet; δ is a mapping of "Q" × Σ into "Q" (that is, δ is a set of transitions between states on particular Σ symbols); q0 is in "Q" and is the start state of M; and "A" ⊆ "Q" and is the set of accepting states. FAs are nicely summarized visually as state transition diagrams (see Figure FAexample1), where states ("Q") are shown as circles, transitions (δ) from one state to another state are labeled by an alphabet (Σ) symbol, an unlabelled arc that emanates from outside the diagram indicates the start state (q0), and concentric circles identify the accepting states. In other sources you may see "final" states used instead of or in addition to "accepting" states. This book uses "accepting", however, rather than "final", since a so-called "final" state need not be final at all -- transitions from "final" states to other states are possible and common. To determine whether a string, "w", of symbols from Σ is in the language recognized by an FA, M, the string is processed from left to right, one symbol at a time. Before processing of the input string begins, the FA is in its start state. As each symbol in "w" is read, an appropriate transition is taken from the current state to the next state indicated by a transition. In addition to a state transition diagram, the transition function δ is often represented as a list of transitions or as a table of transitions. Both are shown in Figure FAexample1, which represents/accepts the strings over 0s and 1s that start with '00'. The language accepted by an FA is the set of strings that place the FA in an accepting state when the string has been completely processed (i.e., read). 
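As a concrete rendering of this definition, here is a sketch of a DFA for the example language (strings over 0s and 1s that start with '00'); the state names are illustrative, not necessarily those of Figure FAexample1:

```python
# M = (Q, Sigma, delta, q0, A) for strings over {0, 1} that start with '00'.
Q = {"q0", "q1", "q2", "trap"}
SIGMA = {"0", "1"}
DELTA = {                        # the transition mapping delta: Q x Sigma -> Q
    ("q0", "0"): "q1", ("q0", "1"): "trap",
    ("q1", "0"): "q2", ("q1", "1"): "trap",
    ("q2", "0"): "q2", ("q2", "1"): "q2",
    ("trap", "0"): "trap", ("trap", "1"): "trap",
}
START = "q0"
ACCEPT = {"q2"}                  # A, the set of accepting states

def accepts(w):
    """Process w from left to right, one symbol at a time, from the start state."""
    state = START
    for symbol in w:
        state = DELTA[(state, symbol)]
    return state in ACCEPT
```

The `DELTA` dictionary is exactly the "list of transitions" representation of δ; writing it as a nested table indexed by state and then symbol would give the table representation.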
In the case of the example FA, there will be exactly one path through the FA from the start state that corresponds to a given string. The string either ends up in an accepting state (i.e., it's in the language defined by the FA) or it ends up in a non-accepting state (i.e., it's not in the language defined by the FA) after the string has been completely read. When presenting an FA transition diagram (or list or table) it is a common convention, for purposes of comprehensibility, to omit explicit inclusion of "trap states", and any transition to a trap state is omitted as well. If in processing a string there is no transition shown for a (state, input) pair, we might call the transition undefined, but literally it signifies a transition to a non-accepting trap state, so we know the string being processed is not in the language. A "shorthand" version of the FA in Figure FAexample1 is shown in Figure FAexample2. It should be clear that from the standpoint of acceptance and non-acceptance, an FA never needs more than one trap state, if any at all. Deterministic and NonDeterministic FAs. If there is exactly one transition defined for every (state, input) pair in an FA, then the FA is "deterministic". The FA in FAexample1 is a deterministic FA (DFA), as is the FA in Figure FAexample2, remembering it is but shorthand, with implied (but real) transitions to a trap state. The FA of Figure FAExample3 is also deterministic. If there are one or "more" transitions that can be taken for the same (state, input) pair then the FA is a "nondeterministic" finite automaton (NFA). Notice that by definition every DFA is an NFA, but not vice versa. Figure NFAExample1 illustrates an NFA which is not also a DFA. In this example there are two transitions defined for (q0, 0). In the case of an NFA, there may be more than one path from the start state to accepting states after processing a string. You can think of each path being enumerated using a breadth first enumeration strategy. 
An NFA accepts a string if at least one path ends in an accepting state after the string is processed. This is illustrated in Figure NFASearch1 for the input string 00101. Equivalence of NFAs and DFAs. The languages that are recognized by NFAs and DFAs are the same. That is, for any NFA there is a corresponding DFA that recognizes the same language as the NFA, and as we have already noted, every DFA is by definition an NFA. In the former case, we can translate an arbitrary NFA to a DFA, where the DFA is constructed by considering combinations of states in the NFA to be single states in the DFA. Intuitively, each state in the DFA will correspond to a possible frontier in the breadth-first enumeration of strings/paths recognized/expanded by the NFA. Figure NFAfrontiers shows some of the frontiers, in ellipses, in the search for a particular string, but that process is generalizable to enumeration of all strings, not just 00101. In the worst case, if there are |K| states in an NFA, there will be O(2^|K|) states in a corresponding DFA. While the NFA description of a language is often simpler and more elegant than the DFA equivalent, the NFA will require enumeration in practice when actually processing strings, and thus the NFA has higher runtime costs than a corresponding DFA. Liberal Expansion then Pruning. One approach to the translation from NFA to DFA is "liberal expansion then pruning." We won't ultimately use or endorse this approach, so consider it optional, but it might be interesting to some as a step in algorithm development. (1) Given an NFA, take the power set of the NFA states (i.e., the set of all possible combinations of NFA states), where each member of the power set will correspond to a state of the new, equivalent DFA. (2) The start state of the DFA will correspond to the start state of the NFA. Consider the top NFA in Figure NFAExample1. 
The start state of the DFA will be {"p"}, where "p" is the start state of the NFA and {"p"} is a member of the power set of NFA states. (3) To determine whether there is a transition between any two DFA states, let's look at the NFA states that are used in the DFA state composition, and create a transition between the DFA states, call them X and Y, for which there is a corresponding transition between an NFA state in X and an NFA state in Y. Two DFA states would correspond to X = {q, r} and Y = {r, t}, since each is a combination of NFA states. To determine if there is a transition on a '1' from X to Y, we note that in the NFA there is a transition from q (one member of X) to r (one member of Y). This observation isn't sufficient for adding a transition from X (i.e., {q, r}) to Y (i.e., {r, t}) on '1' in the DFA, however, since that same reasoning would also result in a transition from {q, r} to {r} in the DFA, and from {q, r} to {r, t, s} too, and to every Y of which r was a member. This would just be a bad way of creating another NFA, rather than translating an NFA to a DFA. Rather, let's require that a transition is made between X and Y on a symbol "x" only if "every" member of X participates in an NFA transition to a member of Y on "x", and if "every" member of Y is the result of a transition on "x" from a member of X in the NFA. So, returning to our example, we would create a transition from {q, r} to {r, t} on '1' in the DFA because δ(q, 1) = r and δ(r, 1) = t in the NFA, but {q, r} would not transition to any other state on '1' in the DFA. (4) After adding transitions using this strategy, we prune any states and subgraphs that are not reachable from the start state. For example, {r, t} is reachable from {q, r}, but is {q, r} reachable from {p}? No, since p is part of every reachable DFA state because of the initial loop on 0,1. 
This is seen in the top-most path in Figure NFAfrontiers for '00101', but confirm your understanding that p is part of any frontier of a breadth-first enumeration of all reachable states. Having found that {q, r} is not reachable from {p}, we prune {q, r}, then eliminate any states for which there are no incoming links, and repeat the process until all unreachable states are eliminated. Liberal expansion and pruning, as we have sketched it, works for NFA to DFA translation, but it initially creates many more DFA states than necessary in most cases (i.e., O(2^|K|)). Nonetheless, variations on liberal expansion and pruning are useful in many circumstances, which we address further when discussing AI. Another strategy for NFA to DFA translation is to grow the number of DFA states more judiciously than naive creation and pruning of the power set of NFA states. Systematic and Judicious Growth. The preferred process for NFA to DFA translation essentially does as we started with -- a breadth-first expansion of possible paths taken through the NFA. We can use a table representation of this expansion, where each cell in the table is a DFA state represented as a possible NFA frontier. Again, the worst case table size will be exponential in the number of NFA states, but in most cases it is far smaller. Using our running example (but with the 'shorthand' FA), we see in Figure NFAtoDFAExample1a that {p} is the DFA start state. p and q are on the frontier of the breadth-first enumeration after processing a 0, so {p, q} is added as a DFA state. Only p is on the frontier after processing the first 1, and so {p} transitions to itself in the DFA. Continuing, {p, q} is a DFA state that must be investigated. From {p, q} on a 0 we have to identify states that can be next from either p or q (inclusive or). From p on a 0 we can go to p or q in the NFA. From q on a 0 we can go to r. So {p, q, r} becomes a new state in the growing DFA. 
Note that {p,q,r} corresponds to a frontier in Figure NFAfrontiers. Still on Figure NFAtoDFAExample1a, from {p,q} on a 1 we can stay in p, or from q we can go to r in the NFA, so {p,r} is a new DFA state. We continue to process those DFA states that were created in the previous step. In Figure NFAtoDFAExample1b we take {p, r} next and then {p,q,r}, though either order would yield the same results. As we know, there is an implicit trap state that can be reached from r too, but we can keep the trap state implicit for simplicity's sake (and the resulting DFA may be a 'shorthand' too). Continuing further we arrive at the table of Figure NFAtoDFAExample1c, with all DFA states generated from all NFA frontiers that are reachable from {p}. The final DFA, expressed as a transition diagram, is shown in Figure NFAtoDFAExample1d. Note that the accepting states in the final DFA correspond to all NFA frontiers that include an accepting state in the NFA, since each of these includes at least one path that accepts a string. At this point we could rename the DFA states to simple atoms if we wished, but we can leave the names as is, which is certainly helpful for illustration here; remember that the names, while appearing to be 'composite', are actually atomic in the final DFA. Connection to Machine Learning. If we compare the DFA of Figure NFAtoDFAExample1d with the initial NFA of Figure NFAtoDFAExample1a we are reminded of an observation that we began with: that the "NFA description of a language is often simpler and more elegant than the DFA equivalent" but the "NFA will require enumeration in practice when actually processing strings, and thus the NFA has higher runtime costs than a corresponding DFA." So a development strategy that suggests itself is that human software developers start by specifying a nondeterministic software solution to a problem, and an AI translates this nondeterministic solution to a deterministic solution. 
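The systematic-growth procedure traced above can be sketched as a worklist algorithm: start from {p}, and create a DFA state for each new frontier as it is first reached. The NFA transition table is my illustrative reconstruction of the running example (states p, q, r, s with s accepting), since the figures are not reproduced here.

```python
# Subset construction, growing DFA states (frozensets of NFA states)
# only as they become reachable from the start frontier.

def nfa_to_dfa(delta, start, accepting, alphabet):
    dfa_start = frozenset([start])
    dfa_delta, seen, worklist = {}, {dfa_start}, [dfa_start]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            # Next frontier: all NFA states reachable from S on a.
            T = frozenset(n for s in S for n in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:          # a brand-new DFA state to process
                seen.add(T)
                worklist.append(T)
    # A DFA state accepts iff its frontier contains an NFA accepting state.
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, dfa_start, dfa_accepting

nfa_delta = {("p", "0"): {"p", "q"}, ("p", "1"): {"p"},
             ("q", "0"): {"r"},      ("q", "1"): {"r"},
             ("r", "0"): {"s"},
             ("s", "0"): {"s"},      ("s", "1"): {"s"}}

d, s0, acc = nfa_to_dfa(nfa_delta, "p", {"s"}, "01")
print(sorted(d[(frozenset({"p", "q"}), "0")]))  # ['p', 'q', 'r']
```

The printed transition mirrors the walkthrough: from DFA state {p,q} on a 0 the NFA can reach p, q, or r, so {p,q,r} is the successor. The empty frozenset plays the role of the explicit trap state.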
In real-world problems, however, it is probably not possible to translate to a completely deterministic solution because of uncertainties in an environment, but in real-life settings it is often possible to eliminate or otherwise mitigate the costs of nondeterministic choices with the help of probabilities of transitions that are seen in practice. We will delve further into this in later chapters. Retrospection. Systematic and judicious growth is another general strategy for problem solving. We generate only those states that we need to, again by using an enumeration strategy. "When we need to" is something that is up for interpretation, however, and yet another strategy, which is sometimes called a lazy enumeration strategy, will only enumerate the space necessary to identify paths for a given string, one at a time (e.g., much like Figure NFAfrontiers). Such lazy strategies are used in AI systems and algorithms more generally, and can be particularly important when strings (and derivations and recognition paths) vary in the probability with which they occur in some environment. Again, this will be covered more when we discuss AI and machine learning, as well as probabilistic grammars and probabilistic automata, in more depth. Finally, while we used Figure NFAExample1 to illustrate the equivalence of NFAs and DFAs, we didn't actually say what language the NFA and DFA of Figure NFAtoDFAExample1d represented. You could reflect on this before reading on, but the answer could help comprehension of the example. Spoiler. The language recognized is the set of strings of 0s and 1s in which there are two 0s separated by one other symbol. In state p of the NFA there are two possible actions when a 0 is encountered. One action is to guess that the 0 is the first of the two paired 0s by transitioning from p to q, and the other action is to guess that the 0 is not the first of two suitably paired 0s and to stay in state p. Equivalence of Regular Grammars and FAs. 
We defined the regular languages as those that could be generated by regular grammars. But the regular languages are exactly those that are recognized by FAs as well. We show this by showing how any regular grammar can be converted to an NFA (which can then be translated into a DFA), and by showing how a DFA (which is also an NFA) can be translated to a regular grammar. Both processes are straightforward. To translate a regular grammar to an NFA, ensure first that ε only appears, if at all, on the right-hand side of the start symbol, S, and that if ε does so appear (because ε is in the language generated by the grammar), then S does not appear on the right-hand side of any production. Let G = (VG, ΣG, PG, SG) be a regular (type 3) grammar. Then there exists a finite automaton M = ("Q"M, ΣM, δM, SM, "A"M) with L(M) = L(G) (i.e., the language recognized by M and the language generated by G are the same), and thus ΣG = ΣM. (1) Define the states of the NFA, QM, to correspond to the variables of the grammar, VG, plus an additional state H, so the states of the NFA are QM = VG ∪ {H}. (2) The initial state of the NFA corresponds to the start symbol, SM = SG, of the regular grammar. (3) If there is a production SG → ε, then the accepting states of the NFA are "A"M = {SM, H}, otherwise "A"M = {H}. (4) δM(X, d) contains the accepting state H if X → d is in PG. Recognize that there is always a single variable in each sentential form of a regular grammar derivation, except the last one, which removes the remaining variable. (5) δM(X, d) contains Y for each Y that appears in a production X → dY. In short, the NFA will follow very closely from the grammar, with states corresponding to non-terminal symbols and transitions in the NFA corresponding to productions in the grammar. Figure FA-RegularExample1 shows a regular grammar and an NFA representing the same language. 
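The five-step construction can be sketched as below. The encoding of productions as (lhs, rhs) string pairs, the reserved state name "H", and the example grammar are illustrative assumptions of mine (variables are assumed to be single characters, and rhs is either ε, a terminal d, or a terminal-variable pair dY, as the grammar form requires).

```python
# Regular grammar to NFA, following steps (1)-(5):
# states = variables plus H; start = start symbol; H always accepts.

def grammar_to_nfa(productions, start_symbol):
    delta = {}
    accepting = {"H"}                      # step (3), base case
    for lhs, rhs in productions:
        if rhs == "":                      # S -> epsilon: S also accepts
            accepting.add(start_symbol)
        elif len(rhs) == 1:                # X -> d: transition to H, step (4)
            delta.setdefault((lhs, rhs), set()).add("H")
        else:                              # X -> dY: transition to Y, step (5)
            delta.setdefault((lhs, rhs[0]), set()).add(rhs[1])
    return delta, start_symbol, accepting

# Illustrative grammar: S -> aS | b  (strings of a's ending in one b).
delta, start, accepting = grammar_to_nfa([("S", "aS"), ("S", "b")], "S")
print(delta[("S", "a")], delta[("S", "b")])  # {'S'} {'H'}
```

Note how closely the NFA follows the grammar: each production contributes exactly one transition (or, for S → ε, one accepting state).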
Nondeterminism occurs at state A on an 'a' and at state B on a 'b'. Now we wish to show that an FA can be translated to a grammar that represents the same language as the FA. In this direction we assume the FA is a DFA, M, knowing that we can always translate an NFA to a DFA, if needed. Again, the correspondence between FA and grammar is very close. Define a regular grammar G = (VG, ΣG, PG, SG), where PG includes X → dY if δM(X, d) = Y, and X → d if δM(X, d) = H and H is in AM. Again, see Figure FA-RegularExample1 to confirm your understanding, remembering that there is something you need to do before applying the translation procedure from FA to regular grammar. NFAs with epsilon Transitions. An important source of nondeterminism in FAs is when multiple transitions are defined for a (state, input) pair; we have already discussed that source of nondeterminism. A second source of nondeterminism is epsilon (ε) transitions. An ε transition from one state to another can be taken without reading an input symbol. Consider the NFA in Figure NFAExample2. In the top, shorthand NFA, from q0 we can take an ε transition to q1 before reading an input symbol. This choice of taking ε transitions can be applied repeatedly. If we are just starting to read a string from left to right using NFAExample2, we can either (1) read the first symbol in q0 and stay in q0 if it's a '0', or go to a trap state otherwise; or (2) take an ε transition to q1 and read the first symbol in q1 and stay in q1 if it's a '1', or go to a trap state otherwise; or (3) take an ε transition to q1 and then another ε transition to q2 and read the first symbol in q2 and stay in q2 if it's a '2', or go to a trap state otherwise. In general, ε transitions can be a source of considerable nondeterminism. Any of these three options will be the first steps of different recognition trajectories. 
In each of these three options, we chose to take the ε transitions BEFORE reading an input symbol, but it's also allowed to take ε transitions AFTER reading a symbol. So, for example, we could expand option (2): from q0 we could take an ε transition to q1, read a '1', and take an ε transition to q2, all before reading the second input symbol. Epsilon-closure. An important construct is the "epsilon-closure" or ε-closure of a state, say p. ε-closure(p) is the set of all states that are reachable from p using only ε transitions. ε-closure(p) includes p itself. In Figure NFAExample2, ε-closure(q0) = {q0, q1, q2}, ε-closure(q1) = {q1, q2}, and ε-closure(q2) = {q2}. Updating frontiers. In general, when processing a string from left to right, the process traces out multiple trajectories, and if at least one trajectory or path ends in an accepting state then the string is accepted. As each symbol in the string is processed, the trajectories so far are expanded, with as many ε transitions as are allowed extending paths, as well as the current input symbol. Figures NFAProcess2a through NFAProcess2c illustrate the process for the first two symbols of '0112'. The captions explain the conventions used, notably on what constitutes a frontier at each step. Processing the last two symbols of '0112' is left as an exercise. Translation to DFAs. The process of translating NFAs with ε transitions (ε-NFAs) to DFAs is much like the previously presented procedure of translating NFAs without ε transitions to DFAs, with one important additional step. In the previous translation procedure, the start state of the NFA is used as the start state of the DFA being constructed. But when epsilon transitions are present, the start state of the DFA will correspond to the epsilon closure of the start state of the NFA. 
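The ε-closure construct can be sketched as a small graph search. The encoding (a dictionary mapping each state to the states one ε transition away) is an illustrative choice; the example transitions mirror the chain q0 --ε--> q1 --ε--> q2 of Figure NFAExample2.

```python
# epsilon-closure(p): all states reachable from p using only epsilon
# transitions, including p itself.

def epsilon_closure(eps, state):
    closure, stack = {state}, [state]
    while stack:
        for nxt in eps.get(stack.pop(), set()):
            if nxt not in closure:          # follow each eps edge once
                closure.add(nxt)
                stack.append(nxt)
    return closure

eps = {"q0": {"q1"}, "q1": {"q2"}}          # q0 -eps-> q1 -eps-> q2
print(sorted(epsilon_closure(eps, "q0")))   # ['q0', 'q1', 'q2']
```

The same routine, applied to a whole frontier after each input symbol, implements the frontier updates just described.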
In the example of Figure NFAExample2, ε-closure(q0) = {q0, q1, q2}. So (q0, q1, q2) is the start state of the DFA, and because q2 is an accepting state in the NFA, (q0, q1, q2) will be an accepting state in the DFA too. Figure NFAtoDFAExample2 shows the DFA that is equivalent to the NFA. As a thought experiment, consider what would have resulted without taking this first step. q0 would have been the DFA start state, but it would not have been an accepting state, so the DFA would not have accepted the empty string, though NFAExample2 does accept the empty string. Thus, the two representations would not have been equivalent in terms of the language that they represented. In general, always do a sanity check when doing translations between representations, testing on selected example strings, but of course placing your highest reliance on understanding and using equivalence-preserving operations for translation. Equivalent Representations of Regular Languages. To recap, there are two possible sources of nondeterminism or choice in an NFA: ε transitions and multiple possible transitions on the same (state, input) pair. In general, both kinds of nondeterminism can appear in the same NFA, as in Figure NFAExample3, and both can be removed by the procedure we have covered for translation to a DFA. Given the discussion so far, there are the class of languages defined by NFAs with no ε transitions and the class of languages defined by NFAs in which ε transitions "are allowed". By definition, the latter (ε transitions allowed) is a superset of the former (no ε transitions). But is it a proper superset? Does the addition of ε transitions add any computational power at all beyond the elegance of expression? No, because all NFAs, ε transitions or not, are translatable to DFAs. We have described a number of ways of representing regular languages, which are summarized in Figure RegularRepresentations1. 
Equivalence between two representations is demonstrated if there is a path from one representation to another, and vice versa. Thus, the class of languages of regular grammars is equivalent to the class of languages of DFAs since there is a path from one to the other in each direction. The DFAs are trivially NFAs, and the NFAs with no ε transitions are trivially NFAs with ε transitions allowed. FYI, we could eliminate the central two-way arc between NFAs (no ε transitions) and DFAs (i.e., we could have chosen not to demonstrate those translations), and the ablated graph that resulted would still be sufficient to indicate the equivalence of all four representations. Regular Expressions. Regular expressions (REs) are yet another way of representing regular languages. An RE is a "declarative" representation, which expresses a pattern that strings in a language adhere to, whereas an FA specifies a process for recognizing strings in a language and a grammar specifies a process for generating strings in a language. An RE is an expression over an alphabet Σ, using binary operators for "concatenation" (×) and for "choice" (+), and the unary operator of "repetition", often known as "Kleene closure" or simply "closure". All symbols in Σ are REs, which are atomic. If Σ = {a, b, c} then 'a' is an RE representing the finite language {a}, 'b' represents {b}, and 'c' represents {c}. If r is an RE and s is an RE then r + s is an RE representing the language represented by r ORed (union) with the language represented by s. Strings in the language specified by r + s would be strings that matched r or strings that matched s. If r = a and s = b, then L(r+s) = {a, b}. If r is an RE and s is an RE then r × s (or simply rs) is an RE representing the language represented by r concatenated with the language represented by s. 
Concatenating two languages results in another language with strings that are a concatenation of a string in the first language with a string in the second language. If r = a + b and s = c + b then L((a+b) × (c+b)), alternatively L((a+b)(c+b)), equals {ac, ab, bc, bb}. As another example, suppose r = (a+b)(b+c) and s = c × b, then L(rs) = {abcb, accb, bbcb, bccb}. (I use c × b instead of cb to make it clear that I am using concatenation, so that cb is not confused with an atomic identifier.) And another example: r = (a × b × c) + (c × a) and s = (b + a). L(rs) = {abcb, abca, cab, caa}. Note that when using '+' the order of the arguments does not matter. In contrast, when using '×' the order does matter. The repetition operator is unary and signified by *. If r is an RE then r* is an RE that represents the infinite language of strings in which strings in L(r) are repeated 0 or more times. So if r = a, then L(r*) = L(a*) = {ε, a, aa, aaa, aaaa, ...}. Notice that the empty string is a member of L(r*), for the case where r is "repeated" 0 times. Assume r = (a+b)(b+c); then L(r*) = L(((a+b)(b+c))*) = {ε, ab, ac, bb, bc, abab, abac, abbb, abbc, acab, acac, acbb, acbc, bbab, bbac, bbbb, bbbc, bcab, bcac, bcbb, bcbc, ...}. Occasionally, you might see notation that fixes the number of repetitions to some constant. For example, L(r^0) = {ε}, L(r^1) = {ab, ac, bb, bc}, L(r^2) = {abab, abac, abbb, abbc, acab, acac, acbb, acbc, bbab, bbac, bbbb, bbbc, bcab, bcac, bcbb, bcbc}. This notation could be helpful to some students in understanding the repetition operator itself. For example, L(r*) = L(r^0) ∪ L(r^1) ∪ L(r^2) ∪ L(r^3) ∪ ... . If we wish to express repetition 1 or more times we can write rr* or equivalently r(r*), since * has higher precedence than concatenation. 
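The finite-language operations just used, concatenation and fixed repetition L(r^k), can be sketched directly on sets of strings (the set encoding is an illustrative choice):

```python
# Concatenation and fixed repetition of finite languages, represented
# as Python sets of strings.

def concat(L1, L2):
    # Every string of L1 followed by every string of L2.
    return {x + y for x in L1 for y in L2}

def power(L, k):
    result = {""}                  # L^0 = {epsilon}
    for _ in range(k):
        result = concat(result, L)
    return result

r = concat({"a", "b"}, {"b", "c"})       # r = (a+b)(b+c)
print(sorted(power(r, 1)))  # ['ab', 'ac', 'bb', 'bc']
print(sorted(power(r, 0)))  # ['']
```

L(r*) is then the union of power(r, k) for all k ≥ 0; only the star of a language is infinite, which is why Kleene closure cannot be computed this way outright, only approximated up to a bound.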
But it is also common to write repetition 1 or more times with +, so if r = a+b then L(r+) = {a, b, aa, ab, ba, bb, aaa, aab, aba, abb, baa, bab, bba, bbb, aaaa, ...} = L(rr*), with ε excluded. Consider the set of all strings over {0, 1} with at most one pair of consecutive 0s. 0^n, where n is greater than 0, would count as n-1 consecutive pairs. A first stab at a solution might be (01)* 00 (10)*, BUT it doesn't allow a string that starts with 1, it requires exactly one 00, it doesn't allow a string that ends with 1, and it doesn't allow for any repeating 1s. (1+ε) (01)* 00 (10)* (1+ε) is a second attempt, but it still requires exactly one 00, and it doesn't allow for any repeating 1s. How about (1+ε) (01+1)* (00+ε) (10+1)* (1+ε)? What about a single 0? So, how about (1+ε) (01+1)* (00+0+ε) (10+1)* (1+ε)? Consider whether this last example represents the language that's requested. If so, can you simplify the RE so that the revised version still represents the same language? In general, finding a correct (and elegant) RE will require some search. REs represent the same class of languages as those of Figure RegularRepresentations1. Since the four previously studied schemes for regular languages are known to be equivalent, we can show the equivalence of REs to these other representations by showing how any of the previous schemes can be translated to REs, and showing how REs can be translated to any of the other representations. Converting DFAs to Regular Expressions. Converting DFAs to regular expressions (REs) uses a process of "state elimination". At the beginning, single terminal symbols label the arcs of the DFA. Of course, each terminal symbol is a primitive RE. The state elimination algorithm eliminates a state at a time, and as it does so, REs of increasing complexity will label the arcs of the shrinking DFA. 
This calls for a generalization of what we mean by a DFA, but it is a natural generalization, in which arbitrary REs can label arcs instead of simply primitive, single-symbol REs. For example, in the simplest case, suppose that states P, Q, and R are arranged such that there is a transition from P to Q on an input 'a' and there is a transition from Q to R on an input 'b'. Then if we want to eliminate Q next, we will replace the arcs from P to Q and from Q to R with a single transition directly from P to R, and the transition will be labeled by the RE 'ab'. If that is the only involvement of Q in the DFA, then Q is eliminated. But typically a state to be eliminated like Q will be involved with many other states, and for each pair of states Pi and Rk with transitions into and out of Q respectively, the algorithm will create new direct transitions between Pi and Rk with suitable REs labeling the transitions. A Pi and Rk may actually be the same state, so a loop is introduced from Pi (Rk) back to itself when eliminating Q. Figure DFAtoRE1 illustrates the various steps in translation from a DFA to an RE. An interesting anomaly occurs in this example when eliminating state A. Since A originates two paths, there is no path in which A is intermediate. We have shown the result as an RE that has no source, but transitions to E, the start state after eliminating A. The meaning of such a sourceless transition should be straightforward, as external context that is relevant to all strings recognized by the remaining generalized FA. We are reminded of giving external context to a large language model, for example, in the form of a prompt. 
Nonetheless, a practice that remedies this anomaly, particularly if we were to automate the translation procedure from an FA M, is to introduce two "dummy" states, Sstart and Send, to M, yielding M', where Sstart is the start state of M' and has an ε transition to the start state of M, and Send is the single accepting state of M', and all accepting states of M have an ε transition to Send. Figure DFAtoRE1a illustrates the effect of this policy on the previous example. It's true that the addition of ε transitions makes M' an NFA, particularly because of these transitions from all of the accepting states of M to the single accepting state of M', but if the FA is otherwise deterministic then you would be right to call the process of Figure DFAtoRE1a a DFA to RE translation. Figure DFAtoRE2 illustrates another translation. In principle it does not matter what the order of state elimination is, but I find it helpful to start by eliminating states that are less involved with other states. One of the exercises asks you to translate Figure DFAtoRE2 using a different order of state elimination. Converting Regular Expressions to NFAs with ε transitions. Regular expressions are most easily translated into NFAs with ε transitions, which in turn can be translated to DFAs if desired, but that latter step is not necessary for demonstrating the equivalence of REs and regular languages. While the basic steps of the algorithm are intuitive (i.e., a concatenation in the RE translates to a strict sequence of states and transitions; a '+' leads to branching into two or more transitions from a state; and repetition (*) in the RE leads to a loop in the NFA), the very liberal introduction of ε transitions into a growing NFA makes the algorithm more easily implementable, but makes the construction very cumbersome for illustration, and so you will see me simplifying the growing NFA at intermediate steps. 
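The compositional, liberal-ε construction can be sketched as below. The nested-tuple encoding of REs, the state numbering, and the fragment shapes are my own illustrative choices (in the style of Thompson's construction, with None standing for ε); a small closure-based matcher is included so fragments can be checked.

```python
import itertools

_counter = itertools.count()

def build(re, delta):
    """Add an NFA fragment for `re` to delta; return (start, end) states.
    re is ('sym', a), ('cat', r, s), ('choice', r, s, ...), or ('star', r)."""
    op = re[0]
    if op == "sym":                          # primitive single-symbol RE
        s, e = next(_counter), next(_counter)
        delta.setdefault((s, re[1]), set()).add(e)
    elif op == "cat":                        # sequence: end --eps--> start
        s, e1 = build(re[1], delta)
        s2, e = build(re[2], delta)
        delta.setdefault((e1, None), set()).add(s2)
    elif op == "choice":                     # '+': eps-branch to alternatives
        s, e = next(_counter), next(_counter)
        for sub in re[1:]:
            si, ei = build(sub, delta)
            delta.setdefault((s, None), set()).add(si)
            delta.setdefault((ei, None), set()).add(e)
    else:                                    # 'star': loop plus eps bypass
        s, e = next(_counter), next(_counter)
        si, ei = build(re[1], delta)
        for frm, to in ((s, si), (ei, si), (ei, e), (s, e)):
            delta.setdefault((frm, None), set()).add(to)
    return s, e

def closure(delta, states):
    """Expand a set of states with all reachable eps (None) transitions."""
    result, stack = set(states), list(states)
    while stack:
        for nxt in delta.get((stack.pop(), None), set()):
            if nxt not in result:
                result.add(nxt)
                stack.append(nxt)
    return result

def matches(re, w):
    delta = {}
    s, e = build(re, delta)
    frontier = closure(delta, {s})
    for symbol in w:
        step = {n for q in frontier for n in delta.get((q, symbol), set())}
        frontier = closure(delta, step)
    return e in frontier

# (1 + 01)* -- the subexpression used in the worked example below.
re = ("star", ("choice", ("sym", "1"), ("cat", ("sym", "0"), ("sym", "1"))))
print(matches(re, "1011"), matches(re, "100"))  # True False
```

The fragments are deliberately wasteful with ε transitions, mirroring the text's point: liberal gluing keeps the algorithm simple, and any cleanup can be deferred to a final NFA-to-DFA translation.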
Let's start with an example of the previous section by translating (0 + 1(1+01)* 00)* to an NFA with ε transitions, simplifying along the way. I will illustrate by working from the inside out, translating simpler subexpressions first. Figure REtoNFA1a illustrates the translation of (1 + 01)*, which is a subexpression of (0 + 1(1 + 01)* 00)*. The initial NFA of the figure shows liberal introduction of ε transitions before most all subexpressions. In an automated algorithm an ε transition would be introduced between the 0 and 1 transitions in the lower path between states Q and R as well, but I used shorthand and jumped ahead on the simplification step for reasons of space. Following simplification we are left with essentially a DFA (excepting the ε transitions from the artificial start state and to the artificial end state). Figure REtoNFA1b continues to extend the NFA to include additional subcomponents of (0 + 1(1 + 01)*00)*, now 1(1 + 01)*00. In this image, b, an additional '1' has been placed in front of the earlier NFA, with added ε transitions. It gets ridiculous, particularly the consecutive ε transitions, but again, the rationale is that the liberal addition eases the algorithm implementation, since taking 'shortcuts' can lead to mistakes, and since the very last step in such an algorithm would be to translate to a DFA, getting rid of all the ε transitions, if that were important. Nonetheless, I ran out of room on the page and used shorthand when adding '00' at the end. Figure REtoNFA1c shows the final step in the RE to NFA translation at the top, and this is followed by an NFA to DFA translation. Remember that we started with an RE that was the result of a translation from an NFA, as shown in Figure DFAtoRE2. We might expect the translation of that RE to a DFA to end up as the same DFA that we started with in Figure DFAtoRE2. 
Obviously, as Figure DFAsComparison makes clear, the two DFAs are not the same. But more importantly, do these two DFAs represent the same language? The answer, thankfully, is yes. As we will see in the next section on "DFA minimization", the DFA on the right can be minimized so that it is the same as the DFA on the left, except for the naming of states. In this example, the minimization of the right results in states L and M,L being combined into a single accepting state. Having demonstrated that REs are translatable to and from other regular language representations, we give the updated representational equivalencies in Figure RegularRepresentations2. DFA Minimization. DFAs can be minimized into a unique minimal-state DFA, differing only in the naming of states between two "versions" of the minimal DFA. The minimization algorithm operates by identifying which pairs of states are 'distinguishable' or not. After looking at all pairs, if two states are not distinguishable, then they are merged in the final, minimal-state DFA. It seems to me that a remarkable property of the minimization algorithm is that it looks only at the transitions out of two states to determine whether those states are 'distinguishable' or not. That is, transitions INTO the states being compared are not an overt part of the algorithm. Moreover, it's worth noting that the minimum-state DFAs are identical, except in the naming of states -- the transitions between states are identical in structure too, though we don't get into the proof of this. It is conventional to create a table like that on the right of Figure MinimizingDFA1a that can be used to compare pairs of states. The DFA on the left of the figure is the same as we left off with in NFA to DFA translation in Figures REtoNFA1c and DFAsComparison, but with states renamed. The first step in the minimization process is to mark every pair of states in which one is an accepting state and one is not an accepting state as "distinguishable". 
This is the state of the table in Figure MinimizingDFA1a. Only one pair of states remains, L and P. Are L and P distinguishable or are they equivalent? They both transition to the same states on '0' and on '1', respectively, and we conclude that they are indistinguishable, aka equivalent. If we combine L and P into a single accepting state, LP, and update transitions into and out of LP (i.e., so state N transitions to LP on a '0', LP transitions to LP on a '0', LP transitions to M on a '1') then we have the DFA at the left of Figure DFAsComparison. The general algorithm for filling in a table is below. The basic idea is to find states that cannot be equivalent, and then states that are equivalent, and merge equivalent states.

Algorithm for marking pairs of inequivalent states ("Table-filling" algorithm):

FOR p in "A" and q in "Q" - "A" DO
    mark(p, q)   // accepting states cannot be equivalent to non-accepting states
FOR each pair of unmarked, distinct states (p, q) in "A" × "A" or ("Q" - "A") × ("Q" - "A") DO
    IF for some input symbol 'a', (δ(p,a), δ(q,a)) is marked
    THEN mark(p, q)
    ELSE /* no pair (δ(p,a), δ(q,a)) is marked */
        FOR all input symbols 'a' DO
            put (p, q) on the follow-up list for (δ(p,a), δ(q,a)) UNLESS δ(p,a) = δ(q,a)
Repeat the second FOR loop until no changes in what is marked (this amounts to processing the follow-up lists).

The DFA minimization algorithm can be used to determine whether two DFAs accept the same language or not -- minimize each of the DFAs being compared and then examine the structure of the resulting minimum-state DFAs; if the minimal-state DFAs are identical then the two original DFAs accept the same language. 
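The table-filling algorithm can be sketched as below. For simplicity I iterate the marking pass to a fixpoint instead of maintaining explicit follow-up lists (an implementation simplification, not the book's exact bookkeeping); the example DFA is illustrative, not the one in the figures.

```python
# Table-filling: mark pairs of states that are distinguishable.
# Unmarked distinct pairs at the end are equivalent and can be merged.
# delta must be a total DFA transition function (trap state included).

def inequivalent_pairs(states, alphabet, delta, accepting):
    # Base step: accepting vs. non-accepting pairs are distinguishable.
    marked = {frozenset((p, q)) for p in states for q in states
              if p != q and (p in accepting) != (q in accepting)}
    changed = True
    while changed:                          # repeat until no new marks
        changed = False
        for p in states:
            for q in states:
                pair = frozenset((p, q))
                if p == q or pair in marked:
                    continue
                # Mark (p, q) if some symbol leads to a marked pair.
                if any(frozenset((delta[(p, a)], delta[(q, a)])) in marked
                       for a in alphabet):
                    marked.add(pair)
                    changed = True
    return marked

# Illustrative DFA in which B and C turn out to be equivalent.
states = {"A", "B", "C"}
delta = {("A", "0"): "B", ("A", "1"): "C",
         ("B", "0"): "B", ("B", "1"): "C",
         ("C", "0"): "B", ("C", "1"): "C"}
marked = inequivalent_pairs(states, "01", delta, {"B", "C"})
print(frozenset({"B", "C"}) in marked)  # False: B and C can be merged
```

Note that when δ(p,a) = δ(q,a), the frozenset collapses to a single state and is never marked, matching the UNLESS clause of the pseudocode.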
The DFA minimization algorithm can be used more broadly to judge whether any two descriptions using any of the representations that we studied (RGs, REs, DFAs, NFAs with and without ε transitions) denote the same language – translate each to a DFA, minimize, and compare the minimal DFAs. The algorithm for answering whether two regular language representations are the same by translating each representation to a minimal DFA and comparing the two DFAs to see if they are isomorphic directed graphs is an example of a "decision algorithm" for regular languages. Pumping Lemma for Regular Languages Revisited. Though we have described the Pumping Lemma for regular languages previously in terms of grammars, it's helpful to also consider the typical way of introducing the PL using DFAs. This is equivalent to our earlier presentation and serves to further highlight the equivalency of FAs and regular (or right linear) grammars. The Pumping Lemma says that there must be at least one loop in a DFA accepting an infinite regular language, and this loop can be repeated indefinitely or removed from an accepted string that is long enough to have been forced into taking a loop. We can show a language is not regular by assuming that it is regular, carefully selecting a string that is long enough, and pumping a suitable substring 0 or more times (recall that 0 times is the same as deleting the substring) until the resulting string is known not to be in the language, thus violating the Pumping Lemma property of regular languages. Pushdown Automata: Recognizers of Context Free Languages. PDAs are basically FAs with an additional form of memory – a stack. A pushdown automaton M = ("Q", Σ, Γ, formula_1, q0, X0, "A"), where "Q", Σ, formula_1, q0, and "A" have roles similar to those in FAs, and Γ and X0 are new constructs associated with the PDA's stack memory. formula_1 is a mapping from Q × (Σ ∪ ε) × Γ to finite subsets of Q × Γ*. 
That is, formula_1 is a finite set of transitions of the form (qi, "a", "X") formula_10 {...(qk, α)...} where the PDA transitions from state qi to qk when the current input symbol is "a" (or ε) and the top of stack symbol is X. Upon transition the top of stack symbol X is replaced with a string of stack symbols, α. That is, if in state qi, 'a' is the input, and "X" is the top of stack symbol, then the transition is made to state qk and "X" is replaced by popping "X" and pushing a sequence of stack symbols denoted by α, where 'X' might be among the stack symbols pushed back on. Note that if α = ‘WYZ’, then after popping "X", "Z" is pushed first, then "Y", then "W", and "W" (i.e., the leftmost stack symbol) is the new top of stack. The definition of a PDA allows for nondeterminism in that each formula_1(qi, "a", "X") is a set of one or more possible moves, and ε transitions are allowed. Similar to representations of NFA simulations as breadth-first enumerations of recognition paths or trajectories (e.g., Figure NFASearch1), PDA simulations can be represented as a breadth-first enumeration too, but because of the increased complexity of PDAs relative to FAs, each node in a PDA enumeration is called an "instantaneous description" (ID) or "configuration". Each PDA ID is made up of a state, qk, and the stack contents, and for convenience we will typically include the remaining input string as well. Figure PDARecognitionPath shows one path only in what would be a larger enumeration of IDs in search of recognition paths for a string 'abc', using a PDA with transitions that include those at the bottom (e.g., formula_1(q0, "a", X0) formula_10 (q1, X1)). Figures PDAExample1a and PDAExample1b give an example of a single PDA definition, along with in-line comments and illustrations of the function of transitions. In-line comments are generally the minimum that would be desirable in presenting a PDA definition. Acceptance by Empty Stack and by Accepting State. 
In addition to accepting by accepting state, PDAs can also be designed to accept by empty stack. In particular, if the input is exhausted and the PDA's stack is empty then the PDA accepts. When accepting by empty stack the accepting states, "A", are irrelevant, and "A" = { } is the indication that a PDA accepts by empty stack. We may sometimes use formula_78 notation to denote the language accepted by a PDA M = (Q, Σ, Γ, formula_1, q0, X0, { }) on the empty stack: L(M) = {w formula_80 (q0, w, X0) formula_78 (qi, ε, ε)} indicates that there is a sequence of 0 or more transitions from (q0, w, X0) to (qi, ε, ε), where the first ε indicates that the input is exhausted and the second ε indicates that the stack is empty. The formula_78 notation can also be used to denote a language accepted by accepting state: for PDA M = ("Q", Σ, Γ, formula_1, q0, X0, "A"formula_84{ }), L(M) = {"w" formula_80 (q0, "w", X0) formula_78 ("p", ε, γ)}, where "p" is an accepting state and γ indicates 0 or more stack symbols. The PDA of Figures PDAExample1a and PDAExample1b accepts by accepting state (i.e., state q3). One of the exercises asks you to revise this PDA to accept by empty stack. In general, any PDA that accepts by empty stack can be translated to a PDA that accepts by accepting state, and vice versa. Equivalence of PDAs and CFGs. PDAs and CFGs are equivalent finite representations of the CFLs. Since CFGs were used to delimit the CFLs, we want to show that PDAs and CFGs are equivalent. We can do this by showing that any PDA can be translated to a CFG, and any CFG can be translated to a PDA. We only briefly sketch the translation strategies here. Thm: If L is a CFL with alphabet Σ, then there exists a PDA M, such that L = L(M). Assume that G = (V, Σ, P, S) is a context free grammar in Greibach Normal Form (i.e., all productions of form A formula_10 "a"β, where 'a' is a terminal symbol, 'A' is a variable, and β is a possibly empty string of variables). 
We assume that ε is not in L(G). The translation strategy is to construct a PDA M = ({q1}, Σ, V, formula_1, q1, S, {}), accepting by empty stack, where the start stack symbol of M is the start symbol of G (i.e., S), and where formula_1(q1, "a", A) contains (q1, γ) whenever A formula_10 "a"γ is a production of G. Can you confirm that (q1, w, S) formula_90 (q1, ε, ε) for any string w iff S formula_91 w? Thm: If L is L(M) for a PDA M, then L is a CFL. Construct a CFG G from PDA M, such that the leftmost derivation of any string w using G simulates the PDA on w (i.e., the variables in a sentential form correspond to the stack symbols). Can you flesh this argument out? Deterministic PDAs. As with FAs, deterministic PDAs (DPDAs) are a special case of (nondeterminism-allowed) PDAs, where the righthand side of every transition is a singleton set and ε transitions are not allowed. Unlike the case of FAs, however, the deterministic and nondeterminism-allowed PDAs are not equivalent. Whenever you see 'PDA' with no qualifier, assume the default, nondeterminism-allowed PDA definition. How can we determine whether a CFL cannot be recognized by a DPDA? We will be satisfied with intuition. Consider that you are playing the role of a PDA, and are scanning from left to right a string in the language {wwR | w and its reverse are defined over (0+1)*}. How do you know when you are at the boundary between w and wR and that you should therefore switch from remembering (pushing) symbols of w to checking off (popping) symbols from wR? There is no way other than to guess that you are at the boundary, or not, and then follow the implications of that guess along different recognition paths. The language {wwR | w and its reverse are defined over (0+1)*} cannot be recognized by a DPDA. In contrast, consider the language {wcwR | w and its reverse are defined over (0+1)*, and a distinct symbol, c, not 0 and not 1, marks the boundary between them}. 
There is no need to guess, and the language wcwR can be recognized by a DPDA. As another example, consider the language {wwR | w and its reverse are defined over (0+1)^10}, that is, w is exactly 10 symbols. Again, there is no need for the PDA to guess, and this language can be recognized by a DPDA. In fact, the language is regular and it can be recognized by a DFA, a huge DFA to be sure, but a DFA nonetheless. A language for which there exists a DPDA that recognizes it is a DPDAL or simply a DCFL. Deterministic and Inherently Ambiguous (IA) Languages. A question that may occur to some is on the relationship between inherently ambiguous CFLs, as was defined when discussing grammars, and DCFLs, as defined just now by DPDAs. One might suspect that DCFLs and IACFLs are complements, but there was no mention of this rather obvious question in reference texts, most notably the 1969 and 1979 editions of Hopcroft and Ullman, where I would have expected at least something like "you might wonder about the relationship between DCFLs and IACFLs, but no relationships are proven as yet". After discussing the issue with ChatGPT, with several followups to correct contradictions by the AI, it pointed out that Hopcroft, Motwani, and Ullman (p. 255-256, Section 6.4.4) addressed the issue. All deterministic languages (recognized by DPDAs) have unambiguous grammars, but DCFLs and IACFLs are not perfect complements since counterexamples can be found. For instance, S formula_10 0S0 | 1S1 | ε is an unambiguous grammar, G, but L(G) = {wwR | w and its reverse are defined over (0+1)*} is not a DCFL. Thus, it might be ok to colloquially refer to the relationship as "somewhat complementary", but not formally as complements. 
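The guessing intuition can be made concrete with a small simulation. Below is a sketch (not from the text) of a breadth-first enumeration of instantaneous descriptions for a hypothetical PDA accepting {wwR | w in (0+1)*} by empty stack: in state q0 the machine pushes input symbols, an ε-move nondeterministically guesses the boundary, and in state q1 it pops while matching the reversed half.

```python
from collections import deque

def pda_accepts(w, max_steps=100000):
    """Breadth-first enumeration of instantaneous descriptions (state, remaining
    input, stack) for a hypothetical PDA recognizing {w wR | w in (0+1)*}
    by empty stack."""
    start = ('q0', w, ('Z',))           # 'Z' is a bottom-of-stack marker
    seen = {start}
    frontier = deque([start])
    steps = 0
    while frontier and steps < max_steps:
        state, rest, stack = frontier.popleft()
        steps += 1
        if not rest and not stack:      # input exhausted, stack empty: accept
            return True
        succs = []
        if state == 'q0':
            if rest:                    # keep pushing input symbols
                succs.append(('q0', rest[1:], stack + (rest[0],)))
            succs.append(('q1', rest, stack))   # ε-move: guess the boundary
        else:  # state 'q1': pop while matching the reversed half
            if rest and stack and stack[-1] == rest[0]:
                succs.append(('q1', rest[1:], stack[:-1]))
            if stack and stack[-1] == 'Z':
                succs.append(('q1', rest, stack[:-1]))  # pop the marker
        for s in succs:                 # enqueue unseen IDs breadth-first
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return False

print(pda_accepts('0110'))   # True: w = '01'
print(pda_accepts('0111'))   # False
```

Every ε-move into q1 spawns a separate recognition path, one per possible boundary position; a DPDA, which may follow only one path, has no basis for choosing among them, which is the intuition for why this language is not a DCFL.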
An interesting additional point is that in the realm of CFLs generally, the languages recognized by PDAs accepting by empty stack and the languages recognized by PDAs accepting by accepting state are the same sets, but in the realm of DCFLs only, the DCFLs defined by empty stack are a proper subset of the DCFLs recognized by accepting state. All DCFLs are recognized by a DPDA accepting by accepting state, but only some DCFLs (those with the prefix property, in which no member string is a proper prefix of another member) are recognized by DPDAs accepting by empty stack. Here is a convenient summary of relationships involving deterministic languages. • If there is a deterministic PDA that accepts L then L is a deterministic (CF) language or DCFL; we will also call a DCFL a DPDA language. • All regular languages are also deterministic languages (DCFLs or DPDALs), but not vice versa. That is, regular languages are a proper subset of the DCFLs, which in turn are a proper subset of the CFLs. • All deterministic languages have unambiguous grammars that generate them, but not vice versa. That is, deterministic languages are a proper subset of the CFLs that are not inherently ambiguous. • Languages accepted by PDA accepting/final state and languages accepted by PDA empty stack are the same, and both are exactly the CFLs. • This is not the case with DCFLs – languages accepted by empty stack are a proper subset of the languages accepted by accepting/final state within the class of DCFLs. The subclass relationships present in the CFLs are illustrated in Figure CFLsubclasses. Turing Machines: Recognizers of Unrestricted Languages. Turing machines (TMs) are recognizers of unrestricted languages, which include all the other language classes that we have described (i.e., CSLs, CFLs, RLs). TMs are representationally more expressive than FAs or PDAs. TMs can also enumerate languages and compute functions, and can simulate digital computers and other automata. In fact, TMs are taken to delimit what is computable. 
TMs are specified by M = ("Q", Σ, formula_93, formula_1, q0, B, "A"), as summarized in Figure TMDefinition. The set of states in the finite control ("Q"), a start state, q0, a set of accepting states "A" formula_95 "Q", an input alphabet, Σ, and a transition function, formula_1, have similar roles as in other automata. A tape alphabet, formula_93, contains all symbols that can appear on the infinite tape memory. A special symbol, B formula_98 formula_93, represents a 'blank' on the tape. An input string, w, over Σ, is the initial contents of the tape, with Bs on both sides of w. Thus, Σ formula_95 formula_93. Note that this convention of having the tape be an explicit part of the TM notation is different from the conventions for FAs and PDAs, in which the input string's presence on a READ-ONLY tape is often implicit. The TM tape is a read-AND-WRITE memory, similar to a PDA stack, but different from a PDA stack, which does not initially contain the input string. Finally, the TM can move its read-and-write head one tape cell in either direction (denoted Left or Right) or keep it unchanged (U), rather than moving in simply one direction as in the case of an FA's and PDA's (implicit) read-only tape. Textbooks typically define the transition function, formula_1, to be deterministic by default in TMs. That is, for each (state, tape symbol) pair, the TM makes exactly one move (aka transition), going to another state, rewriting the current tape symbol (perhaps to the same value), and moving the read-write tape head left, right, or keeping it unchanged. Succinctly, a transition is formula_1(qi, X) = (qk, Y, D), where qi, qk formula_98 "Q"; X, Y formula_98 formula_93; D formula_98 {L, R, U}. TMs as recognizers. In keeping with our treatment of other kinds of automata, we first consider some examples of TMs as recognizers of languages. Because a TM can move to the right and left an indeterminate number of times, there is no telling when its input is exhausted. 
Rather, we will assume that the transitions are defined so that as soon as a TM enters an accepting state, it accepts, and if a TM ever encounters a (state, tape symbol) configuration that is not the lefthand side of any transition then the TM immediately rejects. An example of the TM as a recognizer is shown in Figure TMRecognizer1. TM programming can be quite tedious, even more tedious than programming a computer in assembly or machine language, so one typically gives high level pseudocode descriptions of procedures first, then translates to TM code, as illustrated in bold red in the Figure. Figure TMRecognizer1a traces the TM procedure on input '000111', shown at the top of the leftmost column. As with PDAs, we define "instantaneous descriptions" (IDs) or configurations for TMs. An ID for TMs includes (a) the current state, (b) the location of the read/write head on the tape, and (c) the tape symbol under the read/write head, and (d) for convenience of illustration we will often include all the current tape symbols as part of the ID, though this is not typically required. Figure TMRecognizer1a shows the IDs in a recognition path for '000111'. Equivalent Variants of Turing Machines. There are variations on the Turing machine definition that neither add to nor diminish the computational power of TMs. The variations that are considered here are (a) multi-track, single tape machines, (b) multi-tape TMs, (c) one-way infinite tape TMs, and (d) nondeterministic TMs. Multi-track TMs. We can divide up a single tape into multiple tracks, say "n" of them, as illustrated in Figure TMmultitracks (in preparation). Each track contains cells that are coincident with the cells of the other tracks. Each track "j" has its own finite tape alphabet, formula_93j, and different tracks can have the same tape alphabets or different tape alphabets. There is, however, only one input alphabet, formula_109, and one set of states, Q. 
There is only one tape head, and at each step the tape head will point at a single tape location with tuple (X1, X2, ..., Xn) for tracks 1 through "n", where each Xj formula_98 formula_93j. The transitions of a multi-track TM are defined as formula_1(qi, (X1, X2, ..., Xn)) = (qk, (Y1, Y2, ..., Yn), D), where qi, qk formula_98 "Q"; Xj, Yj formula_98 formula_93j; D formula_98 {L, R, U}. In sum, the definition of a multi-track TM with "n" tracks is M = (Q, Σ, (formula_931, formula_932, ..., formula_93n), formula_1, q0, B, "A"), where the input alphabet formula_109 is a subset of at least one formula_93j. A multi-track TM adds no computational power to the basic TM model, which has a single track tape. To see the equivalence in computational power, note that the standard single-tape model is a special case of the multi-track model in which "n" = 1. To see the other direction, that any multi-track TM with n > 1 has an equivalent single track TM that accepts the same language, recognize that we can create a single tape alphabet, formula_93, by concatenating the symbol names of the different track alphabets, that is, the cross-product of the individual track alphabets, formula_931 formula_125 formula_932 formula_125 ... formula_125 formula_93n. Transitions are redefined to use this one alphabet: formula_1(qi, formula_131X1 formula_125 X2 formula_125 ... formula_125 Xnformula_135) = (qk, formula_131Y1 formula_125 Y2 formula_125 ... formula_125 Ynformula_135, D), where formula_131X1 formula_125 X2 formula_125 ... formula_125 Xnformula_135 and formula_131Y1 formula_125 Y2 formula_125 ... formula_125 Ynformula_135 are symbols in formula_93. The advantage of the multitrack formalism is that it can make many procedures easier to conceptualize and write, and that includes acting as an intermediate formalism in showing that still other formalisms are equivalent to the basic single tape, single track TM model. Multi-tape TMs. 
The multi-tape TM formalism, summarized in Figure TMmultitapes (in preparation), is different from the multi-track formalism, in that a multi-tape TM has multiple read/write heads, one for each of "n" tapes, as well as multiple tape alphabets, formula_93j. Each transition is defined as formula_1(qi, (X1, X2, ..., Xn)) = (qk, (Y1, Y2, ..., Yn), (D1, D2, ..., Dn)), where Xj and Yj are members of their respective tape alphabets formula_93j, and each Dj is the direction (L, R, U) of the read/write head for the jth tape. Thus, at any given time in processing, the "n" read/write heads need not be aligned at the same cells on their respective tapes, relative to where they all began, which is assumed to be in alignment at the start of the input string on the first tape at the very start of processing. The multi-tape model adds no computational power to the standard single tape model. To see this, note that the standard single tape model is a special case of the multi-tape model where "n" = 1. Moreover, any multi-tape TM has an equivalent single tape, multi-track TM, which in turn can be converted into an equivalent single-tape, single-track TM. We just show a sketch of the first step here; a sketch of the second step is above. Each tape of the multi-tape TM will correspond to a track in a single tape, multi-track TM. In addition to "n" tracks for the "n" tapes, there will be "n" additional tracks, which give the relative location of the read/write head of the corresponding tape in the original multi-tape TM. So, if the head for tape j had moved three steps left (-3), one step right (+1), and two steps unchanged (+0), then after the six steps the track representing its location would record -2 relative to its start location. TMs with one-way infinite tapes. It's intuitive that the basic TM model can be simulated by associating one track with one side of the standard TM's read/write head, and another track or stack associated with the other side of the read/write head. 
Thus, these seeming restrictions to a semi-infinite tape or two stacks don't reduce the set of languages that can be recognized relative to the standard TM model. Can we reformulate the multi-tape conversion using two tracks per tape? Nondeterministic TMs. A nondeterministic TM includes at least one configuration in which more than one transition is possible. So, we extend the transition function so that the value of formula_1(qi, X) is a set, such as formula_1(qi, X) = { ... (qk, Y, D) ...}. In many cases the set for a configuration might still be a singleton, and if we chose, we could still consider a deterministic TM as a special case of this more general definition. Nondeterministic TMs add no computational power beyond deterministic TMs. Compare this to the definition of PDAs, which were defined initially as inherently nondeterministic, and deterministic PDAs were defined as a special case of the default definition, rather than nondeterministic versions defined as extensions to the deterministic machines, as is the case with TMs and FAs. Why? Presumably, the reason for this switch in convention is that for FAs and TMs the deterministic versions accept the same set of languages as the nondeterministic versions, so why not introduce the deterministic versions first and use them as a jumping off point for the nondeterministic versions? In contrast, the deterministic PDAs are not as powerful as the nondeterministic PDAs – the former recognize a proper subset of the languages recognized by the latter. So make the nondeterministic PDAs the default, thus conveying an unambiguous equivalence to the CFLs. Why are the nondeterministic and deterministic machines of equivalent power for FAs and TMs, but not PDAs? Offhand I don't have a definitive answer, but it seems intuitive that it is related to a ceiling effect in the case of TMs (i.e., the most powerful machines) and a floor effect for FAs (the least powerful machines). 
PDAs are intermediate between the extremes, and it seems to be the case in many settings, far and wide, not just ALC, that much of the most interesting, or at least most variable, behavior happens between the extremes. TMs to compute functions. Let's start with the multiplication function. TMs as subroutines. Subroutines are an important part of TM programming, where ‘calling’ a subroutine amounts to the TM transitioning to a network of states that have been specially set aside as implementing the subroutine, and the subroutine returns by leaving the machine in a configuration that can be picked up by the ‘calling’ routine. The use of the multiplication subroutine as a means of implementing the square function (i.e., n^2) is a good example of this. Turing Equivalency of Digital Computers. Finally, there is an equivalence between TMs and standard digital computers. In one direction it seems clear that we could all simulate a TM in our favorite programming language. In the other direction, we illustrate how the basic components of interpreting a low level machine language program could be simulated on a multi-tape machine (which is equivalent to some very complicated single tape TM). The basic components of a digital computer include (a) a central processing unit and (b) a memory; these are singled out in showing that TMs are equivalent to digital computers in terms of what can be computed in principle. Examples have been given that show TMs recognizing and enumerating various languages and computing different functions. But something important is missing in a compelling argument for equivalence. Just as there are “universal” computer programs that can translate/compile/interpret other computer programs, there is a universal TM, call it MU, that can interpret or simulate another TM, Mk, on a given input string to Mk. 
In order for a TM, Mk, to be input to MU, we need to encode Mk as a string (because any TM, MU included, requires a string as input that is used to initialize MU's tape). The language accepted by MU, call it LU, is of the form { (<Mk>$<w>) | <Mk> is a string encoding of a TM, Mk, <w> is a string that is input to Mk, and $ is a delimiter separating the two substrings -- the Mk and its input}. Turing Machines and Language Classes. As already noted, TMs are recognizers of unrestricted languages. Given an unrestricted grammar, G, we can design a TM, M, such that L(G) = L(M). The easiest strategy for constructing such a recognizer, though not the most efficient, is to design a TM recognizer that calls a subroutine TM that is an enumerator of L(G), as already discussed, such that given a string w, if the TM enumerator eventually generates w, then the TM recognizer accepts w as a member of L(G), else the TM recognizer rejects w as a member of L(G). Recursively enumerable (RE) languages. Because TMs can enumerate the members of any language defined by an unrestricted grammar using, for example, a breadth-first enumeration strategy, the languages recognized by TMs are also referred to as the "recursively enumerable" languages. "The recursively enumerable (RE) languages are synonymous with the unrestricted languages." But what happens when a TM recognizer of an RE (aka unrestricted) language is given a string that is not in the language? Ideally, and in most cases with a properly designed TM, the TM will reject "w" and halt. But in the case of some strings that are not in the language, L(G) = L(M), the recognizer may not halt, but may run forever. To intuit this possibility, recall that an unrestricted grammar need not be a noncontracting grammar, and if so, a TM constructed from a "contracting grammar" will result in paths in a breadth-first enumeration that can shrink in size. 
This complicates the test of when a string "w" of length |"w"| is not a future sentential form on an enumeration path, and in some cases there will be no such test. To repeat, for some RE languages there is no TM that will halt in all cases where an input string is not in the TM's language. Recursive languages. A TM (or more generally, a procedure) that halts on all inputs, be the input string a member of the TM's language or not a member, is called an "algorithm". The language of a TM that always halts is called a "recursive language". The recursive languages are a proper subset of the RE languages. The recursive languages are said to be decidable or solvable, since TMs to recognize them always halt with a decision of accept or reject. RE languages that are not recursive are said to be undecidable or unsolvable, since there are some cases in which their TM recognizers will not halt with a reject decision. Non-RE languages are undecidable (aka unsolvable) in a second, even stronger sense -- there is no TM at all for recognizing all strings in the language in both the accept and reject cases. We return to issues of undecidability/unsolvability and decidability/solvability when we discuss the properties of the RE languages and properties of the recursive languages, respectively. Languages that are not RE. There are languages that are not recognized by any Turing machine; that is, there are languages that are not recursively enumerable (not RE, and thus not recursive either). One might ask what languages could possibly be non-RE. Intuitively, examples of non-RE languages will be those for which there is no basis by which a recognizer (or enumerator) can be constructed. 
A language in which the members are selected randomly would be an intuitive illustrative example: LR = {w | w in Σ* and random(w) == true} could be formed by enumerating the strings in Σ* in canonical order and randomly deciding in each case whether a string, w, is in the language or not. The argument that a recognizer can't be built for this language assumes that random(w) is applied in the recognizer with no memory of its application in the creation of the language. And we can't simply remember the members, since there are an infinite number. In contrast, if random(w) were a pseudorandom process then such procedures are repeatable and we could implement a recognizer in that case, but if we have a truly random process then no TM recognizer can be built and the language is not RE. In addition to "random languages" there are demonstrably other non-RE languages. A demonstration of non-RE languages uses a diagonalization argument. Consider Figure TableofTMLanguages and recall that we can encode TMs as strings. Each row corresponds to the language represented by one TM encoding -- the actual encoding is not shown but is not important to the argument. We only show a label for the TM. If a row does not represent a valid TM encoding then it's assumed to be an encoding of a "dummy" machine that represents the empty language (i.e., a row of all 0s for j formula_159 1). The '1' values in cells of the table, (i, j), indicate that TMi accepts string j. The diagonal represents the set of encodings of TMs that accept their own encodings. That is, a '1' in (4, 4) says that TM4 accepts TM4's encoding. It's possible that the diagonal corresponds to a row in the table, which would therefore represent the TM that represents the language of TMs (encodings) that accept "themselves". 
But if we complement the diagonal, as shown in Figure DiagonalLanguage, then that complemented diagonal, formula_160d, is one that differs with every other language in the table by at least one cell -- and so formula_160d cannot correspond to a row in the table, and therefore there is no TM that represents formula_160d, the language of TM encodings that do "not" accept themselves. formula_160d = {<Mk> | <Mk> is not accepted by Mk} is not RE. There are other languages that are not RE. The language of string encodings of TMs for recursive languages is not RE. And thus the language of string encodings of TMs for languages that are not recursive is also not RE (p. 187, second edition). Languages that are not RE are said to be undecidable or unsolvable, but in an even stronger sense than RE languages that are not recursive. In the former case, there is no TM recognizer at all that precisely represents the language, let alone one that halts in all cases with either accept or reject decisions. There might be TMs that are approximate representations of a non-RE language, but there is no TM that precisely represents a non-RE language. Ld and Lu are RE but not recursive. The language LU is recursively enumerable, but is not recursive. This means that the universal TM halts on accept, but does not halt on reject in all cases. Intuitively, if the argument to the universal TM, Mu, doesn't halt, then Mu doesn't halt either. Any programmer who has ever coded an infinite loop will recognize this. But to be convincing, can we identify a TM that doesn't halt in all cases, thus representing an RE language (because there is a TM) that is not recursive (doesn't halt in all cases)? Recall from the section above that formula_160d = {<Mk> | <Mk> is NOT accepted by Mk} is not RE. The complement of this language is formula_165d = {<Mk> | <Mk> is accepted by Mk} (i.e., the diagonal in Figure TableofTMLanguages and the left table of Figure DiagonalLanguage). 
In the previous section we noted that formula_165d could correspond to a row in Figure TableofTMLanguages, and indeed we can build a TM, Md, that recognizes formula_165d by providing the universal TM with the input (<Mk>$<Mk>), and if the universal TM accepts, then <Mk> is in formula_165d = {<Mk> | <Mk> is accepted by Mk}, else <Mk> is not in formula_165d. The existence of Md indicates that formula_165d is RE. Is formula_165d recursive, however? The answer must be no, because if formula_165d were recursive then formula_160d would necessarily be recursive as well. To preview a discussion later in "Properties of Recursive Languages", if L is a recursive language, then there is a TM, call it ML, that recognizes L and that is guaranteed to halt in all accept cases and in all reject cases. We can create a new TM, ML', that calls ML as a subroutine. If ML returns accept for an input string, then ML' returns reject. If ML returns reject for an input string, then ML' returns accept. ML' accepts the complement of L, and always halts, so the complement of L is recursive. So formula_165d is a language that is RE but not recursive, since if formula_165d were recursive then formula_160d would be recursive too, and we know formula_160d isn't recursive (it's not even RE). We can say therefore that Lu isn't recursive either, since it won't halt when its argument is a non-halting TM like Md. Reductions. A reduction is a kind of translation. When we have talked about translations previously, as in the case of inductive proofs or ..., we generally used steps in the translation that were symmetric, and each step yielded an equivalent form, so the direction of a demonstration is arbitrary, at least formally. But reduction doesn't require equivalence necessarily, and direction is important. Rather, you can think of a reduction as showing a one way implication. 
If you hear "Reduce problem P1 to problem P2" in computational theory it means to "find a means of determining a solution for problem P1 using a means of determining a solution for P2". In the previous section on showing formula_165d and formula_165u were not recursive, we reduced the problem of recognizing formula_165d to the problem of recognizing formula_165u, as illustrated in Figure ReductionExample1. In this example, input to TM Md was an encoded TM <Mk>. This input was preprocessed into <Mk>$<Mk>, which was passed to Mu. Whatever means for determining solutions you devise, the reduction should have the property that for every instance of the P1 problem (e.g., membership in formula_165d) that should return a ‘yes/accept’, the corresponding instance of the P2 problem (e.g., membership in formula_165u) returns a ‘yes/accept’, and for all instances of P1 that should return a ‘no/reject’, so does P2 for the corresponding instances. In a reduction from P1 to P2, you can think about the P2 solution procedure (e.g., Mu) as a subroutine of the P1 solution procedure (e.g., Md), albeit a subroutine that does the bulk of the work. Again, we are not showing P1 and P2 are equivalent, but only that one problem solution procedure can be found using another problem solution procedure. In a reduction from P1 to P2, we can also say colloquially that P2 (e.g., membership in Lu) is "at least as hard as" P1 (e.g., membership in Ld). Figure ReductionInProof also gives us an alternative way to prove, by contradiction, that Lu is not recursive (i.e., undecidable) given that formula_165d is not recursive. Of the four logically equivalent ways for thinking about how reduction can be used in a proof by contradiction, the first (i.e., P2 formula_98 R formula_10 P1 formula_98 R) follows most intuitively from the reduction in Figure ReductionExample1 where we reduce the problem of membership in formula_165d to the problem of membership in formula_165u. 
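The shape of this reduction -- preprocess the input, then call the "universal" procedure as a subroutine -- can be sketched in Python with a toy stand-in for TMs. Here an "encoded machine" is just the source text of a Python predicate, and `mu` plays the role of the universal machine; all of these names are illustrative stand-ins of my own, not the formal TMs of the text.

```python
# Toy model: an "encoded machine" <M> is the source of a Python predicate,
# and mu simulates it on an input -- a stand-in for the universal TM Mu.
def mu(machine_encoding: str, w: str) -> bool:
    return eval(machine_encoding)(w)

# Md reduces "does M accept its own encoding?" to membership in Lu:
# preprocess the input <M> into the pair (<M>, <M>) and hand it to mu.
def md(machine_encoding: str) -> bool:
    return mu(machine_encoding, machine_encoding)

# A "machine" that accepts exactly the strings containing 'lambda'.
m = "lambda w: 'lambda' in w"
# m's own encoding contains 'lambda', so m accepts itself.
assert md(m) is True
```

The point of the sketch is only the plumbing: `md` does no real work of its own beyond duplicating its input, which is exactly why a halting `mu` would make `md` halt as well.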
In that reduction we assume that formula_165u is recursive, and thus Mu halts in all cases. But if that were so then the reduction indicates that Md would also halt in all cases, which we know cannot be the case. We are left with a contradiction, and so our assumption that Lu is recursive must be wrong. Linear bounded TMs and the CSLs. Linear Bounded Turing Machines are a form of restricted TMs that exactly recognize the class of context sensitive languages (CSLs). A linear bounded TM adds two special symbols to Σ, one that delimits the left end of the input and another that delimits the right end of the input; these special symbols are used only in that way and they cannot be overwritten. A linear bounded TM is a nondeterministic single tape TM that never leaves the tape cells on which its input was placed. The input is {w | w is in Σ*, exclusive of the two special end markers}. Thm: If L is a CSL, then L is accepted by a linear bounded TM. Intuitively, we know that L can be enumerated by a CSG in canonical order and the derivation of a string cannot include a sentential form that is longer than the string. Thm: If L = L(M) where M is a linear bounded TM then L (excluding ε, if necessary) is a CSL. The CSLs are a proper subset of the recursive languages. A language that is recursive but not context sensitive, thus showing the proper subset relationship, is described next, following Hopcroft and Ullman. A Recursive Language that is not a CSL. Develop a binary encoding scheme for the type 0 grammars (the grunt work is not important to understand the argument), and these can be numbered from 1 to infinity (which will include “dummy” grammars, just as an earlier argument acknowledged “dummy” TMs). Among the type 0 grammars it's easy to recognize whether the "ith" grammar in the ordering is a CSG (its productions’ right-hand sides will never be smaller in length than the corresponding left-hand sides). 
Define language L = {wi | wi is NOT in L(Gi)}. That is, L is the language of binary strings wi that encode CSGs Gi where wi is not generated by Gi (i.e., the language of grammars that do not generate “themselves”, so to speak). Since Gi is a CSG, there is an algorithm (always halts) that determines whether wi is in L(Gi). Thus, L is recursive, since given a string wi the test for membership of wi in L will always halt -- if wi does not encode a CSG then reject wi as a member of L, and if wi encodes a CSG then if Gi generates wi then reject wi as a member of L (and halt), else accept wi and halt. But L itself is not generated by any CSG. A proof by contradiction shows that L cannot be generated by a CSG. Suppose that there was a CSG Gk that generated L, so L = L(Gk). If wk were in L(Gk) we get a contradiction, because L is the language of binary encoded CSGs that are not generated by “themselves”. Consider then if wk is not in L; then wk is not in L(Gk), and again a contradiction, because L is defined so as to include wk in that case. Thus, L is a recursive language that is not a CSL. If the reasoning sounds like an Escher painting initially, I understand, but reflect on it. Many proofs by contradiction at this level have the feel of paradoxes, though they are not. Rather than simply becoming comfortable with the logic of paradoxical-sounding proofs by contradiction, however, it's often instructive to dig deeper. To say that L is not a CSL is to say that there is no non-contracting grammar that generates L -- none, nil, nada! Remember that CSGs are a normal form of non-contracting grammars. Is there something about L = {wi | wi is NOT in L(Gi) and binary encoded Gi = wi} that precludes a non-contracting grammar? Carrying the implications further, for every Type 0 (unrestricted) grammar that does generate L, there must exist a wj in L whose derivation uses a contracting production -- no non-contracting grammar will do. How do we connect this reasoning to the fact that the CSLs are non-contracting? 
Does this suggest other "recursive" but not CSL languages, like planning, in which you must sometimes undo subgoals? Exercises, Projects, and Discussions. FA Exercise 1: Consider the FA of Figure FAExample3. Rewrite this FA to explicitly show all states and transitions. FA Exercise 2: Describe the language that the FA of Figure FAExample3 represents. FA Exercise 3: (adapted from Hopcroft and Ullman, 1979, p. 48; Hopcroft, Motwani, and Ullman, 2007, p. 53) Define an FA that accepts strings of 0s and 1s that contain three consecutive 1s. FA Exercise 4: (due to Hopcroft and Ullman, 1969, p. 44) Define an FA that accepts strings such that every 0 is followed immediately by a 1. FA Exercise 5: Have a discussion with an AI large language model (LLM) on formalizing and generalizing the discussion under "Equivalence of NFAs and DFAs/Systematic and Judicious Growth". Your goal is to obtain a concise algorithm, or runnable program if you wish, for translating an NFA to a DFA using the strategy that is discussed. You should not accept out of hand whatever description you get from the LLM, but check it and iterate until you feel confident that you can explain the algorithm and inputs/outputs to the instructor, TA, or other student who is familiar with the area, but needs help with this particular algorithm. FA Exercise 6: (adapted from Hopcroft and Ullman, 1979, p. 48; Hopcroft, Motwani, and Ullman, 2007, p. 54) Construct an NFA that accepts the set of all strings of 0s and 1s such that the 6th symbol from the right end is 1. Hint: Because the 6th symbol from the right end will be read by the NFA BEFORE it reaches the right end, the NFA cannot know whether the current '1' is 6th from the right or not. So use the guessing strategy described in "Equivalence of NFAs and DFAs/Retrospection". FA Exercise 7: (adapted from Hopcroft and Ullman, 1979, p. 48; Hopcroft, Motwani, and Ullman, 2007, p. 54, pp. 64-65) Translate the NFA that you develop in FA Exercise 6 to a DFA. 
Choose an unambiguous way of representing the DFA. FA Exercise 8: Translate the DFA you construct in FA Exercise 7 to a regular grammar. FA Exercise 9: (due to Hopcroft and Ullman, 1979, p. 48; Hopcroft, Motwani, and Ullman, 2007, p. 54) Give an FA that accepts all strings beginning with a 1 that, when interpreted as a binary integer, are a multiple of 5. For example, 101, 1010, and 1111 are accepted and 0, 100, and 111 are not. FA Exercise 10: (due to Hopcroft and Ullman, 1979, p. 48; Hopcroft, Motwani, and Ullman, 2007, p. 53) Construct an FA over {0, 1} that accepts all strings such that each block of 5 consecutive symbols contains at least two 0s. Use a sliding window interpretation of the problem, so that there are two blocks of 5, for example, in a 6-symbol string: symbols 1-5 and symbols 2-6. FA Exercise 11: Continue processing the input of '0112' in Figure NFAProcess2c, and confirm '0112' is accepted by the NFA. Also confirm that '122' is accepted, and that '120' is not accepted. FA Exercise 12: (a) (due to Hopcroft and Ullman, 1979, p. 48) Give an NFA (which is not also a DFA) for the language described in Figure NFAExample3, except for i formula_159 0, instead of i formula_159 1; (b) Give a DFA for the language of (a). FA Exercise 13: Give an NFA (that is not also a DFA) that accepts the same language as Figure NFAExample2, but with no formula_19 transitions. Can you write a general procedure for translating an NFA with formula_19 transitions to an NFA without formula_19 transitions, and that is "not necessarily" a DFA? RE Exercise 1: There is a pedagogical case, as well as a practical case, to be made for starting this chapter with regular expressions, showing their equivalence to regular grammars, then presenting the generalized version of FAs with transitions labeled by arbitrary REs, showing that these are equivalent to FAs with only primitive REs (i.e., alphabet symbols) labeling the transitions, and continuing on. 
Carry out a thought experiment in which you redesign the chapter with REs labeling FA transitions. How would NFAs be defined by presenting the generalized FAs first? How might demonstrations of equivalence be affected? How might design and development of FAs be impacted by allowing designs to start with generalized FAs? RE Exercise 2: Translate Figure DFAtoRE2 by eliminating state C first. RE Exercise 3: (due to Hopcroft and Ullman, 1979, p. 50; Hopcroft, Motwani, and Ullman, 2007, p. 92) Construct an RE for the set of all strings over {0,1} such that "the number of 0s and number of 1s are equal" and "no prefix has two more 0s than 1s and no prefix has two more 1s than 0s" (i.e., for any prefix the number of 0s and number of 1s differ by no more than one). RE Exercise 4: Using the textual descriptions and examples as a guide, write an algorithm in pseudocode or a programming language of your choice to translate DFAs to REs. Demonstrate the algorithm on a few test cases. If you use a large language model, ensure that you understand the LLM's result, and revise it as necessary. Show the prompts you used. PDA Exercise 1: Give a pushdown automaton for {w | w is a string in (0+1)* and w consists of an equal number of 0s and 1s}. Hint: you can do this with a deterministic PDA. PDA Exercise 2: Give a pushdown automaton for {wcwR | c is a character and w is a string in (a+b)*}. PDA Exercise 3: Give a pushdown automaton for {cwwR | c is a character and w is a string in (a+b)*} that accepts by empty stack. Hint: you can revise the PDA defined by Figures PDAExample1a and PDAExample1b if you wish. PDA Exercise 4: Construct a PDA for palindromes over {a, b}. PDA Exercise 5: Just as we gave an FA explanation of the Pumping Lemma for RLs, give a PDA explanation of the PL for CFLs. TM Exercise 1: Define a formalism for a PDA+ that has two stacks instead of only one stack. Show that the two-stack PDA+ is equivalent in computational power to a TM.
Theory of Formal Languages, Automata, and Computation/Properties of Language Classes. A comprehensive picture of the hierarchy of language classes is shown in Figure LanguageHierarchy, along with the briefest reference to concepts and factoids that have been covered. It is intended as a reference that can be reviewed quickly to good effect, possibly before an exam, or perhaps years later to confirm a fleeting memory. Kinds of Properties. A property of a "given language" is a statement or predicate that is true of the language. Suppose the language is the set of prime numbers. One property is that there is an algorithm for recognizing any input as a prime number or not. That is, the language of prime numbers is recursive. We've spent a lot of time on such properties -- that a language is in a class of languages, or that a particular language includes a particular string or not. We've covered other properties of specific languages too, perhaps only briefly, such as the property that a given language is "inherently ambiguous" or not. A property of a "language class" is a statement or predicate that is true of all members of the class. We have not spent much time thus far on properties of language classes, other than definitional properties, which we discuss below. Closure Properties. To preview one example, let's consider the class of CFLs. ∀L L formula_1 CFLs → P(L), where L is a language and P is a property. A property of the CFLs is that if L is a CFL then L* is a CFL. This is an example of a "closure property" -- that the CFLs are closed under Kleene Closure, aka repetition. As another example of a closure property, again of the CFL class, ∀L1,L2 (L1 formula_1 CFLs ∧ L2 formula_1 CFLs) → R(L1, L2), where L1 and L2 are languages and R is a property, such as union: if L1 and L2 are CFLs then L1 formula_4 L2 is a CFL; the CFLs are closed under union. 
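The union example can be made concrete. The following Python sketch represents CFGs as dictionaries (my own encoding, not notation from this text), builds a union grammar by adding a new start symbol with a production to each original start symbol, and brute-force enumerates short strings to check the result. The enumerator terminates here because every production adds terminals; it is not meant for grammars with ε- or unit-production cycles.

```python
from collections import deque

def generate(grammar, start, max_len=4):
    """Brute force: expand the leftmost variable breadth-first, collecting
    all terminal strings of length <= max_len."""
    results, frontier = set(), deque([(start,)])
    while frontier:
        form = frontier.popleft()
        var_positions = [i for i, s in enumerate(form) if s in grammar]
        if not var_positions:
            word = ''.join(form)
            if len(word) <= max_len:
                results.add(word)
            continue
        # Prune sentential forms whose terminals alone already exceed max_len.
        if sum(len(s) for s in form if s not in grammar) > max_len:
            continue
        i = var_positions[0]
        for prod in grammar[form[i]]:
            frontier.append(form[:i] + prod + form[i + 1:])
    return results

# G1 generates {a^n b^n | n >= 1}; G2 generates {c^n d^n | n >= 1}.
g1 = {'S1': [('a', 'S1', 'b'), ('a', 'b')]}
g2 = {'S2': [('c', 'S2', 'd'), ('c', 'd')]}

# Union construction: a new start symbol with a production to each start.
g_union = {'S': [('S1',), ('S2',)], **g1, **g2}

strings = generate(g_union, 'S')
assert 'ab' in strings and 'aabb' in strings     # from L1
assert 'cd' in strings and 'ccdd' in strings     # from L2
assert 'ad' not in strings                       # no mixing of the two
```

The same dictionary representation also makes the concatenation and Kleene-closure constructions discussed later in this chapter one-line changes to the start productions.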
But if we say that a property does not hold for a language class, such as the CFLs, this means that the property is not true of all CFLs, or equivalently, there exists a CFL for which the property does not hold. That is, for CFLs, ¬(∀L L formula_1 CFLs → P(L)) ≡ ∃L ¬(L formula_6 CFLs formula_7 P(L)) ≡ ∃L (L formula_1 CFLs formula_9 ¬P(L)), where L is a language and P is a property. For example, if L is a CFL, then L's complement, formula_10, is not necessarily a CFL; the CFLs are not closed under complement, or put another way, the class of CFLs does not possess the closed-under-complement property. It's important to recognize that this is a statement about a "class" of languages, the CFLs, not a statement about a particular language. Thus, it's not inconsistent to say that the class of CFLs does not possess the closed-under-complement property, but that the complement of a particular CFL is also a CFL. We have talked extensively about certain definitional properties of various language classes already -- the type of grammar that generates languages in the class, the kind of automata that recognize them, and other representations and patterns of languages in the class (e.g., as stated in the Pumping Lemma). These are all examples of definitional properties of selected language classes. A new focus in this chapter is closure properties (or not) of language classes. This text focuses on six closure properties, but there are many other closure properties that are addressed in other texts. A preview of our coverage is shown in Figure ClosurePropertiesSummary. Decision Properties. In addition to closure properties, we will also discuss selected "decision properties" in this chapter. A decision property corresponds to a yes/no question about a language class that is decidable in all cases. We have already covered in considerable detail, for example, membership decision properties. 
Every language class we have studied, except RE, has the decision property that a test of membership in a language of the class is an algorithm. A test of membership is decidable for regular languages, DCFLs, CFLs, CSLs, and "recursive" languages, and thus each of those classes has the test-of-membership decision property. The "recursively enumerable" languages generally, notably to include those languages that are not "recursive", do not have the test-of-membership decision property because the question of membership in an arbitrary "recursively enumerable" language is undecidable. We've spent so much time on this question already that we don't repeat the analysis in discussing each language class in this chapter, but we do include it in the summary of decision properties in Figure DecisionPropertiesSummary. Recall that a question is decidable if there exists a TM (or computer program) that, given a requisite number of language specifications (e.g., as grammars or automata) as input, correctly answers the question that the TM was designed to answer. As yet another example of a decision property, in this case of the regular languages, recall that we have already sketched an algorithm for answering whether two regular languages, as represented finitely through DFAs, NFAs, RegExps, or RGs, are the same language. So if we are given a RegExp that represents a language and an NFA that represents a language, then clearly the languages represented by each are regular, and we can translate each language specification into a DFA, minimize each of the resulting DFAs, and see if the two minimal-state DFAs are identical, aside from state names. This process can be implemented as a TM or computer program that correctly answers whether its two input language specifications represent the same language or not. 
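The procedure just described compares minimized DFAs. As a sketch of how such an equivalence test can be programmed, the following Python uses a different but standard check -- explore the product of the two DFAs and look for a reachable pair of states on which the machines disagree about acceptance. The dictionary encoding of DFAs is my own, not this text's notation.

```python
from collections import deque

def dfa_equivalent(d1, s1, f1, d2, s2, f2, alphabet):
    """Two DFAs are equivalent iff no reachable pair of states (one from
    each machine) is accepting in one DFA but not the other."""
    seen, frontier = {(s1, s2)}, deque([(s1, s2)])
    while frontier:
        q1, q2 = frontier.popleft()
        if (q1 in f1) != (q2 in f2):
            return False           # a distinguishing string reaches this pair
        for sym in alphabet:
            nxt = (d1[(q1, sym)], d2[(q2, sym)])
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Two DFAs for "binary strings ending in 1", with different state sets;
# dB carries a redundant extra state 2.
dA = {('e', '0'): 'e', ('e', '1'): 'o', ('o', '0'): 'e', ('o', '1'): 'o'}
dB = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 2, (1, '1'): 1,
      (2, '0'): 2, (2, '1'): 1}
assert dfa_equivalent(dA, 'e', {'o'}, dB, 0, {1}, {'0', '1'})
```

Like the minimize-and-compare procedure, this check always halts, because the product machine has finitely many state pairs.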
In my example I seemed to suggest that the inputs can be different in form -- one as a RegExp and one as an NFA -- but for purposes of implementation we could insist on both inputs as RegExps, or both as NFAs, or both as DFAs, or both as RGs; or we could use TMs as the way we represent all languages that are input to decision questions. The "class of regular languages has the property that the question of equivalence is decidable" because we can test for equivalence by using a single correct TM for all (pairs of) regular languages that are inputs to the TM. In contrast, if we say that a decision question is undecidable for a class of languages then that means that there does not exist a TM that correctly answers the question (and halts) for all languages in the class. For example, it is undecidable whether two CFLs, as represented finitely by CFGs or PDAs, or TMs, are the same language. But we want to be careful about language here. We've said earlier that a property of a class of languages is a statement that is true of all languages in the class. So rather than saying that the "class of CFLs has the property that the question of equivalence is undecidable", which you might see in some sources, we'll say that "the class of CFLs does not have the property of decidability on the question of equivalence". This is consistent with the discussion on closure properties as well. It is still the case that particular pairs of CFL specifications can be shown to be equivalent using a TM created to answer the equivalence question for CFLs, but no TM can be found that is always correct and always halts. As with closure properties we will only reference a limited number of decision properties, though when we reach the RE languages we will describe Rice's theorem and its astounding conclusion that an infinite number of decision questions are undecidable in the case of the RE languages generally. Properties of Regular Languages. 
If we want to show that a language is regular it will be possible to construct an FA, regular expression, or regular grammar that demonstrably represents the language, perhaps verified by a proof by induction or contradiction. A construction argument, whether followed by an auxiliary proof or not, often represents a kind of gold standard of demonstration. Closure Properties of Regular Languages. Closure properties can also be used to show a language, L, is regular (or in some other class of languages for that matter). We can do this by applying transformations known to preserve regularity to a language, X, known to be regular (e.g., by construction), until reaching the target language, L, thus demonstrating that X formula_1 RL formula_12 L formula_1 RL. In addition, closure properties can be handy in showing that a language, L, is "not" regular, by applying transformations known to preserve regularity to L, until a language, X, is derived that is known through some other demonstration to "not" be regular. Thus, the original language, L, must not have been regular either (i.e., demonstrating X formula_6 RL formula_12 L formula_6 RL). The regular languages are closed under complement, union, intersection, concatenation, Kleene closure, and substitution. It will be interesting to compare the closure properties of the regular languages with those closure properties of more inclusive languages – context free, context sensitive, and unrestricted. Regular Languages are closed under Complement. Theorem: If L is a regular language then the complement of L, formula_10, is regular. Since L is regular, there is a DFA, M, that recognizes L. Construct the DFA for formula_10, call it formula_19, by simultaneously changing all accepting states in M to non-accepting states, and changing all non-accepting states in M to accepting states. If formula_19 accepts a string w then w must have taken a path to an accepting state of formula_19, which was a non-accepting state in M, so w is not in L. 
If formula_19 did not accept a string w then w must have taken a path to a non-accepting state of formula_19, which was an accepting state in M, so w is in L. Regular Languages are closed under Union. Theorem: If L1 and L2 are both regular languages, then L1 formula_4 L2 is a regular language. Since L1 and L2 are each regular languages, there are DFAs M1 and M2, respectively, that recognize each. Construct an NFA with formula_25 transitions, M1formula_42, for L1 formula_4 L2 as follows: create a new start state with formula_25 transitions to the start states of M1 and M2, and let the accepting states of the new machine be the accepting states of M1 and M2. M1formula_42 is an NFA that recognizes L1 formula_4 L2 and thus L1 formula_4 L2 is a regular language. Regular Languages are closed under Intersection. Theorem: If L1 and L2 are both regular languages, then L1 formula_36 L2 is a regular language. Suppose L1 and L2 are regular languages over alphabet Σ. Then L1 formula_36 L2 is regular. This must be so since L1 formula_36 L2 = ~(~L1 formula_4 ~L2) and the regular languages are closed under complement and union. But we can also directly construct a DFA that accepts L1 formula_36 L2 from the DFAs that must exist for L1 (M1 = (Q1, Σ, formula_411, q1, F1)) and L2 (M2 = (Q2, Σ, formula_412, q2, F2)), respectively. The following is adapted from (pp. 59-60, ; p. 137, ). M1∩2 = (Q1×Q2, Σ, formula_41, [q1, q2], F1×F2) accepts L1 formula_36 L2. For each pair of states from M1 and M2 respectively, define a state in M1∩2. For each of these pairs of states, on each symbol "a" in Σ, define a transition in M1∩2: formula_41([q1i, q2k], "a") = [formula_411(q1i, "a"), formula_412(q2k, "a")]. As an example, consider {0m | m is evenly divisible by 2 and evenly divisible by 3}. It's easy to build a DFA that accepts an even number of 0s, and it's easy to build a DFA that accepts an integer multiple of 3 number of 0s. A DFA for their intersection is shown in Figure DFAofIntersection, and thus the intersection of the two regular languages is regular. 
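The product construction on this same example (0s counted mod 2 and mod 3) can be sketched in Python. The dictionary encoding of DFAs is my own, not this text's notation; the construction itself is the pairing of states and transitions just described.

```python
from itertools import product

# DFA M1: number of 0s is even (states count mod 2).
d1 = {(q, '0'): (q + 1) % 2 for q in range(2)}
# DFA M2: number of 0s is divisible by 3 (states count mod 3).
d2 = {(q, '0'): (q + 1) % 3 for q in range(3)}

# Product DFA: states are pairs; each transition advances both components.
delta = {((q1, q2), '0'): (d1[(q1, '0')], d2[(q2, '0')])
         for q1, q2 in product(range(2), range(3))}
start, accepting = (0, 0), {(0, 0)}   # F1 x F2 with F1 = {0}, F2 = {0}

def accepts(w):
    state = start
    for sym in w:
        state = delta[(state, sym)]
    return state in accepting

# Accepted strings of 0s are exactly those whose length is a multiple of 6.
assert accepts('0' * 6) and accepts('')
assert not accepts('0' * 4)
```

The six reachable product states correspond to the states of the DFA in Figure DFAofIntersection.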
Regular Languages are closed under Concatenation. Theorem: If L1 and L2 are both regular languages, then L1L2 is a regular language. The concatenation of languages L1 and L2, written L1L2, is {w1w2 | w1 formula_1 L1 and w2 formula_1 L2}. That is, any word from L1 followed immediately by any word of L2 is a string in the language of the concatenation. Since L1 and L2 are each regular languages, there are DFAs M1 and M2, respectively, that recognize each. Construct an NFA M12 with formula_25 transitions as follows: the start state of M12 is the start state of M1, the accepting states of M12 are the accepting states of M2, and there is an formula_25 transition from each accepting state of M1 to the start state of M2. M12 is an NFA that recognizes L1L2 and thus L1L2 is a regular language. As a matter of interest and subsequent utility, we can extend concatenation to a sequence of more than two languages, so that L1L2...Lm = {w1w2...wm | w1 formula_1 L1 and w2 formula_1 L2 and ... and wm formula_1 Lm}. Regular Languages are closed under Kleene Closure. Theorem: If L is a regular language then L* is a regular language. We introduced Kleene closure in defining regular expressions, and also called it the repetition operator. The Kleene closure of a language L, L*, equals {w1w2w3...wm | for all m formula_55 0 and each wk formula_1 L}. That is, strings in L* are each a concatenation of an indefinite number of strings from L. If L is a regular language then there is a DFA, M, that recognizes it. To construct an NFA with formula_25 transitions that recognizes L*, add a new start state that is also an accepting state, with an formula_25 transition to the start state of M, and add an formula_25 transition from each accepting state of M back to the start state of M. The resulting NFA accepts L* and so L* is a regular language. Regular Languages are closed under Substitution. Substitution is the most complicated of the operations that we will consider. Suppose L is a language, regular or not, over alphabet Σ. Suppose further that for each symbol in Σ, xk, we associate a language Lxk. We will speak of the substitution operation, "Subst", as being applied to each symbol of Σ, to each string of L, and to L itself. If xk is a symbol in Σ then Subst(xk) is the strings in Lxk, i.e., simply Lxk itself. 
If w = a1a2...am is a string in L, then Subst(w) = Subst(a1)Subst(a2)...Subst(am); that is, Subst(w) is the concatenation of the languages (see above) associated with the various symbols in w. Finally, Subst(L) = {Subst(w) formula_60 w formula_1 L}; that is, Subst(L) is the set of strings resulting from the substitution applied to all strings in L. Importantly, the alphabets for the various languages, L and the Lxk's, don't have to be the same. The alphabet for the language Subst(L) is the union of the alphabets of all the Lxk languages, and that alphabet may or may not share any symbols with the alphabet of L. If L is the set of strings of 0s and 1s with at least two consecutive 0s, and L0 = {w formula_60 w matches (formula_63+formula_64)+} and L1 = {formula_25, a, ba}, then Subst(0) = L0 = {w formula_60 w matches (formula_63+formula_64)+} and Subst(1) = L1 = {formula_25, a, ba}. Subst(100) = {w formula_60 w in Subst(1)Subst(0)Subst(0)} = {formula_63formula_63, formula_63formula_64, formula_64formula_63, formula_64formula_64, aformula_63formula_63, aformula_63formula_64, aformula_64formula_63, aformula_64formula_64, baformula_63formula_63, baformula_63formula_64, ..., aformula_64formula_63formula_64formula_63formula_64, ...}. That is, Subst(100) is all possible ways of drawing a string from L1, i.e., {formula_25, a, ba}, followed by two draws from L0 = {w formula_60 w matches (formula_63+formula_64)+}. Subst(L) is the set of strings over an alphabet of {a, b, formula_63, formula_64} where each b must be followed immediately by an 'a' (why?), and there must be at least one consecutive pair of formula_63 and/or formula_64 symbols (why?). Theorem: If L is a regular language over an alphabet Σ and for each symbol, xk in Σ, there is an associated regular language Lxk, then Subst(L) is a regular language. If L and all Lxk's are regular languages then there are DFAs that recognize L and each of the Lxk's. Call these DFAs M (for L) and Mxk (for each Lxk). To construct an NFA with formula_25 transitions for Subst(L), replace each transition of M on a symbol xk with a fresh copy of Mxk: add an formula_25 transition from the transition's source state to the start state of the copy, and an formula_25 transition from each accepting state of the copy to the transition's target state. 
The resulting NFA recognizes Subst(L) and thus Subst(L) is a regular language. Decision Properties of Regular Languages. The decision properties of the regular languages include tests of equivalence, emptiness, and whether the language is Σ*. Equivalence test of two regular languages is decidable. Theorem: If L1 is a regular language and L2 is a regular language then there is an algorithm that determines whether L1 and L2 are the same language. We described the proof sketch above under Kinds_of_properties/Decision_properties and do not repeat that here. Test of whether a regular language is empty is decidable. Theorem: If L is a regular language then there is an algorithm that determines whether L is the empty language ({ } or formula_102). There is a unique minimal-state DFA over an alphabet, Σ, that accepts the empty language -- it is a DFA with only one state, which is a non-accepting state, and all transitions for all alphabet members loop back to that one state. Given a finite representation of a regular language, use the same process as described earlier of translating the input to a DFA, minimize the DFA, and see if it's identical to the single-state DFA just described. If the DFAs are identical then L is empty, else it's not empty. Test of whether a regular language is Σ* is decidable. Theorem: If L is a regular language then there is an algorithm that determines whether L is Σ*. Do very much as described immediately above with one small change. There is a unique minimal-state DFA over an alphabet, Σ, that accepts Σ* -- it is a DFA with only one state, which is an accepting state, and all transitions for all alphabet members loop back to that one state. Given a finite representation of a regular language, translate the input to a DFA, minimize the DFA, and see if it's identical to the single-state DFA just described. If the DFAs are identical then L is Σ*, else it's not. Properties of Context Free Languages. Closure Properties of CFLs. 
The CFLs are closed under union, concatenation, Kleene closure, and substitution. In contrast to the regular languages, the CFLs are not closed under complementation and the CFLs are not closed under intersection. CFLs are closed under Union. Theorem: If L1 and L2 are both CFLs, then L1 formula_4 L2 is a CFL. If L1 and L2 are CFLs then each has an associated CFG, GL1 and GL2, that generates the respective languages. Assume that the names of the variables in the two CFGs are disjoint (so no possibility of confusion). To construct a CFG for L1 formula_4 L2, call it GL1formula_4 L2, create a new start symbol, SL1formula_4 L2, with two associated productions, one to the start symbol of GL1 and one to the start symbol of GL2 (i.e., add SL1formula_4 L2 formula_12 SL1 formula_60 SL2). GL1formula_4 L2 is a CFG that generates L1 formula_4 L2, so L1 formula_4 L2 is a CFL. CFLs are closed under Concatenation. Theorem: If L1 and L2 are both CFLs, then L1L2 is a CFL. If L1 and L2 are CFLs then each has an associated CFG, GL1 and GL2, that generates the respective languages. Assume that the names of the variables in the two CFGs are disjoint (so no possibility of confusion). Assume the name of the start symbol for GL1 is SL1 and that the name of the start symbol for GL2 is SL2. To create a CFG for L1L2, create a start symbol SL1L2 with a single production to SL1SL2 (i.e., add SL1L2 formula_12 SL1SL2). GL1L2 is a CFG that generates L1L2, so L1L2 is a CFL. CFLs are closed under Kleene Closure. Theorem: If L is a CFL, then L* is a CFL. We've previously used the Kleene closure or repetition operator only in reference to regular languages, but the operation applies to languages of any class. L* is the set of strings w that are composed of 0 or more concatenated substrings, wi, where each wi is a string in L. If L is a CFL then there is a CFG, GL, that generates L, with a start symbol that we'll call SL. To create a CFG for L*, create a new start symbol, call it SL*, with two productions, one to formula_25 and one to SLSL*. 
(i.e., add SL* formula_12 formula_25 formula_60 SLSL*). GL* is a CFG that generates L*, so L* is a CFL. CFLs are closed under Substitution. Theorem: If L is a CFL over an alphabet Σ and for each symbol, xk in Σ, there is an associated CFL Lxk, then Subst(L) is a CFL. If L is a CFL then there is a CFG that generates L, call it GL. If all Lxk's are CFLs then there are CFGs that generate each, call them GLxk respectively. To create a CFG that generates Subst(L), GS(L), replace every instance of an alphabet/terminal symbol in the productions of GL with the start symbol for the grammar of the language corresponding to that symbol, and make the productions of GS(L) be the union of all the revised productions of GL and all the productions of all the GLxk's. GS(L) is a CFG that generates Subst(L), so Subst(L) is a CFL. CFLs are "not" closed under complementation. Intuitively, you might think that given a PDA for a language, we can just invert the accepting and non-accepting states as we did with FAs, but the added complexity of the stack makes this insufficient for showing the CFLs are closed under complement. To say that a language class is not closed under an operation is to say that "there exists" at least one language (for a unary operation such as complementation), or at least two languages for a binary operation such as intersection, whose result under the operation is not a language of the specified class. So, finding such a counterexample is sufficient for showing the CFLs are not closed under complementation, for example. But finding such a counterexample can be nontrivial. How do we show that the complement of a particular CFL is not a CFL? We would have to show that there is no CFG or PDA for it. Consider the language L = {aibjck | i,j,k formula_55 0 and i formula_119 j or j formula_119 k}. CFG Exercise 2 in chapter "Context Free (Type 2) Grammars and Languages" asked you to give a CFG for this language, thus demonstrating that it is a CFL. 
The complement of L, restricted to strings of the form a*b*c*, is {a^n b^n c^n | n ≥ 0}, which is not a CFL because ... Properties of Context Sensitive Languages. Closure Properties of CSLs. The CSLs are closed under complement, union, intersection, concatenation, and Kleene closure. However, the CSLs are not closed under substitution. The CSLs are closed under Complement. If L is a CSL then its complement, Lc, is a CSL. If L is a CSL then there is a linear bounded TM, ML, that recognizes L, and that halts on all inputs. A linear bounded TM for Lc can be created that calls ML as a subroutine, accepting if ML rejects and rejecting if ML accepts. The CSLs are closed under Union. If L1 and L2 are CSLs then L1 ∪ L2 is a CSL. If L1 and L2 are CSLs then there are CSGs, GL1 and GL2, that generate L1 and L2 respectively. To construct a CSG to generate L1 ∪ L2, copy all the productions of GL1 and GL2, with variables renamed as necessary so as not to confuse variables from different grammars, and create a start symbol SL1∪L2 for the new grammar, GL1∪L2, with two productions SL1∪L2 → SL1 and SL1∪L2 → SL2. This new grammar is a CSG, since the new productions are all noncontracting, and it generates L1 ∪ L2. The CSLs are closed under Intersection. If L1 and L2 are CSLs then L1 ∩ L2 is a CSL. The CSLs are closed under Concatenation. If L1 and L2 are CSLs then L1L2 is a CSL. The CSLs are closed under Kleene Closure. If L is a CSL then L* is a CSL. We can say instead that L+ = L* - {ε} is a CSL if it is important to exclude ε. The CSLs are "not" closed under Substitution. A table from the first edition of Hopcroft and Ullman on closure properties of languages in the Chomsky hierarchy shows that the CSLs are not closed under substitution, but the table from the second edition shows that the CSLs ARE closed under substitution. Properties of Recursive Languages.
As I have already noted under section "Turing Machines and Language Classes/Recursive languages", the definitional characteristic of the class of recursive languages is that there is a TM that recognizes the language and that is "guaranteed to halt in both accept and reject cases". Because of this guarantee, recognition of a recursive language is said to be "decidable". We also say that the recursive languages are recognized by an algorithm. Unlike all the other language classes discussed so far, there is no class of grammar that precisely delimits the recursive languages, though of course there are grammars that generate a subset of the recursive languages, notably the CSGs. Closure Properties of Recursive Languages. The recursive languages are closed under complement, union, intersection, concatenation, and Kleene closure. However, the recursive languages are not closed under substitution. Since there is no equivalent class of grammars for the recursive languages, all our arguments about closure (or not) of recursive languages will be based on TMs. The Recursive languages are closed under Complement. If L is a recursive language, then there is a TM, call it ML, that recognizes L and that is guaranteed to halt in all accept cases and in all reject cases. Create a new TM, ML', that calls ML as a subroutine. If ML returns accept for an input string, then ML' returns reject. If ML returns reject for an input string, then ML' returns accept. ML' accepts the complement of L, and always halts, so the complement of L is recursive. The Recursive languages are closed under Union. If L1 is a recursive language and L2 is a recursive language, then L1 ∪ L2 is recursive. Let ML1 and ML2 be the TMs recognizing L1 and L2 respectively, each guaranteed to halt. From ML1 and ML2 we construct an always-halting TM, ML1∪L2, for L1 ∪ L2.
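Before turning to the pseudocode representation, the complement and union constructions can be mirrored directly in runnable code. Below is a minimal Python sketch of my own; ordinary boolean functions stand in for the always-halting TMs, and all the names (complement_decider, union_decider, and the sample languages) are illustrative choices, not from the text:

```python
def complement_decider(m_l):
    """From an always-halting decider for L, build one for L's complement:
    accept exactly when ML rejects, and vice versa."""
    return lambda w: not m_l(w)

def union_decider(m_l1, m_l2):
    """From always-halting deciders for L1 and L2, build one for L1 union L2.
    Both subroutine calls are guaranteed to halt, so this halts too."""
    return lambda w: m_l1(w) or m_l2(w)

# Illustrative sample languages over strings:
even_len = lambda w: len(w) % 2 == 0   # L1: strings of even length
has_a = lambda w: "a" in w             # L2: strings containing 'a'

odd_len = complement_decider(even_len)
either = union_decider(even_len, has_a)
```

Because each component decider halts on every input, the combined deciders halt on every input as well, which is the heart of both closure arguments.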
Since TMs and computers are equivalent in terms of what can be computed, we represent ML1∪L2 in programming-language pseudocode, with the understanding that this can be translated to a TM. Each of the component TMs for L1 and L2 is used as a subroutine. The construction of ML1∪L2's behavior covers all conditions and is guaranteed to halt. The Recursive languages are closed under Intersection. If L1 is a recursive language and L2 is a recursive language, then L1 ∩ L2 is recursive. Let ML1 and ML2 be the TMs recognizing L1 and L2 respectively, each guaranteed to halt. The Recursive languages are closed under Concatenation. If L1 is a recursive language and L2 is a recursive language, then L1L2 is recursive. Let ML1 and ML2 be the TMs recognizing L1 and L2 respectively, each guaranteed to halt. At a high level, <ML1L2, w> =

BOOLEAN FUNCTION Concatenation(ML1, ML2, w) {
    FOR each of the |w|+1 ways to partition w into xy
        IF ML1(x) == accept AND ML2(y) == accept THEN RETURN accept
    RETURN reject
}

The Recursive languages are closed under Kleene Closure. If L is a recursive language then L* is recursive. Let ML be the TM recognizing L, guaranteed to halt. <ML*, w> =

BOOLEAN FUNCTION Repetition(ML, w) {
    FOR each of the 2^(|w|-1) ways to partition w into x1x2…xk
        IF ML(xi) == accept for all xi in x1x2…xk THEN RETURN accept  // can implement as a loop
    RETURN reject
}

The Recursive Languages are "not" closed under Substitution. If you are given a recursive language R, and each symbol in R's alphabet corresponds to a recursive language too, then the language that results from applying the substitution to R is not necessarily recursive. See the end-of-chapter exercises below for more. Runtime Complexity. Many students who are taking a class on formal languages, automata, and computation will have already studied algorithm efficiency, perhaps in the form of big-O notation.
Runtime efficiency is typically regarded as a characteristic of recursive languages (i.e., membership algorithms defined by always-halting TMs or computer programs), which is why we address efficiency in this chapter on properties of the recursive languages. It would make less sense to talk about efficiency in the case of undecidable problems, but a project asks you to investigate the relevance of efficiency to undecidable problems. Big-O notation is used to indicate an upper bound on runtime cost, or space requirements, or some other resource, but we will focus on time initially. Precisely, if "n" is a measure of the size of the input to a procedure, and I say that the procedure is O("f(n)"), then that means that for big enough values of "n", the procedure's actual runtime "g(n)" ≤ c*"f(n)", for "n ≥ t". "t" is the value that tells us what value of "n" is "big enough". In English, this tells us that the actual runtime of a procedure, g(n), never exceeds some constant c times f(n) for big enough n. Where do the values of the constants "c" and "t" come from? For practical purposes they could come from experiments with a procedure on inputs of different sizes "n", or from analysis, but for theoretical purposes (at least to many theoreticians much of the time) we don't care. It's enough to know that these constants exist, and that they depend on the algorithm's implementation details (i.e., the language of implementation, the hardware the procedure is run on, etc.), and this is precisely why we don't care about their particular values from a theoretical standpoint. And so we typically don't fret about the constants and simply say the procedure is O("f(n)"), where "f(n)" is typically a member of some general and simply-stated class of functions, like n^0 = 1 (constant), log "n" (logarithmic), "n" (linear), "n" log "n", "n^2" (quadratic), "n^3", 2^"n" (exponential), and "n!" (combinatoric).
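The role of the constants "c" and "t" can be made concrete with a small numeric spot-check. This is an illustrative example of my own, not from the text: take an actual runtime g(n) = 3n + 5; it is O(n), witnessed by c = 4 and t = 5, since 3n + 5 ≤ 4n whenever n ≥ 5.

```python
def bounded_above(g, f, c, t, n_max=10_000):
    """Finite spot-check of g(n) <= c*f(n) for t <= n <= n_max.
    (A spot-check, not a proof; here simple algebra shows the bound
    holds for ALL n >= t.)"""
    return all(g(n) <= c * f(n) for n in range(t, n_max + 1))

g = lambda n: 3 * n + 5   # an "actual runtime"
f = lambda n: n           # the claimed growth class: g is O(n)

# 3n + 5 <= 4n whenever n >= 5, so c = 4 and t = 5 witness g = O(n).
```

Note that c = 1 would not work with t = 5 (3n + 5 is never ≤ n), which is exactly why the definition lets us pick any witnesses c and t that make the bound hold.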
I have listed these simply-stated function classes in order of increasing complexity or growth rate. That is, "n" log "n" will always be less than "n^2" for sufficiently large "n", for example. If I say that an algorithm runs in O("n^3") time, then that means the actual runtime, "g(n)", of the procedure will be bounded above for big enough n: "g(n)" ≤ "c*n^3", for "n" ≥ "t". Notice that if an algorithm is O("n"), for example, then it is also O("n^2") and O("n^3") and … and O("n!"). Convince yourself that this is true given the formal definition of big-O notation. The problem, however, with saying that an algorithm's runtime is O("n^3") when it is also O("n^2") is that the former is misleading because "n^3" is not as "tight" an upper bound as is possible. This concern with tightly characterizing the runtime complexity of an algorithm is one reason that we also like to characterize algorithms by lower bounds. Ω("f(n)") means that for big enough values of "n", the algorithm's actual runtime "g(n) ≥ c*f(n)", for "n ≥ t". The constants for a big-Omega characterization of an algorithm may be different than the constants for a big-O characterization of the same algorithm, but again, we don't typically care about the constants. If an algorithm can be characterized by both O("f(n)") and Ω("f(n)") (i.e., upper and lower bounds for the same "f(n)"), then we consider "f(n)" a tight characterization of the algorithm's runtime, and we signify this with big-Theta notation: Θ("f(n)") means "c1*f(n) ≥ g(n) ≥ c2*f(n)" for "n ≥ t" ("t = max(t1,t2)"). It would be easy to dive still deeper into complexity theory, but under the assumption that you have looked or will look at this more deeply in an advanced course on algorithm analysis, I'll sum up with three miscellaneous points. First, it's common to say that if an algorithm is characterized by O("f(n)") behavior (or Omega or Theta), then the actual runtime, "g(n) = O(f(n))".
This may seem like an odd use of the equality symbol, and it is indeed an odd convention (if I had my druthers I might have overloaded the membership symbol instead, so g(n) ∈ O(f(n))). Second, when using any of the notations it is appropriate to say "in the worst case" or "the best case" or the "average case", though you might think that big-O naturally corresponds to the worst case, big-Omega to the best, and big-Theta to the average, and you wouldn't be wrong in some sense. But imagine cases, in say AI, where there is a partial ordering on runtimes, and in one subset of instances performance ranges from best to worst for that subset, while a different subset has different values for best, average, and worst. In any case, you'll hear and read that the worst case for insertion sort is O("n^2") and the best case is O("n"), as but one example where O-notation serves double duty. Third, in theory settings, particularly in introductory texts like this one, worst-case performance is taken to be of most importance – just how bad can this algorithm be!?! Worst-case performance that is greater than polynomial (i.e., with growth rate greater than n^p for any positive value of p), notably O(2^n) and O("n!"), will determine whether a problem is "intractable". In short, using big-O (or big-Omega or big-Theta) there is a hierarchy of language (problem) classes: O(1) ⊂ O(log "n") ⊂ O("n") ⊂ O("n" log "n") ⊂ O("n^2") ⊂ O("n^3") ⊂ O(2^"n") ⊂ O("n!"). There are many other language classes defined in terms of runtime, of course. Intractability. A dictionary definition of an "intractable problem" is one that is extremely hard or impossible to solve. Problems that are in RE but not recursive are undecidable, and are intractable in a strong sense. Problems that don't have solutions that are representable by Turing machines at all (i.e., non-RE) are intractable in the strongest sense.
But what we will usually mean by intractability are problems for which the only known solution procedures are greater than polynomial in time and/or space as a function of input size (e.g., exponential, combinatorial). "P" and "NP". "P" and "NP" are two broad classes of languages (problems). Any deterministic algorithm that runs in O(n^p) time, where p is any real number constant, belongs to class "P"(olynomial). Remember that O(n^p) will include other, smaller growth-rate functions, like "n" log "n", as well. An example of a problem in "P" is finding the "minimum weight spanning tree" (MWST) of a weighted graph. "Kruskal's algorithm" is a polynomial-time algorithm that finds the MWST. Any algorithm that cannot be characterized by polynomial deterministic runtime is not in "P". "NP", standing for "Nondeterministic Polynomial", is the class of algorithms which, when run on a nondeterministic TM that is capable of simulating all possible solution paths in parallel, will run in time that is polynomial as a function of the input size. Of course, an NTM cannot actually simulate all paths in parallel, so a more practical characterization of the "NP" class is as algorithms whose solution paths are each a polynomial function of the input in length. "NP" problems have a hard-to-solve, "easy"-to-validate character. Finding a solution may require exploring a large number of paths, perhaps O(2^"n") or O("n!") paths, but each path is polynomial in length, and thus a given solution can be validated in polynomial time, as illustrated in Figure NPprocedure. An example of an "NP" problem is the Traveling Sales Problem (TSP) on a weighted graph. That is, expressed as a language, the TSP asks whether there is a cycle through all nodes in a weighted graph with a total weight that is a specified maximum or less. If so, the encoded graph is a member of the language of graphs for which there is an acceptable tour; otherwise it is not. It is standard that a satisfying tour accompany a 'yes' answer.
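The "easy to validate" half of this character is worth seeing in code. Below is a hedged sketch of polynomial-time validation of a proposed tour; the graph representation (a dictionary mapping two-vertex sets to weights) and all names are illustrative assumptions of mine, not from the text:

```python
def valid_tour(weights, vertices, tour, budget):
    """Polynomial-time validation of a proposed TSP tour:
    the tour must visit every vertex exactly once, use only existing
    edges (including the closing edge back to the start), and have
    total weight <= budget."""
    if sorted(tour) != sorted(vertices):
        return False  # not a permutation of the vertices
    total = 0
    for i in range(len(tour)):
        edge = frozenset({tour[i], tour[(i + 1) % len(tour)]})
        if edge not in weights:
            return False  # no such edge in the graph
        total += weights[edge]
    return total <= budget

# A small 3-vertex example graph (illustrative).
w = {frozenset({"a", "b"}): 1,
     frozenset({"b", "c"}): 2,
     frozenset({"c", "a"}): 3}
```

Finding a good tour may require examining a factorial number of candidate orderings, but checking any one candidate, as above, is a single pass over at most n edges.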
There are at least O(n!) paths (hard to find), each of length "n", where n is the number of vertices, so each solution is O("n") in length (easy to validate). Every problem in "P" is also in "NP", so "P" ⊆ "NP". What is strongly suspected, but not yet proved, is that "P" is a proper subset of "NP": "P" ⊊ "NP". Despite suspicions to the contrary, you will hear that demonstrating that "P" = "NP" (or P ≠ NP) is one of the biggest open problems in computational theory. In light of all this, we will call a problem intractable if it is in "NP" and not known to be in "P". The Satisfiability Problem is NP. The TSP is one example of an "NP" problem that is not known to be in "P". A second, and especially important, example of an "NP" problem which is not known to be in "P" is satisfiability, or SAT for short. The SAT problem asks whether there is an assignment of truth values to the variables of a Boolean expression that makes the entire expression true. For example, say we are given (¬x0 ⋁ x1 ⋁ x2) ∧ (x0 ⋁ ¬x1 ⋁ x2) ∧ (¬x0 ⋁ ¬x1 ⋁ ¬x2) ∧ (¬x0 ⋁ ¬x1 ⋁ x3). Since there are 4 variables, x0 through x3, this is called a 4-SAT problem. There are 2^4 = 16 combinations of assignments to the four variables. An assignment that renders the entire expression true is x0 = false, x1 = true, x2 = true, x3 = false: (¬false ⋁ true ⋁ true) ∧ (false ⋁ ¬true ⋁ true) ∧ (¬false ⋁ ¬true ⋁ ¬true) ∧ (¬false ⋁ ¬true ⋁ false) = (true ⋁ true ⋁ true) ∧ (false ⋁ false ⋁ true) ∧ (true ⋁ false ⋁ false) ∧ (true ⋁ false ⋁ false). Since there is at least one assignment that renders the entire expression true, the expression is "satisfiable". If all assignments led to a true expression, then the expression would be a "tautology", but that need not be the case for the expression to be satisfiable. Consider this 2-SAT problem: (¬x0 ⋁ ¬x1) ∧ (x0 ⋁ ¬x1) ∧ (¬x0 ⋁ x1) ∧ (x0 ⋁ x1). There are 2^2 = 4 assignments.
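Both worked examples can be checked mechanically. The following brute-force Python sketch tries all 2^n assignments; the clause encoding (a clause as a list of (variable_index, polarity) literals) is an illustrative choice of mine:

```python
from itertools import product

def satisfiable(cls, n_vars):
    """Brute-force SAT check: try all 2^n truth assignments.
    A clause is a list of (variable_index, polarity) literals;
    polarity True means x_i, polarity False means NOT x_i."""
    return any(
        all(any(assignment[i] == pol for i, pol in clause) for clause in cls)
        for assignment in product([False, True], repeat=n_vars))

# The 4-variable example from the text:
e1 = [[(0, False), (1, True), (2, True)],
      [(0, True), (1, False), (2, True)],
      [(0, False), (1, False), (2, False)],
      [(0, False), (1, False), (3, True)]]

# The 2-variable example from the text:
e2 = [[(0, False), (1, False)],
      [(0, True), (1, False)],
      [(0, False), (1, True)],
      [(0, True), (1, True)]]
```

Running satisfiable on e1 and e2 confirms the two examples: the first expression has a satisfying assignment, the second does not.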
Confirm that none of the four assignments results in the entire expression being true. This expression is "unsatisfiable" (aka the expression is a "contradiction"). SAT is "NP" because there are an exponential number of possible solutions, 2^"n" for "n" variables, but each solution is of size "n". In the worst case, an algorithm like that shown in Figure SATisNP might have to examine all the assignments, but in point of fact, it's not generally that bad. Even for problems for which a solution is nondeterministic polynomial and there is no known polynomial-time solution, it's very often the case that answers can be found quickly. The SATisfiability problem is "NP", but a very simple procedure that works well in practice is hill climbing in search of a value assignment that makes the entire Boolean expression true. In this approach, a solution is guessed randomly, checked for satisfaction of the expression, and revised or simply guessed again. Details of a procedure are shown in Figure SAThillclimbing. Hill climbing, by the way, is a popular greedy search procedure used in many experimentally oriented fields, like AI; such procedures are often fast and yield satisfactory results. In general, hill climbing is one of many ways of effectively dealing with intractability in practice. Deterministic Polynomial Time Reductions. We have previously studied reductions of a language/problem Q to a language/problem R. Recall that if we say Q reduces to R, then if we have a solution for R we can use it to create a solution for Q; R is at least as hard as Q (else a solution for R would not assure a solution for Q). As one implication of this hardness observation, to show that R has a certain hardness characteristic (i.e., undecidability, intractability), or something "worse", show a correct reduction to R from a procedure that we know has that hardness characteristic. Previously, this hardness characteristic was undecidability. Now, it can be intractability too.
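As a concrete, hedged illustration of the hill-climbing idea described above (a minimal sketch of my own design, not the exact procedure of Figure SAThillclimbing): guess a random assignment, repeatedly flip whichever variable satisfies the most clauses, and restart with a fresh guess when stuck.

```python
import random

# The satisfiable 4-variable example from the text, with literals
# encoded as (variable_index, polarity) pairs.
clauses = [[(0, False), (1, True), (2, True)],
           [(0, True), (1, False), (2, True)],
           [(0, False), (1, False), (2, False)],
           [(0, False), (1, False), (3, True)]]

def count_satisfied(cls, a):
    """Number of clauses satisfied by assignment a (a list of booleans)."""
    return sum(any(a[i] == pol for i, pol in clause) for clause in cls)

def hill_climb_sat(cls, n_vars, max_flips=1000, max_restarts=50, seed=0):
    """GSAT-style hill climbing: guess, greedily flip, restart when stuck.
    Often fast in practice, but a failure proves nothing about
    unsatisfiability."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        a = [rng.choice([False, True]) for _ in range(n_vars)]  # random guess
        for _ in range(max_flips):
            if count_satisfied(cls, a) == len(cls):
                return a  # every clause satisfied
            # Flip the variable whose flip satisfies the most clauses.
            best = max(range(n_vars), key=lambda i: count_satisfied(
                cls, a[:i] + [not a[i]] + a[i + 1:]))
            a[best] = not a[best]
    return None  # gave up
```

On the example above this greedy search finds a satisfying assignment quickly; on an unsatisfiable expression it simply exhausts its flip and restart budgets and returns None, illustrating the one-sided nature of the method.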
We know that SATisfiability is in "NP" (with no known solution in "P") – we constructed an algorithm for it that was clearly "NP" in Figure SATisNP. We can reduce SAT to another problem, say "solving simultaneous integer linear inequalities", thereby showing that this latter problem is "at least" as hard as SAT. Consider the earlier SAT problem: (¬x0 ⋁ x1 ⋁ x2) ∧ (x0 ⋁ ¬x1 ⋁ x2) ∧ (¬x0 ⋁ ¬x1 ⋁ ¬x2) ∧ (¬x0 ⋁ ¬x1 ⋁ x3). This is easily translated to an integer linear program (ILP) by replacing each Boolean variable xk with an integer variable tk (restricted to the values 0 and 1), and replacing each negated variable ¬xk with (1 - tk). That's the reduction! So the example SAT problem becomes the ILP:

(1-t0) + t1 + t2 >= 1
t0 + (1-t1) + t2 >= 1
(1-t0) + (1-t1) + (1-t2) >= 1
(1-t0) + (1-t1) + t3 >= 1

A solution to this ILP is t0 = 0, t1 = 1, t2 = 1, t3 = 0, which can be mapped to a solution to the SAT problem, again through an easy substitution: xk = false iff tk = 0, and xk = true iff tk = 1. Confirm the correctness of this substitution in the SAT problem. The significance of all this is, again, that an algorithm for solving the ILP can be adapted to solving SAT. Moreover, the adaptation of the solution is easy in this case -- it's O("n"), where "n" is the number of source variables tk, and also the number of target variables xk. So, if we should ever find an efficient, deterministic polynomial-time algorithm, O(n^p), for solving ILPs, we will have a polynomial-time algorithm for solving SAT, since O("n"^p) + O("n") = O("n"^p)! In general, to the idea of reduction we add a constraint: in cases where we care about algorithm complexity, the reduction must be polynomial in time using a deterministic algorithm, so that if efficient polynomial-time algorithms are found for what are currently regarded as intractable problems, the cost of adapting the more efficient solutions to still other problems does not push the adaptation back into intractable territory.
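This clause-to-inequality translation is simple enough to automate. The sketch below (the names and the clause encoding, a clause as a list of (variable_index, polarity) literals, are illustrative choices of mine) performs the O(n) translation for the example and checks both the ILP solution and the mapped-back SAT solution:

```python
def clause_to_inequality(clause):
    """One SAT clause -> the left-hand side of a '>= 1' inequality:
    x_k becomes t_k, and NOT x_k becomes (1 - t_k)."""
    return ["t%d" % i if pol else "(1-t%d)" % i for i, pol in clause]

def ilp_satisfied(cls, t):
    """Check a 0/1 integer assignment t against all the inequalities."""
    return all(sum(t[i] if pol else 1 - t[i] for i, pol in clause) >= 1
               for clause in cls)

# The example SAT problem from the text, literals as (index, polarity):
clauses = [[(0, False), (1, True), (2, True)],
           [(0, True), (1, False), (2, True)],
           [(0, False), (1, False), (2, False)],
           [(0, False), (1, False), (3, True)]]

t = [0, 1, 1, 0]          # the ILP solution given in the text
x = [v == 1 for v in t]   # map back: x_k = true iff t_k = 1

# x satisfies every clause exactly when t satisfies every inequality.
sat_ok = all(any(x[i] == pol for i, pol in clause) for clause in clauses)
```

The translation touches each literal once and the back-substitution touches each variable once, which is the O(n) adaptation cost discussed above.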
In the case of SAT and ILP, we have a deterministic polynomial-time (i.e., O(n)) reduction from SAT to ILP. Confirm that we can also construct a deterministic polynomial-time reduction from ILP to SAT (also of deterministic O(n) time). So these problems are of comparable complexity. This need not always be the case, and in general the reduction in one direction may be more costly than in the other direction, but it must still be O(n^p) to be a deterministic polynomial-time reduction. NP Completeness. A problem Q is "NP hard" if every problem in "NP" deterministically polynomial-time reduces to Q (i.e., Q is at least as hard as every NP problem). Moreover, if (1) Q is also in the class "NP", and (2) every problem in "NP" deterministically polynomial-time reduces to Q, then Q is said to be "NP complete". Note that there are two conditions, as just stated, that have to be satisfied for a problem to be "NP complete". To make this concrete, let's say that Q is SAT. If we can show that every problem in "NP" reduces to SAT, then that says that a solution to SAT can be adapted to solve every other "NP" problem as well. And moreover, because the reductions from all "NP" problems to SAT will be polynomial-time reductions, if an efficient deterministic polynomial-time solution is ever found for SAT (i.e., if SAT ∈ P), then every problem in "NP" can be solved in deterministic polynomial time, and "P" = "NP". That's significant! But how do we show that every problem in "NP" reduces to SAT in polynomial time? There are an infinite number of problems/languages in "NP"! We know that every problem in "NP" is solved by a non-deterministic Turing Machine that always halts, and we can express a "generic" NDTM for any "NP" problem in terms of SAT. This is one insight of Cook's Theorem. Cook's Theorem.
"Cook's Theorem" says that every problem in "NP" (i.e., a question of membership of an input "w" in the language of any non-deterministic Turing Machine, "NTM", assured of halting) can be deterministically polynomial-time reduced to SAT (i.e., a question of whether a Boolean expression is satisfiable). How can we reduce "<NTM, w>" to a Boolean expression, ENTM,w, where ENTM,w is judged satisfiable iff "NTM" accepts "w"? Any "NP" problem, by the definition of "NP", has paths of configurations or instantaneous descriptions (i.e., solution or nonsolution paths) that are each O(n^p) in length (i.e., a (non)solution is polynomial in the size of the input, "n" -- remember that this is why "NP" solutions are 'easy' to validate). Because each transition of NTM can move its read/write head at most one cell on its tape, and can write at most one symbol on its tape, the longest that a single instantaneous description (ID) can become is also O(n^p), since there is a maximum of O(n^p) moves along any path of IDs by "NTM". Cook therefore posited a matrix of O(n^p) rows, each an ID, and O(n^p) columns, each corresponding to a tape cell in an ID. Thus, each cell of the matrix is an (ID, Tape Cell) pair. Moving down the rows corresponds to making moves along a path of IDs, and a change in a tape cell between rows is the change to that cell in moving from one ID to the next. Without loss of generality, we assume that "NTM" uses a one-way infinite tape. The size of this matrix is O(n^p) × O(n^p) = O(n^2p), as shown in Figure Cook'sThmMatrix. Importantly, this matrix is strictly a theoretical construct, important because it puts bounds on the size of the problem to be reduced. The matrix does not represent an actual run of "NTM" along any one path -- it couldn't, because if running "NTM" were part of the reduction, then the reduction would not necessarily be deterministic polynomial time!
Rather, the matrix is a schema that represents the totality of the set of possible paths that can be pursued by "NTM" on "w", just as the reduction to ENTM,w will imply the set of possible n-tuple truth-value assignments to the "n" variables of ENTM,w. Also, recognize that the O(n^p) × O(n^p) matrix is an upper bound on the dimensions of a possible path followed by "NTM". Some problems will require fewer rows/IDs than O(n^p) and/or fewer tape cells in an ID than O(n^p). But for convenience of demonstration, we think of the upper bounds as the actual dimensions: we assume that every row is filled out with blanks for the unused rightmost tape cells within an ID, up to the O(n^p) limit on columns, and we assume that the row/ID corresponding to entering an accepting state is duplicated up to the O(n^p) limit on rows. Ask yourself: if you were given the definition of "NTM" and an input "w", how would you confirm that an arbitrary matrix of IDs, as described above, followed from the "NTM" definition and that "NTM" accepted "w"? Your procedure for analyzing a matrix would necessarily have to look at (1) whether the first row/ID is the correct initial configuration for "w", (2) whether the last row/ID is in an accepting state, and (3) whether each row/ID follows from its predecessor given "NTM"'s transitions. You could define this as an automated procedure for taking "NTM" and "w", and you could embed this in a loop to look at all possible matrices. If any one of those matrices indicated acceptance of the input, then your procedure would indicate that "NTM" accepted "w", and if all matrices corresponded to non-acceptance, then your procedure would reject "w" as a member of L("NTM"). Presumably you recognize the analogy between determining whether one of many "NTM" paths accepts and determining whether one assignment of truth values satisfies a Boolean expression. So, let's convert the algorithm into a SAT problem, ENTM,w, which you can think of as a "logic program", for those who have used Prolog or another logic programming language.
"Step 1:" To translate step 1 above into a Boolean subexpression: the first row/ID of the matrix corresponds to "NTM" being in its start state, q0, with "NTM"'s read/write head at the left end of the input "w", followed by blanks. Introduce Boolean variables asserting each of these initial conditions -- the state, the head position, and the contents of each tape cell of row 0 -- and "conjoin" these Boolean variables into one Boolean subexpression. Call it ENTM,w,initial. This is illustrated in Figure Cook'sInitialConditions. Note that the cost of creating this subexpression is linear relative to the input size, O(|w|) = O("n"). "Step 2": To translate step 2 into a Boolean subexpression, introduce Boolean variables asserting that the last row/ID of the matrix is in an accepting NTM state, and "disjoin" these Boolean variables into one Boolean subexpression. Call it ENTM,accepting. This is illustrated in Figure Cook'sFinalConditions. Note that this subexpression does not depend on a particular w, and the cost of creating this subexpression is therefore constant time relative to the input size, O(1). "Step 3": To translate step 3: for each pair of consecutive rows/IDs, the second of the pair should be immediately obtainable from the first of the pair given the definition of "NTM"'s transitions, δ. The subexpression that we write should specify conditions that must apply to each row/ID of the matrix (and its successor row/ID), except the last row, starting with row/ID 0. The translation requires going through the transitions of "NTM"'s δ function and composing subexpressions for each. Figure Cook'sIntermediateConditions illustrates the translation of one transition for (q3, a). Each possible outcome, say (q5, b, R) for example, becomes a set of conjoined conditions, (csk+1 = q5 ∧ Lock+1 = j+1 ∧ Zk+1,j = b). This must be repeated for each possible outcome of (q3, a) listed in the transition function, as well as for each δ(State, Input Symbol) pair listed in δ.
In blue, the figure additionally shows that for all cells other than the one under the read/write head, the values remain the same between ID k and ID k+1. Given what we have said so far, we would then have a general Boolean expression, of which Figure Cook'sIntermediateConditions only shows a small part, stating the conditions necessary for validity between an ID k and an ID k+1. Given a matrix, we could then loop through the consecutive IDs and check them against this Boolean expression. But to fully reduce to SAT, we cannot use explicit looping; we must explicitly replicate the Boolean expression for each value of k, from ID 0 to the penultimate ID/row. While the process of building the subexpression does not directly depend on a particular word w, the number of terms we must write in the subexpression, call it ENTM,|w|,intermediate, depends on the size of "w", i.e., the size of the matrix, O("n"^2p). Taking Steps 1-3 together, the Boolean expression representing the SAT problem can be written as ENTM,w = ENTM,w,initial ∧ ENTM,|w|,intermediate ∧ ENTM,accepting. There are factors that have not been directly addressed here. While important to the formal proof, these further details aren't needed to appreciate the genius of Cook's theorem, and for that matter the genius of conceiving of the concept of "NP complete"ness generally. While the Boolean expression representing the SAT problem corresponding to an arbitrary "NP" problem is long and involved, it is so because it is general to an infinite number of NP problems. As we saw in relation to ILP and SAT, polynomial reductions between two specific problems are often much simpler. Again, one consequence of demonstrating that SAT is "NP complete" is that if it is discovered that SAT has a deterministic polynomial-time solution, and is thus a member of "P", then all "NP" problems have a polynomial-time solution, if by no other means than using SAT as a subroutine, and thus "P" = "NP". Other NP Complete Problems.
Knowing that SAT is NP complete, we can now show that other "NP" problems are "NP complete" by polynomial-time reducing SAT to these other problems. You'll see that we are essentially reducing both ways. Cook's theorem polynomial-time reduced every "NP" problem to SAT, indicating that SAT was at least as hard as any other "NP" problem. This is illustrated in Figure AllNPsReducedtoSAT. And now, by polynomial-time reducing SAT to another "NP" problem, we are showing that the other problem is at least as hard as SAT. For example, we already showed that SAT polynomial-time reduces to ILP. With both directions of reduction demonstrated, we can say that the other problem (e.g., ILP) is comparable to SAT in complexity – it too is "NP complete". In Figure ExtendingNPComplete, assume that ILP is Lk in row (1) column (a), and the two-way arrow indicates that SAT polynomial-time reduces to Lk, and vice versa. Since Lk has now been shown to be "NP complete", all "NP" problems reduce to Lk as well, as shown in row (1) column (b). One important point to emphasize is that the other problem, Lk, can now be used in subsequent reductions to show that still other problems are "NP complete", since the demonstrations of equivalent complexity are transitive. Furthermore, as illustrated in row (2) of Figure ExtendingNPComplete, we can take another problem in "NP", say Lj, and by showing that Lk reduces to Lj we have shown that every "NP" problem reduces to Lj as well, and so Lj is NP complete. By now a large number of problems have been shown to be "NP complete". A question that you might be asking yourself is whether there is any problem in "NP" (and not known to be in "P") that is not "NP complete", or not known to be "NP complete". See the exercises on these questions.
Again, one consequence of demonstrating a large class of "NP complete" problems that are equivalent in terms of complexity is that if any one of them turns out to have a deterministic polynomial-time solution, and is thus a member of "P", then they all have a deterministic polynomial-time solution and "P = NP". Learnability Theory. Leslie Valiant, Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World, pp. 76-81. Properties of Recursively Enumerable (or Unrestricted) Languages. The RE languages are the broadest class of languages that we will address, and the class of RE languages includes all other language classes that we have considered -- recursive languages, CSLs, CFLs, DCFLs, and regular languages. The RE languages are equivalently defined by unrestricted (Type 0) grammars and by TMs that may not halt on their input in the case where that input is not a member of the language defined by the TM. We will start with the closure properties of RE languages, then talk about the inherent undecidability of other decision questions about RE languages. Because membership in RE languages that are not also recursive is undecidable, and therefore not implementable by algorithms, we don't include issues of algorithmic runtime or space efficiency in this section. Closure Properties of RE languages. The RE (aka unrestricted) languages are closed under union, intersection, concatenation, Kleene closure, and substitution. However, the RE languages are not closed under complement. The RE languages are closed under Union. If L1 is a RE language and L2 is a RE language, then L1 ∪ L2 is RE. It is left as an exercise to show, by construction of an unrestricted grammar or a TM for the language represented by L1 ∪ L2, that the RE languages are closed under union. The RE languages are closed under Intersection. If L1 is a RE language and L2 is a RE language, then L1 ∩ L2 is RE.
It is left as an exercise to show by construction of an unrestricted grammar or a TM for the language represented by L1 ∩ L2 that the RE languages are closed under intersection. The RE languages are closed under Concatenation. If L1 is a RE language and L2 is a RE language, then L1L2 is RE. It is left as an exercise to show by construction of an unrestricted grammar or a TM for the language represented by L1L2 that the RE languages are closed under concatenation. The RE languages are closed under Kleene Closure. If L is a RE language, then L* is RE. It is left as an exercise to show by construction of an unrestricted grammar or a TM for the language represented by L* that the RE languages are closed under Kleene closure. The RE languages are closed under Substitution. If L is a RE language and for each symbol, xi, in the alphabet of L there is an associated RE language, then Subst(L) is RE. It is left as an exercise to show by construction of an unrestricted grammar or a TM for the language represented by Subst(L) that the RE languages are closed under substitution. The RE languages are "not" closed under Complement. If L is a RE language then its complement is not necessarily RE. We can show this by exhibiting an RE language whose complement is definitely not RE. We have already given such an example earlier in the text. It is left as an exercise to find and understand the example. Decision Properties of RE languages. We covered decidability and undecidability of selected computational problems, notably questions on membership in recursively enumerable languages. The universal language, which we re-expressed as the halting problem (i.e., will the universal Turing machine always halt on its input), is undecidable (because MU may not halt when its argument TM does not accept its input). In general, undecidability can signify one of two conditions. 
Membership in Lu, for example, is undecidable in one sense, which is failure to say 'no' and halt if a string is not in Lu. More broadly, though, undecidability could mean failure to answer at all and halt on 'yes' cases and/or 'no' cases. This second, broader notion of undecidability would relate to languages outside of RE, so we don't deal with it here. We restrict ourselves to undecidability in the former case -- the 'no' case -- as it relates to the RE languages. Rice's Theorem. A yes/no question of the RE languages is "trivial" if the correct answer is either 'yes' in all cases (i.e., it's a property of all the RE languages) or 'no' in all cases. Otherwise, it's a "non-trivial" question or property. For example, the question of whether a given RE language is a regular language is a non-trivial question/property of the RE languages, since some RE languages are regular and some are not. Rice's Theorem says that every non-trivial question of the RE languages is undecidable. Suppose we have a yes/no question Q about the RE languages. Then LQ is the set of RE languages for which Q=yes. Continuing our earlier example, if Q is "Is this language a regular language?" then LQ is the set of regular languages. In fact, LQ can be viewed as a language of languages, where you'll recall that each language in LQ can be represented finitely by an automaton or grammar, and that each such finite representation can be expressed as a (binary) string. Because the questions Q vary widely, we'll always assume that each language in LQ is represented by a binary string encoding a TM, regardless of whether it might be represented in some other fashion (e.g., as a FA). It's also common to refer to LQ as a property -- e.g., every member of LQ has the property of being a regular language. To phrase things differently:
•A property of the RE languages is the subset of the RE languages with that property. 
•For example:
•the property of being a CFL is the set of CFLs.
•the property of being empty is the set {∅}.
•A property is trivial if it is either "the set of all RE languages" (universally true) or "the empty set of languages" (universally false). Otherwise the property is non-trivial.
•Note that the empty set (no languages at all), ∅, is different from {∅}, the set containing the empty language (the language accepted by a TM that accepts no strings).
•Reminder: a set of languages can be represented by a set of strings, each representing a TM.
•Suppose LP is the set of languages with non-trivial property P.
•Is the question of a language L’s membership in LP decidable or undecidable?
•More particularly, is the question of whether a TM encoding of a language L is among the TM encodings of the LP languages decidable or undecidable?
•Prove that the question of a language L’s membership in LP is "undecidable".
•Assume that membership in LP is decidable. Then there is an algorithm, TMP, that recognizes LP.
•Regardless of the non-trivial P, reduce LU (the Universal Language, aka the Halting Problem, known to be undecidable) to LP, thus contradicting the assumption that LP is decidable.
Rice’s Theorem implies an infinite number of undecidable properties of recursively enumerable languages, and it does so in one fell swoop. Rice’s Theorem tells us that for sufficiently expressive languages undecidability is the norm rather than the exception. Of the problems implied by Rice’s Theorem to be undecidable, there are different kinds. It’s undecidable whether a TM accepts the empty set, or a non-empty set. Given intuitions about complementation, we might think of these questions as essentially the same, but in fact when dealing with RE languages that are not recursive our intuitions regarding complement may be misleading. 
The question of whether an arbitrary TM accepts the empty language is not even RE – it’s "not" a property that can be tested by any TM. Whereas the question of whether a TM accepts a non-empty language is RE (but not recursive). Expanding the Classes of Languages. Any characteristic defines a set of languages for which the characteristic is true of all members. Any question defines a set of languages for which the answer to the question is 'yes'. Exercises, Projects, and Discussions. RL Props Exercise 1: For each of the constructions described under Regular Languages are closed under Union, Concatenation, Kleene Closure, and Substitution, draw a visualization of the construction, choosing a way of visually representing the component DFAs in the construction, as well as the constructed NFA. Why do this exercise? Because many learners are visual learners, and undoubtedly all learners benefit from some visualization, be it mental imagery or manifest on 'paper'. An important skill for many or most in CS generally is an ability to visualize algorithms and data structures. This will undoubtedly continue to be a desirable skill even as AIs take over much of the software development burden. As an aside, if you are interested in societal benefits of your efforts, consider and potentially act on the creation of educational materials in CS for the blind and visually impaired. RL Props Exercise 2: For each of the constructions described under Regular Languages are closed under Union, Concatenation, Kleene Closure, and Substitution, give alternative arguments using regular expressions. RL Props Exercise 3: For each of the constructions described under Regular Languages are closed under Union, Concatenation, Kleene Closure, and Substitution, give alternative arguments using regular grammars. RL Props Exercise 4: Regular expressions were defined in terms of three operations -- concatenation, choice, and Kleene closure. 
Expand regular expressions based on closure properties for regular languages. The intent is not that you increase the representational power of REs, but that the expansion provides syntactic convenience and comprehensibility. RL Props Exercise 5: Show that the regular languages are closed under reversal. That is, if L is a regular language then LR is a regular language. CFL Exercise 1: Show that the CFLs are "not" closed under intersection. Project 1: Investigate the properties of the class of inherently ambiguous CFLs. Project 2: Investigate NP problems that are not "NP complete", or not known to be "NP complete". Are there any such problems? Project 3: Investigate superpolynomial algorithms that are subexponential. Project 4: Investigate closure properties of P, NP, and NP complete problems. Exercise RecursiveSubstitution1: Give a demonstration that if you are given a recursive language R, and each symbol in R’s alphabet corresponds to a recursive language too, then the language that results from applying the substitution to R is not necessarily recursive. Exercises (Closure Properties of RE languages): For each of the subsections above, demonstrate the truth value of the closure property as described.
Theory of Formal Languages, Automata, and Computation/Applications of Language Classes. Context Free Languages, Parsing, Lexical Analysis, and Translation. Recursive Descent Parsing and Translation. CFGs have been used for some time in defining the syntax of programming languages, starting with Algol. Ideally, we can directly translate a grammar into mutually recursive procedures for parsing a computer program, and ultimately for compiling it. Consider the grammars of Figure ExpressionGrammars. The first of these grammars for arithmetic expressions is simple, yet ambiguous, since id + id * id (and other strings) can be generated by two (or more) distinct leftmost derivations or distinct parse trees. So it is unsuitable as the basis for automated parsing. The second grammar is not ambiguous, having enforced operator precedence rules to ensure desirable, single parse trees for every string. This is as far as we got when introducing this CFL of arithmetic expressions. As it turns out, grammar (b) presents another problem for the recursive procedures we have in mind. This grammar is left recursive. When this grammar is translated into mutually recursive procedures, we have a function such as Expression (i.e., E) potentially calling itself, as indicated in E → E + T, with no change in the argument that is passed. This can lead to infinite recursion. Figure AlgolGrammar shows a simplified CFG for the Algol-60 programming language. Algol-60, as in the year 1960, was the first programming language with syntax defined by a context free grammar. The variables in the grammar translate rather directly into mutually recursive functions in a recursive descent parser. Figure AlgolTranslator shows the parsing routines corresponding to the grammar variables of expression, term, and factor. From parse_expression(), a term is what is first expected, and so parse_term() is called to identify a portion of code that was generated by the term part of the grammar. 
Parse_term() then calls parse_factor() to identify the Algol code that was derived from the factor variable of the grammar. In parse_factor() actual symbols in the program are scanned by the part of the parser called the "lexical analyzer", which is concerned with identifying basic tokens, such as constants, variables, operators, and punctuation in the Algol code. In the sample translator code, the match() function is highlighted in bold blue, and this function contains a call to the next_token() function, which is essentially the lexical analyzer and performs the recognition procedures associated with the parts of the grammar in Figure AlgolGrammar that are highlighted in bold blue. Those productions, in bold blue, correspond to a regular grammar (almost), and you should confirm that productions not conforming exactly to the regular grammar constraints of A → aB or A → a can be easily translated to productions that do adhere to the constraints (though it’s a bit unwieldy to do so because of the space it would require). Though the next_token() code is not shown in this illustration, it is an important part of a parser implementation since the lexical analyzer is what actually scans the input string/program. In Figure AlgolTranslator, the generation of machine code is shown in bold purple. The machine code language assumes that arguments are pushed (loaded) onto a stack, and binary operations such as addition and multiplication pop the top two items on the stack, apply the operation, and push the result back on the stack. So in the illustration of Figure AlgolTranslator the machine code translation of the Algol expression '1.2 + Var21 * 3' would execute as indicated in its comments when run. 
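The flavor of such a translator can be sketched in a few lines of Python. This is an illustration in the spirit of Figure AlgolTranslator, not the figure's actual code; the tokenizer and the stack-machine instruction names (PUSH, ADD, MUL) are assumptions made here for the example.

```python
# Minimal recursive-descent translator sketch: parse_expression/parse_term/
# parse_factor mirror the grammar variables, and code generation emits
# stack-machine instructions (PUSH, ADD, MUL -- names assumed for illustration).
import re

def tokenize(text):
    # next_token()'s job: a tiny lexical analyzer for numbers, identifiers, operators
    return re.findall(r"\d+\.\d+|\d+|\w+|[+*()]", text)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0
        self.code = []                     # emitted stack-machine instructions

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def match(self, expected=None):        # consume (and optionally check) a token
        tok = self.tokens[self.pos]
        assert expected is None or tok == expected
        self.pos += 1
        return tok

    def parse_expression(self):            # expression -> term { '+' term }
        self.parse_term()
        while self.peek() == '+':
            self.match('+')
            self.parse_term()
            self.code.append(('ADD',))     # pop two operands, push their sum

    def parse_term(self):                  # term -> factor { '*' factor }
        self.parse_factor()
        while self.peek() == '*':
            self.match('*')
            self.parse_factor()
            self.code.append(('MUL',))     # pop two operands, push their product

    def parse_factor(self):                # factor -> '(' expr ')' | constant | identifier
        if self.peek() == '(':
            self.match('(')
            self.parse_expression()
            self.match(')')
        else:
            self.code.append(('PUSH', self.match()))

p = Parser(tokenize("1.2 + Var21 * 3"))
p.parse_expression()
print(p.code)
# [('PUSH', '1.2'), ('PUSH', 'Var21'), ('PUSH', '3'), ('MUL',), ('ADD',)]
```

Note that writing expression as term { '+' term }, with iteration instead of the left-recursive E → E + T, sidesteps the infinite-recursion problem of grammar (b) while preserving operator precedence: multiplication is emitted before addition, exactly as the stack machine requires.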
If I had asked chatGPT to show more code for the parser, then it would include parser code for other grammar variables such as 'assignment statement' and include additional code generation commands such as STORE_VAR for storing the top of the runtime stack in a relative memory address associated with an Algol program variable name. There are other important parts of a translator, such as the symbol table for associating Algol program identifiers with computer memory (relative) addresses, that are not tied to any underlying grammar. In sum, a recursive descent parser and translator follows elegantly from the specification of a programming language grammar. The recursive routines, as with any function calls, will push records onto the parser's runtime stack (not to be confused with the runtime stack that is assumed by the assembly code above), and this reliance on a stack by the parser makes it comparable to a PDA. Relationships to Artificial Intelligence. There are many informal and formal connections between AI and formal languages, automata, and computation. This chapter delves into these connections, as well as the requisite and optional AI material. Notes embedded in the main text indicate how I often present the material over a semester. While this material is at the end of the textbook, it is typically scattered in my lectures throughout the semester. Probabilistic Grammars and Automata. In AI, non-determinism is prevalent, but cannot always be removed entirely while guaranteeing the same results because (a) operations/productions/transitions are uncertain, and (b) the world/IDs are not typically complete or correct (i.e., the AI doesn’t know some things or it is wrong). Measures of uncertainty are used in AI, most notably probabilities or fuzzy membership. Probabilistic grammars are a type of formal grammar where each production rule has a probability associated with it. 
These probabilities represent the likelihood that a particular rule will be used to generate a string. Natural Language Processing: Probabilistic grammars can be used to model the probability distribution of sentences in a natural language, making them useful for tasks such as speech recognition and machine translation. Information Retrieval: Probabilistic grammars can be used to generate queries that match documents in a database, allowing for more accurate search results. DNA Sequencing: Probabilistic grammars can be used to model the probability distribution of DNA sequences, making them useful for tasks such as genome assembly and alignment. NFA-to-DFA translation and speedup learning. The first point of inspiration is in the elimination of nondeterminism when translating an NFA to a DFA (or a nondeterministic PDA to a deterministic PDA, if the latter exists). I’ve said that NFAs and nondeterminism generally can lead to simpler and more elegant solutions, and be much easier for a human to specify, but these automata will require enumeration (depth first or breadth first or some heuristic approach) when they are actually used (to see whether a string reaches an accepting configuration along at least one path of the machine), and that is computationally costly. So ideally, we can imagine that in the case of a complex problem, someone can specify an NFA solution, but then use automated translation to acquire a more efficient-to-execute deterministic recognizer. The same can be said about AI and machine learning generally. I regard it as definitional of AI that an AI system explore alternatives, requiring enumeration or search – that is nondeterminism. Machine learning can then be used to eliminate nondeterminism, and thus “speedup” problem solving, planning, and other processes by the AI. It’s rarely, if ever, the case that all sources of nondeterminism can be eliminated in an AI program, however. The reasons are several. 
One, operators or “transitions” in AI often do not have perfectly predictable outcomes. A robot may try to pick up a cup, for example, and the cup slips from the robot’s grasp – there may always be at least two outcomes, I’ve got the cup or I don’t, and probably more outcomes if there can be liquid in the cup or not, hot versus cold liquid, etc. While some outcomes may be rare, we still want the robot to be able to respond to them – this possibility of multiple outcomes is the clear manifestation of nondeterminism. Another source of nondeterminism that is related to the first is that the AI may not know with certainty about all relevant aspects of the environment. For example, an intelligent vehicle may not know that a pedestrian is skateboarding behind a building but is about to abruptly enter traffic, and the AI had better anticipate that possibility, as well as mundane circumstances. Nondeterminism again. Even though it’s unlikely that machine learning can get rid of all nondeterminism, it can “reduce” the nondeterminism. In a real AI application we might measure the extent that the nondeterminism is reduced by the expected number of states that an AI system has to enumerate, both before learning and after learning, and while learning. The expected number of states generated during an enumeration or search is the sum of the states generated across all possible problems, weighted by the probability of each problem. So an expected number is a weighted average. DFA-to-RE Translation and Macro Learning. The second learning task that I want to relate is called macro learning, which is used in AI learning to solve problems more effectively. 
In particular, if an AI observes that transitions are repeatedly sequenced together, or it discovers that certain transitions that are sequenced together lead to solutions, then the AI might form a macro operator (or macro transition) that concatenates the transitions together so that they can be applied as a single packet, or macro. This process of macro formation will reduce the nondeterminism found in enumeration by taking large steps along individual paths or derivations, but it also adds to nondeterminism by creating additional choice points in an enumeration or search for a string’s derivation, since the macro is added to, rather than replacing, the operators (or transitions) that are used in its composition. The management of the tradeoffs in macro learning led to a good deal of research on learning choice preferences or biases as well, involving probabilities placed on choices. I don’t get into probabilities here, but the slides do touch on their use in probabilistic grammars, which are related to AI approaches to solving problems or, analogously, to deriving strings. AI Planning. While the implication operator in logic theorem proving looks the same as the production symbol, →, theorem proving is additive. When making an inference, nothing in the current world model is overwritten. In contrast, in AI planning a “mental” model of the current world state is maintained and changed through the application of operators (or productions). A common representation of operators shows PREconditions, which have to be true before the operator can be applied, and EFFects, which are the conditions that are true after the operator is applied. For our purposes we can think of an operator as a production, as in a grammar, PRE → EFF, where effects rewrite preconditions. There is more nuance to it than that, but the details are left to an AI course. 
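The production-like reading of an operator, PRE → EFF, can be sketched concretely. The following toy example is not from the text; the STRIPS-style add/delete split and the blocks-world predicate names are assumptions made here for illustration.

```python
# Toy illustration of an AI planning operator applied like a production:
# the preconditions must hold in the current state, and the effects
# rewrite it (delete-list facts removed, add-list facts added).

def apply_operator(state, pre, add, delete):
    """Apply a STRIPS-style operator to a state (a set of facts).
    Returns the new state, or None if the preconditions do not hold."""
    if not pre <= state:                  # preconditions not satisfied
        return None
    return (state - delete) | add

# Pickup(A) in a tiny blocks world
state = {"ontable(A)", "clear(A)", "handempty"}
new_state = apply_operator(
    state,
    pre={"ontable(A)", "clear(A)", "handempty"},
    add={"holding(A)"},
    delete={"ontable(A)", "clear(A)", "handempty"},
)
print(new_state)                          # {'holding(A)'}
```

Read as a production, the precondition facts on the left are rewritten into the effect facts on the right, which is exactly the grammar analogy the text draws.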
The slides show two examples of the derivations of single plans, analogous to a single derivation of a string, but as with languages, there is a search for the derivation (plan), which requires an enumeration of many paths, most of which are unsuccessful derivations (and that’s ok – we are looking for just one plan/derivation). Generating a plan can be modeled as generating a string with a grammar. The operators can be regarded as skeletal productions, that is, productions with variables, as illustrated in the blocks-world operators of the slides – Pickup(?X), Putdown(?X), Unstack(?X, ?Y), Stack(?X, ?Y). The last slides of Friday’s lecture give a brief glimpse at this equivalence, though AI planning will generally operate with additional pattern matching capabilities than we have seen with respect to strings. The equivalence also makes reference to ridiculous computational (storage and runtime) requirements in the case where we are interpreting AI states as strings and AI operators as productions, but computational cost is not an issue we are concerned with at this point, and similar equivalence arguments that are not concerned with costs are made by Hopcroft, Motwani, and Ullman 3rd Edition (2007) when comparing Turing Machines and computers (e.g., breakout boxes on pp. 322, 346, 364). The grammars that are equivalent to AI planning problems would typically be context sensitive, but a general or universal AI planning system is unrestricted. I don’t show this, but have a chat with chatGPT to learn more. Here are some handy references too, not just to AI planning but to other material at the intersection of CS 3252 and AI, notably forms of machine learning that have fallen out of favor, but are still very interesting – these references are optional material. Solomonoff, R. J. (1964). “A Formal Theory of Inductive Inference, Part 1”, Information and Control, Vol. 7. Biermann, A.W. (1972). 
“On the Inference of Turing machines from sample computations”, Artificial Intelligence, Vol. 3. Angluin, D. (1980). “Inductive Inference of Formal Languages from Positive Data”, Information and Control, Vol. 45. Moore, R.C. (1984). “A Formal Theory of Knowledge and Action”, Technical Report 320, SRI. Tate, A. AI Planning MOOC. Also, plenty of other references online (search for ‘AI Planning’) or take the AI course. Turing Equivalency of Selected Computational Platforms. One of the foundational theorems underlying Automata theory and Computer Science as a whole is the Church-Turing thesis. This thesis, in part, posits that every function that is capable of being computed can be computed on some Turing Machine. A universal TM, being capable of simulating any arbitrary TM, is thus capable of computing any and all computable functions. Note that this does not mean that a universal TM is capable of computing all conceivable functions, as there are functions that are not computable (we’ve already discussed one previously, in fact: the halting problem). An important corollary that falls out of this is that any system that can simulate a universal TM is transitively also capable of computing any function. We call such a system Turing Complete (TC). It is worth noting that a TC system may be more powerful than a TM. If we can show that a TC system can also be simulated by a TM, we can say that it is Turing Equivalent (TE), meaning that it has exactly the same computational power as a TM. In this section, we will survey a small portion of known TC and TE systems. Each system is discussed only briefly, but links to more information about each are provided for interested readers. Important TC systems. The rest of this section will introduce a variety of TC and TE systems, many of them surprisingly so, both to outside observers and to the creators themselves. 
To help with this, though, we will first look briefly at some of the most fundamental TE systems. The first is the Von Neumann architecture, designed in 1945 by John von Neumann. This architecture is the underlying design for most modern computer systems. In short, this architecture is composed of an expandable memory unit connected to a general-purpose processing unit that can handle arithmetic, logic, and control processes. A similar architecture that has also heavily influenced the design of the modern computer is the register machine, which relies on addressed access to memory, rather than a sequential tape. By some formulations, the Von Neumann architecture is a specific case of the register machine model. Lambda Calculus is a system of mathematical logic built around the application of abstracted first-order functions. A computationally universal formulation was developed in the 1930s by Alonzo Church. Church and Turing famously argued that their computational models were equivalent and computationally universal as a foundational aspect of the Church-Turing Thesis. Programming Languages. The vast majority of programming languages are modeled around one or more of the above-mentioned systems. Most notably, the class of imperative programming languages is designed to utilize the Von Neumann and Register Machine architectures. Functional programming languages, on the other hand, are largely a direct implementation of some variety of lambda calculus. A specific subset of programming languages of note here are the esoteric programming languages. These languages are very often designed to be difficult to use or to use unusual encoding methods. While these languages are, usually intentionally, nonviable for general use, they provide a valuable look into how simple a system can be while maintaining Turing Completeness. A few noteworthy examples are provided here. Probably the most well-known esoteric language is BrainFuck (often shortened to BF). 
The language consists of exactly 8 commands, each designated by a single character. These commands control the position of a pointer to a 1-dimensional memory and allow altering the value of the cell that is being pointed at, effectively encoding a TM. In addition to minimizing available commands, some esoteric languages experiment with unusual encoding modalities. Two notable examples are Piet and Choon. Both languages are similarly direct implementations of a TM architecture, but instead of being written in text, Piet is “written” using a bitmap image, where pointer position and data manipulation are defined by changes in hue and luminance. Choon instead uses MIDI sound files as source code, encoding commands via sonic frequency. Interesting TC systems. In this section we will briefly introduce a variety of systems that were created as interesting procedures that generate complex output from a simple set of rules. These systems were generally not designed to be Turing Complete, but were found later to be so. First is a variety of systems known as Tag systems. These are machines that are defined by a starting string and a set of production rules. At each iteration, the first character in the string determines, via the production rules, what substring is to be added to the end of the current string, and some number, m, of characters are removed from the beginning of the current string. It has been shown that there exist Tag systems that are capable of simulating a TM for all m greater than or equal to 2. A particularly noteworthy class of systems are the cellular automata. These are a broad class of machines defined by an infinite grid (of varying dimensions) where every cell has a Boolean value. At each iteration of the automaton, a simple ruleset determines which cells will be activated in the next iteration. While not all cellular automata are Turing Complete, there are at least 3 noteworthy examples that are. Probably the most influential is Rule 110. 
This is the most well-known elementary cellular automaton. In elementary cellular automata the current state is a 1-dimensional Boolean string. In the subsequent state, the next value of cell x is determined by the values of cells x, x-1, and x+1. This effectively means that the machine is defined using only eight transition rules. The simplicity of Rule 110 makes it a common target for proving Turing Completeness. Rule 110 itself was shown to be TC by showing that it can simulate a TC Tag system. Other noteworthy cellular automata are Conway’s Game of Life and Langton’s Ant. The Game of Life is a 2-dimensional cellular automaton where the next state of a given cell is determined by the number of active cells surrounding it. Traditionally, a cell goes from inactive to active only when it is surrounded by exactly three active cells and becomes inactive whenever it is surrounded by any number of active cells that is less than two or greater than three. This machine has been shown to be able to simulate both a register machine and a universal TM. Langton’s Ant is a lesser-known cellular automaton in which an “ant” travels around a 2-dimensional grid. At each iteration, the ant turns either left or right, depending on whether the cell it is in is active, flips the value of that cell, and moves one cell forward. This system has exactly two rules, one for each cell value, but has been shown to be able to simulate any Boolean circuit, making it Turing Complete. Which brings us to Boolean circuits. These are probably the most familiar simple TC system to most readers. Boolean circuits are systems of AND, OR, and NOT logic gates. These are the physical implementation of modern computers and are thus TC by virtue of implementing Von Neumann and Register Machine architectures. A noteworthy feature of Boolean circuits is that two gates, NOR (NOT applied to the output of OR) and NAND (NOT AND) gates can each be used exclusively to simulate all other types of logic gates. 
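The universality of NAND just mentioned is easy to verify directly; a quick sketch (an illustration written for this point, not taken from the text):

```python
# NAND is functionally complete: NOT, AND, and OR can each be built
# from NAND gates alone, as this exhaustive check illustrates.

def nand(a, b):
    return not (a and b)

def not_(a):          # NOT a  ==  a NAND a
    return nand(a, a)

def and_(a, b):       # a AND b  ==  NOT (a NAND b)
    return nand(nand(a, b), nand(a, b))

def or_(a, b):        # a OR b  ==  (NOT a) NAND (NOT b)
    return nand(nand(a, a), nand(b, b))

# Verify against Python's built-in Boolean operators on all four inputs
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND simulates NOT, AND, and OR")
```

An analogous construction works for NOR.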
This means that showing that a system can simulate either NOR or NAND gates, and that the gates can be linked together, is enough to show the system is, in principle, Turing Complete. The Games (and Other Software) Begin. Computer scientists and engineers have a history of challenging themselves to show that the software and hardware created by others can be adapted to other applications for the simple fun of it (giving rise to the common internet joke "Can it run DOOM?"). This has led to the discovery of numerous games and software that are themselves Turing Complete. In some cases this is intentional on the part of the developer. A prime example of this is Minecraft, which allows players to use an item called Redstone to create Boolean circuits. In most cases, however, Turing Completeness was unintentional and discovered by dedicated users. I’ll include some noteworthy examples below. A popular city builder game, Cities: Skylines, was shown to be TC by way of linking in-game electric and sewage systems creatively to simulate connected NAND gates. DOOM has similarly been shown to be able to simulate Boolean circuits when creating custom levels (simultaneously raising and principally affirmatively answering the question: can DOOM run DOOM?). Outside of video games, Turing Completeness has been shown for Microsoft PowerPoint (note that the reference here is a joke journal, but the work described in the article linked seems legitimate), Microsoft Excel, and Neural Networks. Some amazing examples can even be found in the C++ printf function and the Notepad++ find and replace function. Going back to games, but leaving the world of software, it has even been shown that a tournament legal deck of Magic: the Gathering cards can simulate a universal TM. Even simulations of human heart cells have been shown to be configurable into a working Boolean circuit. Does this matter? 
While it can be interesting to investigate Turing Completeness of a system for its own sake, it is important to consider why the answer to that question is important from a practical standpoint. While there may be other reasons that are not discussed here, we identify two reasons why we would want to know that a system is TC: provability and security. Provability, the ability to prove definitively that a system performs as expected in all scenarios, is an essential consideration in safety-critical systems. Unfortunately, it may be impossible to fully prove a TC system's compliance with expected behavior, as a consequence of the halting problem. This is largely because, given specific inputs, the system can be induced to perform any computation. Notably, this can, in principle, be done purposefully to circumvent controls placed on the user. Malicious actors can use these “arbitrary execution exploits” to insert and execute malware. Minimum Turing Completeness. The simplicity of many of the systems introduced here raises another interesting question for which this work has no sure answer: What are the minimum features needed for a system to be Turing Complete? A candidate set proposed here is unbounded memory (TM: infinite tape), conditional branching (TM: transition table), and unbounded recursion/iteration. All the systems introduced here have these three properties and it seems clear that they are necessary features. They are clearly not sufficient, however, as many cellular automata also have these features while being known not to be TC because of the choice of transition rules (e.g., Rule 204). The question of what features the transition rules need for computational completeness is left to the reader to consider. Author: Kyle Moore Large Language Models. Large Language Models (LLMs) are a class of autoregressive models that generate tokens (word segments) based on a given context, with each generated token appended to that context. 
By continuously appending to the context and generating the next token, an LLM is able to interact via text simply by predicting the next token given the tokens so far. The technology that empowered early LLMs like GPT is called a "decoder-only" transformer model. Transformers are a special class of artificial neural network based on an auto-encoder architecture with maximum-inner-product-based attention mechanisms that permit the network to select features based on the content of the context. The decoder-only variant lacks the encoder portion of the network. It is an interesting finding that the "vanilla" transformer model is as computationally powerful as a Turing machine and therefore Turing complete. This was proved by showing that a vanilla transformer could simulate an arbitrary recurrent neural network (RNN), which was itself shown to be as powerful as a Turing machine. Later, it was proved that the decoder-only variant of the transformer was Turing complete as well, by showing that it could likewise simulate an arbitrary RNN. Since LLMs are based on decoder-only transformer models, the above discussion would seem to suggest that LLMs are capable of being Turing equivalent. However, this is not the case, as the models that are trained do not learn to act as a Turing machine. Further, learning to be Turing equivalent is not possible for an LLM, given that it must also learn to predict the next token of human-consistent language. This can be proved by construction. If a language model is asked to recognize a language which requires more space than is present in the latent representation of the network, then it must output the intermediate state as a token and continue to execute over that intermediate representation via recursion through the context. Doing this in-context recursion, however, violates the language-modeling directive to output human-consistent text.
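The autoregressive loop described above can be sketched in a few lines. This is a toy illustration, not a real LLM: the hypothetical `toy_next_token` stands in for a trained decoder-only transformer, which would instead return the most probable token given the whole context.

```python
# The autoregressive generation loop: each predicted token is appended to
# the context, and the extended context is fed back in for the next prediction.

def toy_next_token(context):
    # Stand-in for a trained model: a trivial rule that continues a fixed phrase.
    phrase = ["the", "cat", "sat", "on", "the", "mat", "<eos>"]
    return phrase[len(context) % len(phrase)]

def generate(context, max_tokens=10):
    for _ in range(max_tokens):
        token = toy_next_token(context)
        if token == "<eos>":          # stop token ends generation
            break
        context = context + [token]   # generated token joins the context
    return context

print(generate(["the", "cat"]))   # → ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

Note how the context itself is the only working memory the loop has; this is exactly the channel that the in-context recursion argument above says a Turing-equivalent model would be forced to abuse.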
The problem demonstrates that an optimization function may not be able to optimize over all objectives if they are incompatible. The language modeling task is incompatible by nature with the in-context recursion that is necessary for the model to be Turing equivalent. Therefore, a decoder-only LLM cannot both be Turing complete and pass a Turing test. Author: Jesse Roberts References. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” "Advances in neural information processing systems", vol. 30, 2017. Bhattamishra, S., Patel, A., and Goyal, N. On the computational power of transformers and its implications in sequence modeling. "arXiv preprint arXiv:2006.09286", 2020. H. T. Siegelmann and E. D. Sontag, “On the computational power of neural nets,” in "Proceedings of the fifth annual workshop on Computational learning theory", pp. 440–449, 1992. Roberts, J. How powerful are decoder-only transformer neural models? "arXiv preprint arXiv:2305.17026", 2023. Exercises, Projects, and Discussions. Project 1: Build a compiler for a non-trivial subset of a programming language of choice.
Kitchen Remodel/Appliance and sink installation. Appliance and sink installation must strictly follow the manufacturer's instructions. In this chapter, I will just add a few remarks, in piecemeal fashion, that may be helpful in some cases. General. Probably the best advice that I can give for appliance installation is to think ahead while you do it: is there a chance that you will move the appliance out of its position, say for a check or a repair, or simply because you still need to install floor tiles under it? Make sure the electrical and water connections are long enough to pull it out of its recess and even further (you may want to be able to crawl behind it). We almost overlooked that necessity when we installed our refrigerator, because it doesn't stand against a wall, and we could conveniently access it from behind (see image) for most of the time during our kitchen installation. After the installation of the back row of cabinets, we would have lost that access and run into severe doo doo. Microwave oven. The installation of a microwave oven into a "Sektion" cabinet frame requires a trim kit which needs to be purchased in addition to the appliance. Those were two separate products at least in 2022, when I ordered my kitchen. Due to its weight, a microwave oven must not sit on a regular shelf, but on a "reinforced ventilated shelf". Both this shelf and the set of black "Nyttig" support brackets which are needed for the attachment of the cover frame are separate products which need to be ordered in addition to the cabinet frame and to the trim kit. Although we are very happy with that appliance now, initially I would have preferred not to purchase the Ikea products, namely their rather pricey trim kit. But I didn't dare to, because I didn't know at that time whether a product of another manufacturer would fit into my cabinet configuration.
If you have a similar conflict, maybe the following images can help you a little to decide what you are going to do: Notice that it will be necessary to cut some openings into the cabinet's back panel for the electrical connections (see first image). Ikea give detailed instructions. But if your cabinet configuration deviates from what they suggest, or if you disagree with their recommendation, you may also use common sense and only cut what you think is necessary under your special circumstances. However, since those cut-outs won't be visible anyway, there isn't too much to worry about to begin with. Oven. The most important thing to consider when installing an oven is that, depending on where you live and on the product, it may require a special electrical connection with higher voltage. In the U.S., that also means that you may have to deal with a supply cable that is thicker and stiffer than for most other appliances and will therefore need more space. If you have a built-in oven, the second most important thing is that it will probably be the heaviest appliance, which you not only need to move around in your space, but also to lift into a relatively high position. Our own oven weighs 260 lbs (118 kg). We found it extremely helpful that we, at that point in time, had a team of contractors in our home anyway, who were friendly enough to contribute elbow grease. Like a microwave oven, an oven must be installed on a "reinforced ventilated shelf". If you have an Ikea kitchen designer helping you with your order list, they will add everything you need for your oven installation. This includes elements which will reinforce the shelf on which the oven will sit. They will also give you two "Nyttig" filler pieces which will cover the two gaps above and below the oven (first image). By the way: On their web site, Ikea give the impression that there are only two possible heights at which a built-in oven can be installed.
Don't get misled: as long as you have a plan for what to do with the rest of that cabinet frame (including drawer fronts or doors that fit into your design), you can actually place an oven and/or a microwave oven at "any" height that works best for you. The installation of an oven and a microwave uses up 50" of height. So in our 80" high cabinet, we still had 30" of height left, which we decided to use for two 15" high front elements. It could also have been done differently; but considering our body heights, this layout is what worked best for us. Cooktop. Depending on where you live and on the chosen appliance, an induction or electric cooktop may require a special electrical connection with higher voltage. Otherwise, the installation of a cooktop is pretty straightforward. The product's instructions include information about the dimensions and exact position of the rectangular pit that needs to be cut into the plywood countertop base (if such a base is required) and that will contain the appliance. In our case, this very information was somewhat ambiguous, so we decided to use common sense and concluded that the cut-out needs to be just large enough to fit the lower part of the cooktop, while the edge of the glass ceramic part will lie on top of the countertop. The overlap between the wide top part of the cooktop and the countertop should be as large as possible (or to put it the other way around: the cut-out should be as small as possible), because that countertop will bear the load of the appliance. I would not want to attempt to sink the cooktop deeper in and make it flush with the countertop, because this would make the installation more difficult and more fault-prone, and it would also create a groove around the cooktop which would be hard to keep clean. I don't know about you, but in my kitchen there is no other place where an inadvertent spill occurs as often as on my cooktop. Sink. Our kitchen sink is not an Ikea product, but a Kräus.
As I mentioned in an earlier chapter, we had planned on an undermount, with the sink's top side partially being covered by countertop. Although Ikea don't give instructions on how to install a sink of a different manufacturer, the installation was not too difficult. The overall width of our sink exactly matches that of its 30" base cabinet. So in order to install it, we had to cut the cabinet's side panels and the back panel to size. This required very careful measuring and calculating, since the upper edge of the sink had to sit at exactly the right height: aligned with the top side of the plywood that would cover all our base cabinets as a foundation for the future countertop. One of the best pieces of advice that I can give for kitchen sink installation is to be really meticulous when you decide on the exact positions of your faucet and of your soap dispensers. An ill-placed faucet may have a control lever that cannot be properly operated because it hits a windowsill or some other obstacle. An ill-placed soap dispenser may be of limited use if its spout doesn't reach the sink's basin; a soap dispenser should, as a general rule, be installed as close to the sink as possible.
Planet Earth/Introduction. Every moment of your life will be from the perspective of a single planet—Planet Earth. You were born here and you will die here. This textbook is a guide to your home, to your place in the universe. By taking this course, you will learn about your home planet: how it works and how we know it works this way. This course is a user's manual for planet Earth, with direct recommendations for future generations, such as yourself, to maintain its health and natural wonders. As an astute student, you will be introduced to the theoretical principles of science and of how to defend yourself from the spread of ignorance. You will learn about Earth’s dimensions and motions, as well as how to navigate its surface. You will learn how energy originates from the closest star (the Sun), its Moon, and other sources of energy in the Earth’s active core, as well as how this energy can be used and stored. You will learn basic scientific principles of matter, the makeup of substances that form the field of chemistry. You will examine the planet’s atmosphere, the air that you are breathing as you read this, and how that air is slowly changing. You will explore the vast abundance of Earth’s water, covering the planet in enormous oceans, abundant lakes, and rivers, as well as frozen water locked within snow and ice. You will learn how to predict wind and storms and how climates shift. You will lead your own exploration of the solid interior of the Earth, the composition of mountains, rocks, and dirt. You will learn about life, the most unique feature of the planet. You will explore theories of how life arose and how it has evolved and changed over time, learning that you are of Earth and the story of your own origin on this planet. You will undertake an examination of the great biomes of jungles, forests, and deserts and the life that exists within them. You will survey the important field of biology as you learn about life and its interactions with the planet. 
In the end, you will come to face the ominous future of your own planet, of the changes that are now occurring. Your planet is not the same as that of your ancestors, nor of your grandparents, nor even of your parents at your age—Earth today is quickly being altered, and you will need to adapt to this change. This course will teach you how to prepare for this change and how to protect the planet from further alteration to the point that it becomes lifeless. This class will be challenging, but with enough dedication and commitment, you will succeed in learning the material and will cherish the knowledge presented in this class for the rest of your life.
Planet Earth/About the Book. About this Textbook. This book was written with the support of a grant offered by Utah State University Libraries, Academic & Instructional Services, and the College of Science to support faculty and instructors at Utah State University Statewide Campuses to create Open Educational Resources to support their online courses in the United States of America. These grants are made to reduce barriers to student success as well as to encourage faculty and instructors to try new, high-quality, and lower-cost ways to deliver learning materials to students through open educational resources. The majority of the first edition of the textbook was written between 2019 and 2020 with the intention that the textbook would be offered free of charge to all participants in GEO 1360 Planet Earth, an online course offered at Utah State University. This textbook is offered for any faculty, instructor, or teacher to adopt for their own courses they teach, and it is distributed under a . If you notice any errors or mistakes, please contact the author. About the Author. Benjamin J. Burger is a geologist who earned his master of science degree in 1999 at in New York and his Doctorate in 2009 at the , and he spent five years working at the in New York City. He has also worked as a professional geologist in the states of Utah, Colorado, and Wyoming. He joined the faculty in 2011 and continues to teach and conduct research as an Associate Professor in the Department of Geoscience at the located in the northeastern corner of Utah. Many of his course lectures and educational content can be found on YouTube or on his website at www.benjamin-burger.org. Intended Audience. This textbook is written for an audience of introductory college students in a nonscience degree program. It is intended to provide a detailed comprehensive knowledge of Planet Earth, including basic aspects of physics, chemistry, geology, and biology.
As a major scientific overview of the entirety of planet Earth, the intention is to present only key concepts that will enhance, enrich, and engage the reader's interest in Earth sciences. It is intended to make any reader, such as yourself, at least a little more knowledgeable of the amazing place that we all live within. Open Text and What That Means. All of the text and modules of the "Planet Earth" course are offered under a with Attributions license, which means that you are free to share and redistribute the material in any medium or format, and to adapt, remix, transform, and build upon the material for any purpose, even commercially. Just be sure to attribute the text with the author's name and course name, and indicate where you found the information. The purpose of making this text free to disseminate is that it contains valuable information that you should feel free to share and discuss as widely as possible. Science adapts to new knowledge, and as such this text can be updated and modified as new discoveries are made. An open text also ensures that the knowledge remains affordable to the average student, such as yourself. Feel free to pass on the information that you learn in this course, and you are free to make printed copies. The referenced text is available as a Wikibook on the Wikibooks website. Digging Deeper. will be referenced throughout the text to encourage further reading on any particular topic; most of these will point toward a article or an original scientific publication. These referenced resources will follow a similar style and format as seen on the popular Wikipedia website, where sources of specific information can be referenced and verified with a simple link. Every attempt was made to ensure the external links that you will find within the modules are verified in print and online sources, including peer-reviewed scientific papers, publications of scientific societies, government organizations, and mainstream news organizations.
There is no guarantee these external links will remain available online or whether they will be archived for future electronic access. Furthermore, there is no guarantee that your university or college will have a subscription to the article to view online. However, most of these external references should be accessible to you if you wish to explore a topic more in-depth than provided in the text, especially many of the Wikipedia entries. Only information covered within the text of this course will be used on quizzes and exams, as the reference hyperlinks serve to support statements and data within the main body of this course. You are not responsible for information that exists outside of this course on external webpages. Vocabulary and Glossary of Terms. Important scientific terms will be in bold print and may have a link to a clear definition of that term. These terms should be defined in your notes, as they will likely be referenced in quiz and exam questions. Use of flashcards with the term and its definition might be an important study tool for the exams.
Debates in Digital Culture 2019/Preamble. As the title suggests, this is a book which seeks to record contributions to the understanding of a specific set of topics, loosely grouped under the subject area of "Digital Culture", of particular salience to 2019. It is put together through the combined talents and efforts of a cohort of students taking the undergraduate module FMSU9A4 during the Spring Semester 2019 at the in Scotland, UK. It is an assessed educational project. We would like to thank the Wikibooks community for assistance given in the course of this project, and also encourage leniency in dealing with our work - we are all beginners in the world of wiki here, but are keen to learn! The aim of this educational project is, firstly, for students to record the content of their learning and their contributions to this book will reflect their studies on one of the featured themes. This will appear in the form of a series of "Collaborative Essays", and therefore this Wikibook is a sort of edited collection or anthology, much like examples found in academic publishing. However, secondly and most importantly, the hope is that students will learn the values associated with working at different levels as individual researchers, as research teams, and as part of a research community on Wikimedia and other open knowledge platforms. That is to say: producing knowledge; collaboration and sharing; and peer-reviewing the work of others for the good of the community throughout Wikimedia, but in particular on Wikibooks. Students will thus gain hands-on experience of a wiki environment, and what it is like to be part of a knowledge-building community, within the auspices of one of Wikimedia's large projects (i.e. Wikibooks) and make something that adds to currents in the academic field of digital media and society. Note that while this is a class project, anyone may contribute as Wikibooks does not permit "ownership" of material.
Education in Uzbekistan/Introduction. Uzbekistan is a state of youth. Children, adolescents and young people under the age of 25 make up approximately 60% of the total population. The ancient heritage of Uzbekistan is characterized by love for children, concern for their health, well-being and education. At present, five million children are studying in school, and more than a million are preparing to become schoolchildren at the kindergarten level. The education of children and youth is one of the main priorities of the state policy of the country. That is why the law on education was adopted in June 1992, becoming one of the first laws in our young sovereign state. The essence of the education reform in Uzbekistan is to preserve the existing intellectual potential of the educational system and change our goals and actions in order to develop people capable of building and living in a democratic civil society and a free market economy. Since gaining independence, the people of Uzbekistan have realized their great responsibility as citizens of the international community and as citizens of our planet. Therefore, one of our main goals is to raise a healthy generation, both physically and mentally. The core principles of our new education policy support this endeavour. Our goals are defined as follows: humanistic, democratic methods of education and socialization, the priority of human values, national and cultural traditions, as well as the separation of educational institutions from the influence of political parties and social and political movements. Currently, when transforming all social activity and development prospects of the country, textbooks are being updated, new subjects are being added, and retraining of teaching staff is being carried out. The development of new state educational standards is nearing completion. A large number of new types of schools are being created. These schools specialize in foreign languages, economics and environmental issues.
Along with this, research is being carried out to extend the educational complex, which includes a kindergarten, a secondary school, and research and educational institutions. Of course, all these processes are not simple. The government must solve many economic and social problems, in addition to other issues. With the support of half a million hardworking teachers who share the same vision, we hope to reach our goal. We understand that one of the main results of the educational reform is a change in the thinking of our people and society. At the same time, we understand that by striving to create a new education system in accordance with world standards, we will succeed by sharing our knowledge and experience with developing countries, contributing to the world education system.
Education in Uzbekistan/Preface. Education cannot be stereotypically attributed to an area of departmental or sectoral policy, but should be approached as a nationwide, strategically important issue. The initial significance of education for socio-economic development is defined at the governmental level: the system of lifelong education in Uzbekistan has all the conditions for the renovation of both traditional and innovative forms of education, developing life-long learning activities and becoming an integral part of everyday human life. Let us consider the stages of lifelong learning established in Uzbekistan. In the era of globalization, education becomes an essential component of economic development and the accumulation of national wealth. The high spiritual level of the population can organically create a legal culture, the ability of people to live and work in a democratic state, being aware of their rights and freedoms, and being able to use them in the interests of individuals, state and society. The state is interested in the development of the intellectual and spiritual potential of the country: about 35% of Uzbekistan’s population is under the age of 16 and more than 62% is under the age of 30. The government expenditures on education are considered to be the most important investment in the growth of national wealth: Uzbekistan annually spends 10-12% of GDP and 35% of the costs of the state budget on the development and reforming of the education system. The prestige of the pedagogical professions is increasing, and thus teachers’ salaries also increase, with the growth in wages of teachers and professors over the past 10 years being 1.5 times the average rate of wage increases in other sectors of the economy. At the heart of educational reform is the establishment of a sense of prestige of knowledge, education and high intelligence in society.
Only people who are aware of the need for harmony in national and universal values and have the latest knowledge and intellectual capabilities as well as advanced technologies can achieve the strategic goals of development.
Algorithms/Preamble. This book aims to be an accessible introduction to the design and analysis of efficient algorithms. Throughout the book we will introduce only the most basic techniques and describe the rigorous mathematical methods needed to analyze them. The topics covered include: The goal of the book is to show you how you can methodically apply different techniques to your own algorithms to make them more efficient. While this book mostly highlights general techniques, some well-known algorithms are also looked at in depth. This book is written so it can be read from "cover to cover" in the length of a semester, where sections marked with a codice_1 may be skipped. This book is a tutorial on techniques and is not a reference. For references we highly recommend the tomes by [Knuth] and [CLRS]. Additionally, sometimes the best insights come from the primary sources themselves (e.g. [Hoare]). Why a Wikibook on Algorithms? A Wikibook is an undertaking similar to an open-source software project: A contributor creates content for the project to help others, for personal enrichment, or to accomplish something for the contributor's own work (e.g., lecture preparation). An open book, just like an open program, requires time to complete, but it can benefit greatly from even modest contributions from readers. For example you can fix "bugs" in the text (where the bug might be typographic, expository, technical, aesthetic or otherwise) in order to make a better book. If you find an opportunity to fix a bug, simply click on "edit", make your changes, and click on save. Other contributors may review your changes to be sure they are appropriate for the book. If you are unsure, you can visit the discussion page and ask there. Use common sense. If you would like to make bigger contributions, you can take a look at the sections or chapters that are too short or otherwise need more work and start writing!
Be sure to skim the rest of the book first in order to avoid duplication of content. Additionally, you should read the Guidelines for Contributors page for consistency tips and advice. Note that you don't need to contribute everything at once. You can mark sections as "TODO," with a description of what remains to be done, and perhaps someone else will finish those parts for you. Once all TODO items are finished, we'll have reached our First Edition! This book is intentionally kept narrow-in-focus in order to make contributions easier (because then the end-goal is clearer). This book is part two of a series of three computer science textbooks on algorithms, starting with "" and ending with "Advanced Data Structures and Algorithms". If you would like to contribute a topic not already listed in any of the three books try putting it in the "Advanced" book, which is more eclectic in nature. Or, if you think the topic is fundamental, you can go to either the or the and make a proposal. Additionally, implementations of the algorithms as an appendix are welcome.
Linux Basics/Introduction. The primary author, Attila Kun (ottwiz), thought that there should be an easy-to-use Linux tutorial for those who learn it in a formal or informal educational setting, but have no idea what to do with Linux. Attila Kun (ottwiz) studied Linux Basics as a subject in the school year 2018/2019 in high school, so his previous versions have a date like "2019.03.16." This wikibook is translated from the Hungarian version of and covers the basics of Linux systems. Of course there will be things that work differently in other distributions, such as files, but it mostly reflects what is seen in common distributions of Linux. The number of sources indicates that we should not start from one source, but should dive in deep to see things clearly. We should check on the Internet how an article reflects reality (we can test it under a live/virtual system, and if we can, we should verify that it's right). Any contributions to this book are more than welcome! Acknowledgments. I'd like to give big thanks to thottee from the Hungarian PenguinPit Discord community, who helped a lot in creating the original document (it was originally created in LibreOffice, but I didn't want to struggle with it whenever I needed to edit it), Balázs Úr, who supervised this document and corrected the errors, and Balázs Meskó, who corrected the grammar mistakes.
Minecraft Speedrunning/Introduction. Speedrunning is the art of completing a game in as little time as possible. Minecraft is a sandbox block-building and survival game. This book fuses the two to act as a strategy guide to speedrunning "Minecraft" Java Edition. In the 2010s both "Minecraft" and speedrunning gained massive popularity in the gaming community, and as a natural result "Minecraft" speedrunning gained popularity. "Minecraft" is almost unique among freeform games for having an ending, and as a result beating "Minecraft" is usually defined as defeating the final boss known as the Ender Dragon. ("Minecraft" has two more bosses; they don't count for the any% category, but they do count for the "all bosses" category.)
OpenSCAD Tutorial/Introduction. About OpenSCAD. OpenSCAD is a solid 3D CAD modelling software that enables the creation of CAD models through a scripting file. The domain specific language designed for this purpose allows the creation of fully parametric models by combining and transforming available primitives as well as custom objects. About this tutorial. This tutorial assumes zero programming or CAD knowledge and is designed to guide you step by step through examples and exercises that will quickly build your understanding and provide you with the right tools to create your own models. Emphasis is placed on parametric design principles that will allow you to rapidly modify your creations and build your own library of reusable and combinable models. The majority of presented examples and solutions to exercises are available as separate OpenSCAD scripts here. "As of 29-11-2019 this tutorial as well as all accompanying material were completely developed as a Google Season of Docs project."
Chess/Introduction. Chess is an ancient strategy game that originated in India. It is played by two individuals on an 8×8 grid. The objective is to maneuver one's pieces so as to trap the opposing king in "checkmate". This book will cover the basic pieces of chess, before going on to some more advanced topics. The history of chess began in India during the Gupta Empire, where its early form in the 6th century CE was known as "chaturanga", which translates as "four divisions of the military" – infantry, cavalry, elephants, and chariotry, represented by the pieces that would evolve into the modern pawn, knight, bishop, and rook, respectively. In Sassanid Persia around 600 CE, the name became "shatranj", and the rules were developed further. Shatranj was taken up by the Muslim world after the Islamic conquest of Persia, with the pieces largely retaining their Persian names. In Spanish, "shatranj" was rendered as "ajedrez", in Portuguese as "xadrez", and in Greek as "zatrikion", but in the rest of Europe it was replaced by versions of the Persian "shāh" ("king").
Open Source Church/Introduction. This book will illustrate open source software, downloadable from GitHub, useful for managing a church or parish. In particular, the functions of the ChurchCRM program will be analysed. ChurchCRM is released under the MIT license and is installable either on web hosting equipped with Apache, PHP, MySQL and phpMyAdmin, or simply on your PC as a standalone application via XAMPP. ChurchCRM allows you to record data of people, families and groups who attend the parish and, for each of these categories, to establish properties, roles and classifications. For example, for persons the roles could be head of household, wife, child etc., classifications could be member, visitor, guest etc., while properties could be disabled (specifying the type of disability), needs (financial, loneliness, need for assistance, etc.) and resources (bricklayer, lawyer, engineer, etc.). For families, by creating one, its members can be entered immediately as people and, as properties, for example single parent, single mother, etc. You can then create all the groups that you want, for example Singers of the Mass, Parish choir, elderly group, Caritas Listening Center, Caritas Food Bank, Hospitality in Church, Musicians, Volunteers - visits to the sick, Scouts etc. People can be placed in multiple groups or moved from one group to another. The properties of a group could be youth, senior etc. As regards data and reports, it is possible to make queries of any type in order to search, for example, for disabled people, with economic needs, who have some resources, who are heads of household, spouses, members, visitors, etc., and send emails to selected people. Other features are deposit management, fundraising and event creation...
Foundations of dynamics. What is dynamics? "dunamis" and "energeia" are terms used by Aristotle, which are generally translated as power and actuality. "dunamis" is also translated as potential. "energeia" is associated with "ergon", the act, or work. Potential and actuality are inseparable. Everything that is actual is the actualization of a potential. Conversely, if there were no actuality, there would be no potential, since potential is always the potential to do something. Dynamics is the science of potential and its actualization. To exist is always to dance. The actualization of a potential is always a movement. Dynamics is therefore the science of movement and its causes. What modern physicists call potential energy resembles what Aristotle called "dunamis". Power, according to modern physics, is closely associated with potential energy, because it is the amount of energy that can be supplied per unit of time. Kinetic energy is the energy of moving masses. It resembles Aristotle’s "energeia". Forces are the causes of variations in the movement of masses. They resemble Aristotle's driving causes. Dynamics, in the modern sense, is at the same time the science of the movement of masses, the science of energy, or power, in all its forms, and the science of forces. Kinetic energy is more actual than potential energy because it is more visible, more manifest. But we can also say of potential energy that it is actual. Because to be actual, or simply to be, to exist, is always to be able to have an effect on other beings. A being that cannot have an effect on other beings cannot physically exist. A being's actual being is therefore defined by what it can do, by its potential. It is what it can do. It is its being, even if it hasn't done it yet, and even if it never will. To be actual is to have potential. Potential is always the potential to have an effect on other actual beings, or on oneself. 
So potential is always the potential to have an effect on the potential of other beings, or on oneself. Everything that exists has momentum. Everything that exists physically always has momentum. Proof: what exists physically always acts on other beings which exist physically. A being that never acted on other physical beings would never have any effect and could never be observed. It would have no physical existence. When a being acts on another, it modifies its movement, therefore its momentum p = mv for a body of mass m and speed v. But the total momentum is always conserved. If one body increases the momentum of another, it loses momentum. If one body decreases the momentum of another, it gains momentum. Therefore a body without momentum cannot act on another and cannot exist physically. Having momentum is a necessary condition for physical existence. Physical quantities are therefore often defined from momentum, and particularly forces and energies: What is a force? If a body is not subjected to any force, it retains its mass and its velocity vector, and therefore its momentum. Its movement is in a straight line at constant speed. It is a uniform rectilinear movement. Newton's first law: "The movement of a body which is not subject to any force is uniform rectilinear." It is a law of inertia of movement. The velocity vector does not vary if nothing makes it vary. From this first law, we deduce that if the momentum of a body is not constant, then there exists a force that causes its momentum to vary. The fundamental law of dynamics: "A force is the rate of change of a momentum." f = dp/dt, where f is the force acting on a body of momentum p. Newtonian physics assumes that the mass of a body does not depend on its speed, but the theory of relativity does not. For speeds small compared to that of light, the variation in mass is negligible.
If we neglect the variations of m as a function of the speed v, we obtain Newton's second law: f = ma, where a is the acceleration of a mass m which experiences a force f. What is energy? We define energy from the work of forces: the energy gained or lost by a body is the work of the forces exerted on it. When a body moves against forces, it must give up energy. When a body is pushed by forces, it gains energy. We can move a heavy object effortlessly on an ice rink, because we don't have to fight against the force of gravity. On the other hand, it takes a lot of effort to lift a heavy object vertically, because we have to oppose the force of gravity. In the first case, the force of gravity does not work, because the movement is horizontal. In the second case, the force of gravity works, because the movement is vertical. The work W of a force f on a mobile moving in a straight line over a length d is equal to the scalar product of the force vector "f" and the displacement vector "d": W = f.d = f d cos θ, where θ is the angle between the force vector f and the displacement vector d. f and d are the lengths of the vectors f and d. We can consider with Newton that gravity is a force. We then understand that we must provide energy to lift a heavy body, because we must exert a force which works against the force of gravity. The force of gravity is vertical. It does not work for a horizontal movement because it is perpendicular to the movement, cos 90° = 0. The only energy that must be provided to move a heavy body horizontally is the work against the friction forces. The work of the force of gravity on a mass m which is raised from a height h is W = -mgh, where g is the acceleration of gravity. If in W = f.d = f d cos θ, θ > 90°, then cos θ < 0 and W < 0. The work of the force has a negative value because it is the energy lost by a body that moves while fighting against the force.
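The scalar-product definition of work can be illustrated with a few lines of Python (the mass, height and distances are made-up values):

```python
# Work as a dot product: W = f . d = f d cos(theta).
# Illustrative values only: a 10 kg mass, g = 9.8 N/kg.
m = 10.0
g = 9.8
gravity = (0.0, -m * g)  # force of gravity, pointing straight down

def work(force, displacement):
    # Scalar product of a 2D force vector and a 2D displacement vector.
    return force[0] * displacement[0] + force[1] * displacement[1]

# Horizontal move of 3 m: gravity is perpendicular to the motion, W = 0.
w_horizontal = work(gravity, (3.0, 0.0))

# Vertical lift of h = 2 m: gravity works against the motion, W = -mgh.
h = 2.0
w_lift = work(gravity, (0.0, h))

print(w_horizontal)  # 0.0
print(w_lift)        # -196.0, i.e. -m g h
```

The negative sign of the second result is the energy that must be supplied by whoever lifts the body against gravity.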
This lost energy can be the kinetic energy E = (1/2)mv². The speed v decreases because the body is braked by the force. If θ < 90°, then cos θ > 0 and W > 0. The work of the force has a positive value because it is the energy acquired by a body that moves by being pushed and accelerated by the force. In the international system of units of measurement (MKSA: meter, kilogram, second, ampere), the unit of energy is the joule (J). One joule is the work required to move a body one meter against a force of one newton (N): 1 J = 1 N · 1 m = 1 N·m. The force of gravity on a mass of 1 kg at the surface of the Earth is approximately equal to 9.8 N, almost 10 N. One joule is therefore approximately the energy required to lift a 1 kg body ten centimeters. The fundamental law of dynamics f = dp/dt and the definition of energy from the work of a force teach where to find energy and how to appropriate it. Energy is where there are forces. To appropriate energy, it is enough to make the forces work. So f = dp/dt gives us the secret of power. For example, we can find E = mc², the Einstein equation which revealed the power of the atom, from f = dp/dt. The kinetic energy of a moving mass. Consider two masses m1 and m2 connected by a spring. When compressed or stretched, the spring exerts two forces f1 and f2, one on m1, the other on m2. If the mass of the spring is negligible compared to the masses it connects then we always have f1 = -f2. We assume that each mass is only subjected to the force of the spring and that they are released against each other, at rest, after having extended the spring: According to the fundamental law of dynamics: f1 = dp1/dt, where p1 = m1v1 is the momentum of m1 and v1 is its speed. The forces f1 and f2 on the masses m1 and m2 are always in the direction of movement.
The work dW1 of f1 on m1 for a small displacement dx1 is dW1 = f1·dx1. The work W1 of f1 between two instants t1 and t2 is W1 = ∫ f1·v1 dt, taken from t1 to t2. The energy gained or lost by the mass m1 is therefore W1 = ∫ m1 (dv1/dt)·v1 dt = (1/2)m1v1(t2)² - (1/2)m1v1(t1)². E = (1/2)mv² is the kinetic energy of a mass m which goes at the speed v. When a force is exerted on a mass in the direction of its movement, it gives it kinetic energy by increasing its speed. When a force is exerted on a mass in the opposite direction of its movement, it takes kinetic energy from it by slowing it down. When a force is exerted on a mass in a direction perpendicular to its movement, the mass retains its kinetic energy, therefore the magnitude v of its velocity vector. Einstein understood that all energy has mass, even kinetic energy. So the mass of a body depends on its speed. The calculation above is not exact. The faster a body goes, the more its mass increases and the more difficult it becomes to accelerate it, because the acceleration given by a force is inversely proportional to the mass of the body on which it is exerted. A mass can never reach the speed of light because it would need infinite energy to do so. A flywheel is a massive wheel that is rotated around its axis. Mass is put especially at the periphery because that's where it goes the fastest. If friction is low, a flywheel can maintain its rotational speed for a very long time. It thus functions as an energy reservoir, because it retains its kinetic energy of rotation. The potential energy of a spring. A spring can serve as an energy reserve. For example, 16 mm cameras work without electricity and allow continuous filming for several minutes, simply with a spring, which is wound by the crank. To extend or compress a spring, the two forces exerted on it are in the same direction as the movement of each of its ends; we must therefore transfer energy to the spring.
We have to expend energy, so we have to make an effort to stretch or compress a spring. A spring of stiffness k near its equilibrium position exerts two forces f1 and f2 on each of the bodies, at its left and its right, which compress or extend it: f1 = -f2, with magnitude kx, where x measures the variation in length of the spring compared to its equilibrium length l0. This is Hooke's law of elasticity. It is true for all solids, provided that they are little deformed by the forces exerted on them. Springs are designed to respect Hooke's law even if they are very deformed. If dx1 and dx2 are small displacements of the ends of the spring, the work dW that must be provided to compress or extend the spring is dW = kx dx, where dx = dx2 - dx1 is the small variation of the extension x, because the external forces that stretch the spring are opposed to the forces f1 and f2 that the spring exerts. To extend or compress a spring over a length x, it is therefore necessary to provide it with energy E = ∫ kx dx = (1/2)kx². E = (1/2)kx² is the elastic potential energy of a spring of stiffness k, where x is its variation in length relative to its equilibrium length. The cohesive forces of matter are electric. Elastic potential energy is electrical potential energy. It is a difference in energy of the electric field produced by the electrons and the nuclei of the spring. When we compress or extend a spring, we increase the electric potential energy conserved in the electric field produced by its electrons and its nuclei. Conservation of energy. When the spring sets the masses in motion, its elastic potential energy is transformed into the kinetic energy of the masses. When the movement of the masses compresses or extends the spring, their kinetic energy is transformed into elastic potential energy of the spring. Let E be the sum of the kinetic energies of the two masses and the elastic potential energy of the spring.
dE/dt = f1·v1 + f2·v2 + kx dx/dt; now the rate of change kx dx/dt of the elastic potential energy is -(f1·v1 + f2·v2), so dE/dt = 0. The total energy E shared between the spring and the two masses is therefore conserved. The law of conservation of energy: "The amount of energy gained or lost by a physical system is always equal to the amount of energy it received or gave up to another physical system." We can also state it: When two physical systems transfer energy from one to the other, their total energy does not change. Conservation of momentum. The momentum p of a body is always the product of its mass m and its velocity v: p = mv. Let P be the total momentum of the two masses m1 and m2: P = m1v1 + m2v2, so dP/dt = f1 + f2 = 0. So the total momentum of the two masses is conserved. The force f2 exerted by the mass m1 on the mass m2, via the spring, is equal and opposite to the force f1 exerted by the mass m2 on the mass m1, if we neglect the mass of the spring. The action of m1 on m2 is equal and opposite to the reaction of m2 on m1. Conservation of total momentum is therefore equivalent to the equality of action and reaction. The law of equality of action and reaction "If two bodies A and B interact, the force exerted by A on B is equal and opposite to the force exerted by B on A." "Theorem": we cannot go to the Moon by pulling on our boots. "Proof": the hands exert on the boots a force directed upwards exactly equal and opposite to the force exerted downwards by the boots on the hands. The sum of the two is zero and therefore cannot give an upward acceleration. The law of equality of action and reaction is Newton's third law. It is strictly exact only if A and B are in contact. The two equal and opposite forces are then exerted at the same point, the point of contact. But if A and B are far apart, they cannot be instantly sensitive to variations in each other's movement, because there is no instantaneous action at a distance.
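A minimal numerical sketch (all values invented) of the two masses joined by a spring checks both conservation laws at once: the total energy, kinetic plus elastic, and the total momentum stay constant during the motion:

```python
# Two masses joined by a spring, released at rest with the spring stretched.
# Illustrative values: m1 = 1 kg, m2 = 3 kg, k = 50 N/m, extension 0.2 m.
m1, m2 = 1.0, 3.0
k = 50.0
l0 = 1.0                      # equilibrium length of the spring
x1, x2 = 0.0, 1.2             # initial positions: stretched by 0.2 m
v1, v2 = 0.0, 0.0             # released at rest
dt = 1e-5

def totals():
    s = (x2 - x1) - l0        # extension of the spring
    energy = 0.5*m1*v1**2 + 0.5*m2*v2**2 + 0.5*k*s**2
    momentum = m1*v1 + m2*v2
    return energy, momentum

e0, p0 = totals()
for _ in range(200_000):      # 2 s of motion, semi-implicit Euler steps
    s = (x2 - x1) - l0
    f1 = k * s                # stretched spring pulls m1 toward m2
    f2 = -k * s               # equal and opposite force on m2
    v1 += f1/m1 * dt
    v2 += f2/m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
e1, p1 = totals()
print(e0, e1)   # both close to 1.0 J
print(p0, p1)   # both close to 0.0
```

The momentum stays at zero because f1 + f2 = 0 at every step; the energy only moves back and forth between the kinetic and elastic terms.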
Information and physical beings always move with finite speed, never with infinite speed. The law of conservation of momentum is better than the law of equality of action and reaction, because it avoids the problem of instantaneous action at a distance: The law of conservation of momentum: "The momentum gained or lost by a physical system is always equal to the momentum it received or gave up to another physical system." We can also state it: When two physical systems transfer momentum from one to the other, their total momentum does not change. Angular momentum. A body which rotates on itself and which is not subject to any external force maintains its rotational movement. The axis and speed of rotation do not change. This is what happens to a spinning top if it is in free fall. The rotational inertia of their wheels balances bicycles in motion. A bicycle at rest falls, because there is no longer any rotational inertia. Angular momentum is a rotational momentum. It is to constant rotational movement what momentum is to uniform rectilinear movement. Like momentum, it is always conserved in the absence of an external force. The angular momentum lost or gained by one body is always the angular momentum gained or lost by another body. The laws of the gift of momentum and energy. A body cannot give more momentum, angular momentum and energy than it has. If it gives all its energy then it is no more, because its mass is energy. To provide momentum and angular momentum, it must exert forces. To give energy, it must also exert forces, because variation in energy comes with variation in momentum. When a force is perpendicular to the speed of a mass, it does not work. It varies the momentum vector, but not its length, so it does not increase the kinetic energy of the body on which it is exerted. The body that exerts this force varies the momentum without losing its energy. The fundamental laws of physics determine the forces that bodies can exert on each other. 
They are therefore the laws of the gift of momentum, angular momentum and energy. Energy and momentum of fields. A field is a physical quantity defined at each point in space-time. There is no instant action at a distance. Two bodies far apart that interact always do so through a force field, where information is propagated at a finite speed, always equal to or less than that of light. For example, two masses linked by a spring interact through the pressure field in the spring. In a pressure field, information is propagated at the speed of sound. Fundamental forces are the forces between particles. When a body A exerts a force F on a body B, F is the vector sum of all the forces exerted by a particle x on a particle y, for all the particles x of A and all the particles y of B. The force fields between particles are the electromagnetic field, which exerts electric and magnetic forces on particles, and the nuclear force field, which explains the stability and instability of the nuclei of atoms. Radioactivity is the consequence of nuclear instability: unstable nuclei disintegrate spontaneously, without the slightest force being exerted to break them. According to Newtonian physics, gravitation is a field of forces between all masses, but the speed of propagation of information is infinite, because the law of universal gravitation imposes instantaneous action at a distance. According to Einstein's theory of general relativity, gravitation is not a force, but a field of space-time distortions, where information can never propagate faster than light. Like everything else that exists physically, force fields have energy, momentum, and angular momentum. When a force is exerted on a particle, it always varies its momentum, and it can also vary its energy and angular momentum. These variations are caused by transfers of momentum, energy and angular momentum between the field and the particle on which it acts. 
The energy or momentum of a physical system is always the sum of the energies or momenta of the particles that constitute it plus the sum of the energies or momenta of the fields that these particles produce. We often count the energy of the field by assigning potential energy to the particles on which it exerts its forces. For example, we attribute electric potential energy to an electrically charged body. It is the sum of the electrical potential energies of all its electrically charged particles. But this potential energy is not an energy carried by the particles, because it does not vary their mass. It is an energy of the electric field, localized in the space around the particles. Counting the potential energy of particles is only one way of counting the energy of the field they produce. The energy of mass. A particle is restless if and only if there does not exist an inertial frame of reference where it is motionless. A particle is with rest if and only if there exists an inertial frame of reference where it is motionless. Particles with rest have rest mass. Everything that exists physically has mass, because everything that exists physically has momentum p = mv. Restless particles have mass, like particles with rest, but they have no rest mass, because they have no rest. Photons are restless particles. The mass of a photon is m = E/c² = p/c, where E is its energy, p its momentum and c the speed of light. Even a mass at rest is the energy of a field, the field of force that gave it rest. The rest mass of a particle resembles the work of the force on a path that took it from no rest to rest. For an electrically neutral particle, this work is equal to m0c², where m0 is the rest mass of the particle. For an electrically charged particle of rest mass m0, we must add to the work of the force which gave it rest the energy of the electric field that it produces around it to obtain m0c².
Let W be the work of the force which gave a charged particle rest. Let mC be the mass of the Coulomb field produced by this charge if it were isolated. Then m0 = W/c² + mC. In general, we measure m0, not W/c² and mC separately, because we cannot undress a charged particle and ask it to leave its Coulomb field in the locker room before measuring its bare mass W/c². When an electrically charged particle and its antiparticle, an electron and a positron for example, are exactly superimposed, the electric field they produce together is equal to zero everywhere in space. The mass m of the pair, if it were at rest, would therefore be m = 2m0 - 2mC, because the energies mCc² of the Coulomb field of a particle and its antiparticle are equal, and their rest masses m0 too. The minimum energy required to create a particle-antiparticle pair is equal to (2m0 - 2mC)c², not 2m0c². When a pair is created, the difference 2mCc² is the minimum kinetic energy of the particle and its antiparticle, for the pair to be created. After being created, the electrically charged particle and its antiparticle move away from each other. They thus produce a dipolar electric field. They give energy to this field while losing part of their kinetic energy. The kinetic energy Ek of a particle is always the difference between its mass multiplied by c² and its rest mass multiplied by c²: Ek = mc² - m0c². The kinetic energy of a particle depends on the frame of reference in which it is measured, and therefore its mass too. The mass of a particle, including its kinetic energy, is always the energy of one or more fields, the energy of the field that put it at rest, if it is a particle with rest, plus the energy of the fields that it produces around it. When we count the energy of a physical system, we must add up all the energies of the fields associated with it, including the force field that gives the particles rest.
Particles do not exist without the fields that they produce or that produced them, because they are quanta of fields. There are only fields and their quanta, the particles. The fundamental laws of physics are therefore always the laws of the gift of momentum, energy and angular momentum from one field to another field. When the kinetic energy of a charged particle is transformed into electric potential energy, the field that gives the particles their rest and their kinetic energy gives up part of its energy to the electromagnetic field. The gift of kinetic energy. The mass of an electrically neutral body with rest is the mass of the field which gives it rest. It is the sum of its rest mass and the mass of its kinetic energy. When one mass gives up kinetic energy to another, it loses part of its mass. Part of the energy of the field which gives it rest is transferred to the field which gives rest to the other mass. Consider two equal masses m1 and m2, with m1 = m2 = m, which bounce on each other with speeds v and -v, equal and opposite, measured in a reference frame R. If the masses are perfectly elastic, they retain their kinetic energy after the rebound: (1/2)mv'² = (1/2)mv², so v' = -v; each mass bounces back with its speed reversed. In a reference frame R' which goes at speed v with respect to R, m1 is initially at rest. After the rebound it gains a kinetic energy equal to (1/2)m(2v)² = 2mv², given up by the mass m2. In a reference frame R" which goes at speed -v with respect to R, m2 is initially at rest. After the rebound it gains a kinetic energy equal to 2mv², given up by the mass m1. From the point of view of R', m2 gives up kinetic energy to m1. From the point of view of R", m1 gives up kinetic energy to m2. From the point of view of R, m1 and m2 each retain their kinetic energy. How is this possible? Energy always has mass.
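The frame dependence of the energy transfer can be made concrete with a little Newtonian arithmetic (made-up numbers; relativistic corrections are neglected). The kinetic energy gained by m1 during the same rebound is computed in the three frames:

```python
# Elastic head-on bounce of two equal masses, seen from three frames.
# Illustrative values: m = 1 kg, speeds +v and -v in the frame R.
m = 1.0
v = 2.0   # in R, the bounce simply swaps the velocities +v and -v

def ke(u):
    # kinetic energy of one mass at speed u
    return 0.5 * m * u**2

def transfer(frame_velocity):
    # Kinetic energy gained by mass 1, in a frame moving at
    # frame_velocity with respect to R (Galilean change of frame).
    v1_before = v - frame_velocity
    v1_after = -v - frame_velocity
    return ke(v1_after) - ke(v1_before)

print(transfer(0.0))   # frame R: 0, each mass keeps its kinetic energy
print(transfer(v))     # frame R': mass 1 starts at rest and gains 2 m v^2
print(transfer(-v))    # frame R'': mass 1 gives up 2 m v^2
```

The same collision therefore transfers no energy, 2mv², or -2mv² to m1, depending only on the frame of the observer.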
Mass is always the mass of the quanta of a field. If particles move from A to B from the point of view of one frame, they move from A to B from the point of view of all frames, because the presence of a particle in A or in B does not depend on the point of view. So it seems that a transfer of kinetic energy cannot depend on a point of view. Restless particles carry energy at the speed of light. Their energy E = pc depends on the frame of reference, because their momentum p depends on the frame of reference, by the Doppler effect. When the two masses bounce off each other, there are two particle flows, one from m1 to m2, the other from m2 to m1. From the point of view of R, these energy flows are exactly equal. This is why the two masses each retain their kinetic energy. But from the point of view of R' or of R", these two flows do not transfer the same energy, because of the Doppler effect. The difference is the transfer of energy from one mass to the other. The principle of least action. We can find the fundamental equations of the dynamics of all physical systems by reasoning with the following principles: A path AB followed by a physical system is a path of least loss if it is such that all mathematically possible paths from A to B have a higher loss. It is a path of maximum gain if it is such that all mathematically possible paths from A to B have a lower gain. The difference T - V between the kinetic energy T and the potential energy V of the system is called the Lagrangian L of the system. The integral S = ∫ L dt, taken along the path between the instants of departure and arrival, is called the action. It has the dimensions of energy multiplied by time. It is the analogue of a loss: total expenses minus total income. Its opposite -S is the analogue of a gain. The principle of least loss, or maximum gain, is called the principle of least action. Time is reversible. Nature does not differentiate between the past and the future.
Proof: if AB is a path of maximum gain, the same path BA taken in the opposite direction is also naturally possible, because it is a path of minimum loss. This theorem is almost always true for the fundamental equations of microscopic motions. The arrow of time, from the past to the future, does not appear in these equations, but only in the equations of macroscopic movements given by statistical physics and thermodynamics. The principle of least action does not impose that all naturally possible paths go towards increasing gains, since time is reversible; it only imposes that all naturally possible paths maximize gains or minimize losses. We can find Newton's three fundamental laws from the principle of least action, provided that we choose the appropriate Lagrangian. Consider a race from A to B between a hare and a tortoise. The tortoise moves forward at a constant speed while the hare stops to take a nap. To arrive at the same time as the tortoise, the hare must make up for lost time. So he arrives all out of breath while the tortoise arrives calmly. We can calculate that the tortoise chose a path of least action for the Lagrangian L = (1/2)mv², while the hare did not minimize its losses. We thus find Newton's first law: in the absence of forces, movements are always uniform rectilinear. That there is no force is reflected by the absence of interaction energy in the Lagrangian. Points A and B are points in configuration space. A configuration is defined by the positions of all the bodies that are part of the system. If there is a single moving point, the configuration space is real, three-dimensional space. If there are n moving points, the configuration space has 3n dimensions. An optimal path is determined by two points A and B and by a delay to reach B starting from A. The path of least action is the optimal path among all the paths which have the same endpoints and the same delay.
A is like a starting point at a fixed time and B is a meeting point, at an equally fixed time. The path of least action is the optimal path to arrive at the meeting place at the appointed time. When we know the path of least action, we can calculate the speeds along the entire path, therefore the speeds at point A and point B. We therefore obtain a relation between the final speed at B and the initial speed at A. We can thus calculate the final speed after a certain delay according to the initial speed. By making the delay tend towards zero, we thus find the rate of variation of the speeds of the points of the system, therefore the rate of variation of their momenta, therefore the forces which are exerted on them.
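The hare-and-tortoise argument can be sketched numerically for the free-particle Lagrangian L = (1/2)mv² (illustrative numbers; both paths join the same endpoints in the same delay):

```python
# Discretized action S = sum of (1/2) m v^2 dt over a path.
# Made-up values: m = 1 kg, a 10 m race that must take 10 s.
m = 1.0
distance, delay = 10.0, 10.0

def action(speeds, dt):
    # Riemann sum of the Lagrangian (kinetic energy only, no forces).
    return sum(0.5 * m * u**2 * dt for u in speeds)

n = 1000
dt = delay / n
# Tortoise: constant speed 1 m/s for the whole race.
tortoise = [distance / delay] * n
# Hare: naps for half the time, then runs at 2 m/s to arrive together.
hare = [0.0] * (n // 2) + [2 * distance / delay] * (n // 2)

print(action(tortoise, dt))  # 5.0
print(action(hare, dt))      # 10.0 - more action for the same trip
```

Both paths cover 10 m in 10 s, but the uniform path has the smaller action, as Newton's first law requires.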
Kitchen Remodel/Countertop. Plywood base. The type of quartz countertop that we were planning to install required a plywood foundation. The thinner the countertop material, the more the manufacturers encourage you to have such a foundation. Since we didn't know the first thing about the specifics of a plywood installation, we hired an experienced craftsman who assisted us with most of it, until we fully understood the concept and could finish without help. We used 5/8" (15 mm) plywood. The idea is to cover the entire surface that will have a countertop. The front edges of the plywood have to be aligned to the front edge of the cabinet frame (while the drawer fronts and doors are being ignored). That included, in our case, the recesses for the dishwasher and wine chiller and the bar surface. There were cut-outs for the sink, the faucet, the cooktop and for our pop-up outlet. We didn't drill holes yet for the soap dispensers and the button for the insinkerator, though. The two edges which would get a waterfall had to be covered with plywood, too. However, at the edge of the bar peninsula we were planning a wrap-around (= countertop material both on the edge's outside and the inside) and therefore omitted the plywood where the wrap-around would go. Another spot that needed special attention was the bar surface. Our bar is very wide (more than 9 ft / 289 cm), and the countertop sales person had recommended that we add a good reinforcement to the plywood cover there. So we used a wood router to cut a number of long grooves into the plywood surface into which we mounted iron bars. The very cool thing about having the plywood in place is that you can finish installing your drawers and shelves and doors, if you haven't done so yet, and you can start to fill those cabinets with your things. We also finished the installation of our sink and faucet and began to actually use the kitchen. We had been without one for more than four months.
With no date set for the countertop installation (we were still waiting for a call back to get a date for the measurements), we didn't have another option anyway. The one major appliance though that we couldn't use yet, without a proper countertop, was the cooktop. Countertop. The selection of a countertop material is a great opportunity to cut back on expenses if someone figures that they went over budget with their cabinets and appliances. Laminate countertops come in all sorts of designs today, some of them even looking more interesting and appealing than the pricey alternatives. And there are many other options, too. However, my family does weird stuff in the kitchen, like soldering or other works that you normally would do in a workshop, so we decided not to take risks and to buy a material that is really heat resistant and tough: engineered quartz. We reduced the costs a bit by picking a product that was on sale in one of our local home improvement stores. My other criterion, besides a reasonable price, was that the material must be mostly white but pick up the color of the cabinet fronts, too, so we were quite pleased to find a cost-effective product with gossamer grey structures in a white base. I prefer a fine pattern over a large-scale one, because if any damage ever occurs, it will hardly be noticeable. The white was important to me because I wanted these very large horizontal surfaces to brighten up our kitchen space, both during daytime, with natural light coming through the windows, and after dusk when the lights come into play that we installed directly above. The home improvement store which sold us the product, with the installation work included in the price, delegates the job to local firms. We learned during this process that contractors who install countertops cover a huge range of different ways of working.
When we bought our first countertop, in New York State, a decade ago, we had someone coming to our place and pulling a laser-operated gimmick out of a case. He finished the measuring within a few minutes, obviously creating a set of digital data which they would probably feed directly into their stone cutting device. Well, now, in California, we saw exactly the other end of the spectrum: the contractor didn't use laser technology or even a tape measure, but simply created a huge template, glued together from wooden strips. What can I say? Both methods worked out. More or less. A few weeks later the crew came back, with the prepared countertop material, and installed it. At that point, we could offer them our own template with the exact positions of our soap dispensers and the button for the insinkerator. The best advice that I can give to anybody who pays for a countertop installation is to closely monitor the work while it is being done. Our contractors for example didn't use a mechanic's level but eyeballed literally everything, with the result that one of our waterfalls is not quite vertical, but a fraction of a degree off; they also seemed not to care too much about installing the countertop edge at a constant distance from the cabinet fronts. The latter is most likely the result of their method of measuring. The good news is that to an innocent viewer, those flaws are not perceptible at all. And now some details. Notice that the quartz material is actually only ¾" (19 mm) thick, but appears to be much thicker because there are strips of material added at the edges. By the way, our contractor offered to cut and to install a matching windowsill, too. We declined, since we wanted a wooden one, matching our shelves, but I still found this an interesting idea.
Electricity and magnetism/Electrostatics. Electric forces can be produced simply by rubbing different materials together, such as hair on plastic. This creates equal and opposite electric charges on the two surfaces that have been rubbed. We can then observe the attraction of charges of opposite signs, and the repulsion of charges of the same sign: Coulomb's law. "Two motionless electric charges exert an electric force on each other proportional to the product of their charges divided by the square of their distance. The two forces, that exerted by A on B and that exerted by B on A, are in the direction of the line which connects the centers of the two charges A and B. These two forces are attractive for charges of opposite signs and repulsive for charges of the same sign." A field is a physical quantity that can vary at every point in space at every instant. For example, temperature is a field. Coulomb's law says that a motionless electric charge q permanently produces an electric force field E throughout space: E = (Kq/r²) u, where r is the vector which goes from the charge q to the point considered, r is its length, u is the unit length vector in the direction of r, and K is a constant which depends on the choice of units of measurement. The force F exerted by an electric field E on a charge q' is F = q'E. So the force exerted by a charge q on a charge q' is F = (Kqq'/r²) u. The field E is like a mathematical intermediary for calculating the force exerted by one charge on another. But it is much more than a simple mathematical intermediary. It has an autonomous existence. Maxwell showed that light is an electromagnetic wave, that is to say a movement of propagation of the electromagnetic field (E, B) of which the electric field E is a component. 
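The force law above is easy to check numerically. Below is a short Python sketch (an illustration, not part of the original text) in SI units, where the text's constant K is 1/(4πε0); the function name and the sample charges are assumptions made for the example:

```python
import math

# A sketch of Coulomb's law in SI units. Here the text's constant K is
# 1/(4*pi*eps0); the function name and sample charges are illustrative.
EPS0 = 8.8541878128e-12           # vacuum permittivity, in F/m
K = 1.0 / (4.0 * math.pi * EPS0)  # ~8.99e9 N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the force between two point charges a distance r apart.
    Positive for charges of the same sign (repulsion), negative otherwise."""
    return K * q1 * q2 / r**2

# Two charges of one microcoulomb, one metre apart: about 9 millinewtons.
F = coulomb_force(1e-6, 1e-6, 1.0)
```

Two 1 µC charges one metre apart repel with roughly 9 mN; flipping the sign of one charge makes the result negative, matching the attraction of opposite charges.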
The electric field on the surface of a positively charged sphere: The force exerted by two charges on a third is the sum of the forces exerted separately by each of them. All the charges in the Universe produce together throughout space an electric field which is the sum of the fields produced separately by each of them. The Coulomb force is mathematically similar to the gravitational force: two masses exert a gravitational force on each other proportional to the product of their masses divided by the square of their distance. But the gravitational force is always attractive, because there is no negative mass. On Earth, the weight of a body is the gravitational force that the Earth exerts on it. According to the theory of relativity, there is no instantaneous interaction at a distance. A charge cannot therefore instantly exert the Coulomb force on another. The force propagation time must be taken into account. This is why Coulomb's law is only a law of electrostatics. It allows the forces between motionless charges to be correctly calculated. The cohesion of matter is electrostatic. A tiny grain of salt. Each line represents a bond between two ions in a NaCl crystal (sodium chloride is common salt). The blue and green colors represent the two ion species. It is a face-centered cubic crystal. Each Na+ ion is surrounded by 6 nearest neighbors Cl-. Likewise, each Cl- ion is surrounded by 6 nearest neighbors Na+. Two nearby ions always have opposite charges, because opposite charges attract each other while charges of the same sign repel each other. The force that explains the cohesion of solids, atoms and molecules is the Coulomb force. The cohesion of liquids is also caused in part by this electrostatic force, but because the atoms or molecules are in motion, electrodynamic forces can also come into play. A hydrogen atom consists of one proton, zero, one or two neutrons, and one electron. A proton is a positive charge. An electron is a negative charge, equal but opposite. 
A neutron is neutral. The electron is linked to the proton by the Coulomb electric force. More generally, an atom consists of a nucleus surrounded by electrons. The charge of the nucleus is positive, equal but opposite to the total charge of the electrons. When the hydrogen atom is in its ground state, the lowest energy state, the presence of the electron around a proton resembles a spherical cloud: The proton is at the center. The cloud around represents the presence of the electron. If the atom is excited, the presence of the electron can have various forms: A molecule is an assembly of atoms. Electrons are like glue that binds nuclei together. Two positive charges repel each other, but two positive charges separated by a negative charge can attract each other. An ion is an atom or molecule that has lost or gained one or more electrons. Since atoms and molecules are always electrically neutral, ions are always charged. An atom or molecule that has gained one or more electrons is a negative ion. An atom or molecule that has lost one or more electrons is a positive ion. A solid is an assembly of atoms, molecules or ions, linked to each other by electrostatic force. To break an ionic crystal, one must separate ions attracted by electric force: The attractive forces between charges of opposite signs are greater than the repulsive forces between charges of the same sign. This difference makes all the materials cohesive. The mass of the nuclei is approximately 5000 times greater than the mass of the electrons accompanying them, because the mass of a proton or neutron is approximately 2000 times that of an electron, and an atom always has the same number of protons and electrons, and a number of neutrons approximately equal to or a little greater than the number of protons (except ordinary hydrogen, which has no neutrons). 
Since almost all mass is carried by nuclei (made of protons and neutrons), it is more natural to say that electrons are a glue that binds nuclei than to say that nuclei are a glue that binds electrons. Two point charges of opposite signs should attract each other until they come together. We can calculate that the energy they could provide by falling on each other in this way is infinite. But matter doesn't usually collapse on itself, and it never releases an infinite amount of energy. Bodies always have a lowest energy state, which is their ground state. In this state, they cannot give up energy, because they have no lower energy state to go to. Atoms, molecules and ions are in their ground state or in excited states which have higher energy. The hotter a gas is, the more excited its atoms or molecules are. A solid is in an excited state unless its absolute temperature is zero Kelvin. To explain why an electron from a hydrogen atom does not fall on the proton, or more generally why materials do not collapse on themselves, Coulomb's law is not enough; we need quantum physics. What is electric voltage? In steady state, the electric voltage is the difference in electric potential between two points. Its unit of measurement is the Volt (V). What is potential? To understand potential, we must understand the work of a force. We can reason about the electric potential in the same way as about the gravitational potential. We can move a heavy object effortlessly on an ice rink, because we don't have to fight against the force of gravity. On the other hand, it takes a lot of effort to lift a heavy object vertically, because we have to oppose the force of gravity. In the first case, the force of gravity does no work, because the movement is horizontal. In the second case, the force of gravity does work, because the movement is vertical. 
The work W of a force f on a mobile moving in a straight line over a length d is equal to the scalar product of the force vector f and the displacement vector d: W = f · d = f d cos θ, where θ is the angle between the force vector f and the displacement vector d. f and d are the lengths of the vectors f and d. The force of gravity is vertical. It does no work for a horizontal displacement because cos 90° = 0. The work of the force of gravity g on a mass m that is raised by a height h is W = -mgh. The work of a force is energy. If θ > 90°, cos θ < 0 and W < 0. The work of the force has a negative value because it is the energy lost by a body that moves against the force. This lost energy can be the kinetic energy E = (1/2)mv². The speed v decreases because the body is slowed down by the force. If θ < 90°, cos θ > 0 and W > 0. The work of the force has a positive value because it is the energy acquired by a body that moves by being pushed and accelerated by the force. In the international system of units of measurement (MKSA: meter, kilogram, second, Ampere), the unit of energy is the Joule (J). One Joule is the work required to move a body one meter against a force of one Newton (N). 1 J = 1 N × 1 m = 1 N·m. The force of gravity on a 1 kg body at the surface of the Earth is approximately equal to 9.8 N, almost 10 N. One Joule is therefore approximately the energy required to lift a 1 kg body ten centimeters. When the trajectory of a mobile is a curved line, we calculate the work of a force by reasoning on broken lines which follow the curved trajectory. If the lengths of the segments are shorter and shorter, a broken line becomes more and more similar to the curved line it follows. We find the work of the force by taking the limit of the sum of the work on each segment, when the length of the segments tends to zero. 
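The definition W = f · d can be sketched in a few lines of Python (an illustration, not from the text). It reproduces the two cases above: a horizontal force does no work on a vertical displacement, and lifting 1 kg by ten centimeters takes about one Joule:

```python
# A sketch of W = f . d with explicit (x, y, z) vectors; the function name
# and the numbers are illustrative, not from the text.
def work(force, displacement):
    """Work (in joules) of a constant force along a straight displacement."""
    return sum(f * d for f, d in zip(force, displacement))

g = 9.8  # gravitational force on 1 kg at the Earth's surface, in newtons

# Lifting a 1 kg body by 10 cm: the lifting force points up, gravity down.
W_lift = work((0.0, 0.0, g), (0.0, 0.0, 0.10))      # ~1 J, as stated above
W_gravity = work((0.0, 0.0, -g), (0.0, 0.0, 0.10))  # negative: W = -mgh
# A horizontal force does no work on a vertical displacement (cos 90° = 0):
W_horizontal = work((g, 0.0, 0.0), (0.0, 0.0, 0.10))
```

The scalar product makes the sign conventions automatic: the lifting force yields +0.98 J, gravity yields -0.98 J over the same displacement.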
The work of the force is the integral of the scalar product of the force and the displacement vector on the path followed. By definition of the scalar product, the work of the force of gravity g on an inclined segment AB is equal to g(hA - hB) where hA - hB is the altitude difference between its two ends. Now the altitude difference between A and C is equal to the sum of the altitude differences between A and B and between B and C: hA - hC = (hA - hB) + (hB - hC). So the work of the force of gravity g on any path going from A to Z is always equal to g(hA - hZ). It only depends on the difference in altitude of the ends A and Z, not on the path which connects A to Z. We have thus proven in the particular case of the gravity field on the surface of the Earth: Theorem: the work of the gravitational force does not depend on the path followed. If we neglect friction, the kinetic energy of a mobile on a roller coaster is the work of the gravitational force from initial rest. It only depends on the height difference h, not on the path followed: The work of a force can depend on the path followed. For example, the work of a friction force is greater when the path followed is longer. When a force field is such that the work of the force between two points does not depend on the path followed, we can introduce a potential. It is defined by choosing a point of zero potential. The potential at each point in space is then defined by the work of the force on a standard body between this point and the point of zero potential. For the force of gravity, we take as a standard a body whose mass is equal to one. For electric force, the standard is a body whose electric charge is equal to one. The potential is well defined because the work of the force does not depend on the path followed. Let WXY be the work of the force on a unit body moved from X to Y, VX the potential at point X and O a point of zero potential. By definition of V, VA = WAO and VB = WBO. So VA - VB = WAO - WBO. 
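The theorem can be illustrated numerically: summing the work of gravity segment by segment over two different broken lines with the same endpoints gives the same result. A hypothetical sketch (the path coordinates are arbitrary, not from the text):

```python
# A sketch of the path-independence theorem: the work of gravity on a broken
# line depends only on the altitude of the endpoints. Names are illustrative.
def gravity_work(path, m=1.0, g=9.8):
    """Work of gravity (J) on a mass m moved along a broken line.
    path is a list of (x, z) points; only the altitude z matters."""
    return sum(-m * g * (z2 - z1) for (_, z1), (_, z2) in zip(path, path[1:]))

# Two different paths from A = (0, 0) to Z = (4, 3):
direct = [(0, 0), (4, 3)]
detour = [(0, 0), (1, 5), (3, -2), (4, 3)]
```

Both paths climb a net 3 m, so gravity does the same (negative) work on each, no matter how the detour wanders up and down in between.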
Now WAO = WAB + WBO. So VA - VB = WAB. The potential difference between two points is therefore always equal to the work of the force on a unit body displaced between these two points. The electric voltage between two points is the work of the electric force on a unit of charge moved between these two points. In steady state, the work of the electric force does not depend on the path followed; this force therefore derives from a potential. The electric voltage is then a potential difference. It is for the electric force what the difference in altitude is for gravity. The unit of electric charge is the Coulomb. One Volt (V) is one Joule (J) per Coulomb (C). This is the potential difference necessary to give a charge of one Coulomb an energy of one Joule. 1 V = 1 J/C. When an electric voltage is imposed across a metal wire, an electric current passes through the wire, which is a current of electrons. The electrons are accelerated by the electric force, but their average kinetic energy does not increase, because they are slowed down by the metal. They lose all the energy they gained by heating the metal: this is the Joule effect. Electric light was invented by heating a metal wire carrying an electric current to white heat. The electrical resistance of a material measures its ability to slow down electrons or ions passing through it. Potential energy. The gravitational potential energy of a body of mass m located at the point A is equal to mVA, where VA is the gravitational potential at point A. When a body is at rest, its energy is not visible, not actual, because its kinetic energy is zero. When a body at rest is released into free fall, its gravitational potential energy is transformed into kinetic energy. The energy was already present when the body was at rest, but it had not manifested itself, it was only potential. Its transformation into kinetic energy is an actualization of the potential. 
For a body in free fall, released at rest, its kinetic energy at the end of the movement is the actualization of its gravitational potential energy, initially invisible. It is equal to the work of the force of gravity. So the work of the force is the actualization of the potential. Potential energy is the potential to cause a force to work. We have potential when we have a force that we can put to work. Energy has many forms: kinetic energy E = (1/2)mv² is the energy of a mass m due to its speed v. Heat is the kinetic energy of atoms and molecules. Gravity energy, electrical energy, and nuclear energy are potential energies that depend on gravitational, electrical, and nuclear forces, respectively. Chemical energy is the electrical energy of atoms and molecules. Light energy is the kinetic energy of photons, the particles of light. The equation for kinetic energy E = (1/2)mv² is only a first approximation. A relativistic calculation, in accordance with Einstein's theory, gives a more precise result. The transformation of gravitational potential energy into kinetic energy is an example of conservation of energy. The energy never disappears. When a body loses energy, it always gives it to another body, or it transforms it into another form of energy. A hydroelectric dam transforms the energy of gravity into electrical energy. An electric oven converts electrical energy into heat. A nuclear power plant converts nuclear energy into heat and heat into electrical energy. The same goes for all forms of energy production or consumption. The total amount of energy in the Universe is constant. The potential energy of the mass: E = mc². Even mass is potential energy. The mass m of any body is proportional to the energy E that could be released if it were annihilated: E = mc², or m = E/c², where c is the speed of light. A mass always has the potential to be annihilated. 
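The mass-energy relation at the end of this passage can be computed directly; the sample mass of one gram is an arbitrary illustration, not from the text:

```python
# E = m c^2 as a one-line computation; the sample mass is illustrative.
c = 2.99792458e8  # speed of light in m/s (exact by definition)

def rest_energy(m):
    """Energy (J) that could be released if a mass m (kg) were annihilated."""
    return m * c**2

E_gram = rest_energy(1e-3)  # one gram of matter: about 9e13 J
```

A single gram corresponds to roughly 9 × 10¹³ J, which is why mass is such a concentrated form of potential energy.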
The equation E = mc², discovered by Einstein in 1905, can be proven from the fundamental equations of electromagnetism, Maxwell's equations, discovered in 1865. It is explained and proven in the last chapter of this book. The gradient of a potential. When the work of a force does not depend on the path followed, we say that the force field derives from a potential, because we can calculate the force from the potential by taking its gradient. To understand the concept of gradient, the simplest way is to think about the relief of a mountain. The height h(x,y) of a point on the mountain surface above a point (x,y) in a plane at sea level is a scalar field. We can call it an altitude field. From this scalar field we can define a vector field. For each point (x,y) we define a vector v(x,y) whose direction is the line of greatest slope of the mountain at this point, which is directed upwards, and whose length is the slope of the line of greatest slope (if we rise 10 m when we move horizontally 100 m, the slope is equal to 10% = 10/100 = 0.1). The field of the vectors thus defined is the gradient of the altitude field: v = grad h. In these images a scalar field is represented by shades of gray. Its gradient is represented by the arrows. If the scalar field represents the altitude of a relief, the first represents a cone, the second, an inclined plane. Level lines are lines of equal altitude. If h were a potential, we would call them equipotential lines. The lines of greatest slope are the lines that always follow the greatest slope. They are always tangent to the vector field at each point. If h were a potential, we would call them field lines. The lines of greatest slope are always perpendicular to the level lines. Likewise, the field lines are always perpendicular to the equipotential lines. The black lines are field lines, and the brown lines are the equipotential lines of the electric field created by two equal and opposite charges. 
On a terrain map, adjacent contour lines always indicate the same difference in altitude. The greater the slope, the tighter they are. If in the same way we draw adjacent equipotential lines for the same potential difference, then the tighter the lines, the larger the field. For a scalar field h in a two-dimensional space, we calculate its gradient by taking its two partial derivatives: grad h has components ∂h/∂x and ∂h/∂y. The gradient grad V of a scalar field V in three-dimensional space is the field of vectors whose components are ∂V/∂x, ∂V/∂y and ∂V/∂z. By definition of the potential, if we move a standard body by the distance dx, its potential variation is dV = -fx dx, where fx is the component of the force f in the x direction. So fx = -∂V/∂x. Similarly, fy = -∂V/∂y and fz = -∂V/∂z. The force f on a standard body is the opposite of the gradient of the potential V: f = -grad V. This is the general formula that allows us to calculate a force field from a potential. The Coulomb potential produced by a charge. According to Coulomb's law, the electric field E created by a negative charge -q is the field of vectors directed towards the center of the charge whose magnitude is q/r² where r is the distance from this center. For a positive charge q, it is the same field, except that the vectors are in opposite directions. The electric field E created by an electric charge q derives from the Coulomb potential V = q/r where r is the distance from the center of the charge: E = -grad V = -grad (q/r). The equipotential surfaces are the spheres centered on the charge. Field lines are the straight lines that radiate from the center of the charge. This potential in a plane which passes through the charge can be represented by an altitude field: Lines of equal altitude are equipotential lines. 
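The rule f = -grad V can be verified numerically: estimating the gradient of V = q/r by finite differences recovers the Coulomb field q/r². A sketch in units with K = 1, as in the text; the step size and the test point are arbitrary choices for the example:

```python
import math

# A numerical check that E = -grad V recovers Coulomb's law from V = q/r
# (units with K = 1). The names and the test point are illustrative.
def V(x, y, z, q=1.0):
    """Coulomb potential of a point charge q at the origin."""
    return q / math.sqrt(x*x + y*y + z*z)

def minus_grad_V(x, y, z, h=1e-6):
    """E = -grad V, estimated with centred finite differences of step h."""
    return (-(V(x + h, y, z) - V(x - h, y, z)) / (2*h),
            -(V(x, y + h, z) - V(x, y - h, z)) / (2*h),
            -(V(x, y, z + h) - V(x, y, z - h)) / (2*h))

# At r = 2 on the x axis, Coulomb's law gives E = q/r^2 = 0.25, pointing away:
Ex, Ey, Ez = minus_grad_V(2.0, 0.0, 0.0)
```

The numerical gradient reproduces the 1/r² field and its outward direction for a positive charge, which is exactly the statement E = -grad V.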
They are the intersections of the equipotential spheres with a plane which passes through the electric charge. -dV/dr = q/r² = E, the length of the vector E if q is a positive charge. The Coulomb potential of an electric charge is similar to the gravitational potential of a mass: The lines are the field lines of the gravitational field produced by the Earth, the gravity field. The potential produced by several charges. The electric field created by several charges is the sum of the fields created by each of them separately. Now the gradient of a sum is the sum of gradients, because d(f+g)/dx = df/dx + dg/dx. The electric field created by several charges therefore derives from the sum of the electric potentials created by each of them separately. We thus prove that the electric field created by several charges also derives from a potential. Since Newton's potential is mathematically similar to Coulomb's potential, the same is true for the gravitational force. The field produced by two equal and opposite charges can be represented with field lines: with equipotential lines: or by representing its potential as a relief: We can also see the field lines in space: The gravitational potential produced by the Earth and the Moon is identical to the electric potential produced by two negative electric charges which would have electric charges proportional to the masses of the Earth and the Moon: The field lines are in blue, the equipotential lines in red. We can also represent this potential by an altitude field: The charge of a capacitor. A capacitor is made up of two conductive plates, very close to each other, and separated by an insulating material. Each of the terminals is connected to one of the plates. When the capacitor is connected to a generator, one plate loses some of its electrons and becomes positively charged, while the other plate gains electrons and becomes negatively charged. 
We prove with Gauss' theorem (in the chapter on Maxwell's equations) that the electric force field E produced by an infinite charged plane, whose charge density is uniform and equal to σ, is also uniform on each side of the charged plane, perpendicular to it, that its magnitude is σ/(2ε0) if it is surrounded by emptiness, and that its direction on each side of the plane is opposed to its direction on the other side. Since the electric field produced by several charges is the sum of the fields produced by each of them, the electric field produced by two parallel charged planes, whose surface charge densities are equal and opposite, is therefore uniform between the planes, perpendicular to them, and its magnitude is σ/ε0, where σ is the absolute value of the surface charge density, while it is equal to zero in all the space exterior to the two charged planes. The field produced by a capacitor of finite surface is identical to the previous one except at the edges: The work of the electric force on a unit charge between the plates is equal to σd/ε0, where d is their distance. This remains true for finite plates far from their edges. The voltage V between the plates is therefore proportional to their electric charge Q. The electric charge is therefore proportional to the voltage: Q = CV, where C is the capacitance of the capacitor. C measures the amount of charge received by one of the plates for a given voltage. C = ε0A/d for plates of surface A separated by vacuum, at a distance d. Proof: C = Q/V = σA/(σd/ε0) = ε0A/d. This is why we make capacitors with large surfaces wound on themselves, separated by an insulating film as thin as possible. ε0 is the vacuum permittivity. If the plates are separated by an insulating material of permittivity ε, the capacitance of the capacitor is C = εA/d. Electrostatics of conductive materials. 
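The parallel-plate capacitance formula can be turned into a small calculator; the plate dimensions and the 9 V example below are illustrative choices, not from the text:

```python
# A sketch of C = eps * A / d for a parallel-plate capacitor; the
# dimensions below are illustrative.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(A, d, eps_r=1.0):
    """Capacitance (F) of plates of area A (m^2) a distance d (m) apart.
    eps_r is the relative permittivity of the insulator (1 for vacuum)."""
    return eps_r * EPS0 * A / d

C = parallel_plate_capacitance(1.0, 1e-3)  # 1 m^2 plates 1 mm apart: ~8.85 nF
Q = C * 9.0                                # charge stored at 9 V, from Q = C*V
```

Even square-metre plates a millimetre apart store only nanofarads, which is why practical capacitors wind large thin films on themselves, as noted above.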
A material is conductive when it contains electrical charges that are free to move. In metals, these charges are the conduction electrons. In semiconductors, they can also be holes, absences of electrons that move in a sea of electrons. In ionic solutions like salt water, the mobile electrical charges are positive and negative ions. Inside a conducting material, the electrostatic field is always zero. Proof: if the field were not zero, the moving charges would be subject to electric forces which would move them, and the field would not be static. On the other hand, electric charges can accumulate on the surface of conductive materials. If we place a metallic object near electric charges, its conduction electrons move to exactly compensate for the electric field imposed from the outside to the inside of the metal. The sum of the two fields, the one imposed from the outside and the one imposed by the electrons which have accumulated on the surface, is always equal to zero inside the metal: On the outer side of the surface of a conductive material, the electric field is perpendicular to the surface. Proof: if the component of the electric field parallel to the surface were not zero, the surface charges would move to cancel it. If the surface charge density is positive, the surface electric field is directed outward from the conductive material. If it is negative, it is directed inward. Proof: if it were the opposite, the electrical forces would push the mobile charges inside the material. The electric field exerts a force on the surface electric charges but cannot move them, because they cannot move out of the material. On the surface of a conductive material, the electric field is σ/ε0, where σ is the surface charge density. The proof is given from Gauss' theorem, in the chapter on Maxwell's equations. Electric charges always move in a direction that tends to cancel out the cause of their movement. 
Two equal and opposite electric charges produce exactly zero electric field if they are exactly superimposed. When they attract each other, they move in a direction which tends to cancel the electric field which is the cause of their movement. When the terminals of a capacitor are connected with a metal wire, the conduction electrons are subjected to electrical forces, imposed by the charges on the capacitor plates, which move them in the direction of discharge, not in the direction of an increase in charge. Electrical polarization. A body is electrically polarized when a charge difference appears between two of its points. Conductive bodies are polarized when they are placed in a uniform external electric field. Opposite surface electric charges appear on two opposite sides of the object. Insulating bodies are also polarized by a uniform electric field. Surface electric charges appear on two opposite sides. How is this possible, when by definition, an insulating body has no free electric charges to move? In an insulating body, the electrons are attached to the nuclei. There is no free conduction electron to pass from one nucleus to another. Around the nucleus to which they are attached, the electrons have retained a little mobility. They can concentrate on one side or the other in order to compensate for the external electrical force to which they are subjected. We can think of electrons trapped in an atom or molecule as an electric fluid which is retained by the nuclei and which can be deformed. This is why insulating bodies can be polarized by an electric field like conducting bodies. Inside an insulating body, electric charges are displaced uniformly when it is polarized by a uniform external field. The charge density therefore remains zero inside the material. The surface charge density is the only one that varies. The movement of positive and negative charges inside an insulating body at the moment of its polarization is a transient electric current. 
But a permanent electric current cannot flow through an insulating material. When a body is electrically polarized, it orients itself in the direction of the applied electrical forces: The potential energy of a system of electric charges. How to calculate the electric potential energy of a system of charges? A charge q placed in an electric potential V has an electric potential energy qV. If we add up the electric potential energies qV of all the charges, we do not find the right result. Why? To calculate the potential energy of a system, we must calculate the energy spent or received when it is assembled. When we assemble electric charges, the electric field they must pass through is not the same at the beginning and at the end of the assembly. The potential V that we calculate when the system is assembled is not the potential of the field that the charges encountered when they were assembled; it is only the potential of the field that a new charge would encounter if brought near the system. The first of the assembled charges does not encounter any electric force field to be put into place, because no charge is present before it. The second charge meets the field produced by the first. The third charge meets the field produced by the first two, and so on until the last charge, which encounters the field produced by all the previous ones. "Example: the potential energy of a charged capacitor" To charge a capacitor, electrons must be moved from one of its plates to the other. The first charge dq moved encounters no field, because the capacitor is not charged. The work dW that must be done to move it is therefore zero. If the capacitor is already charged by a charge q, a new charge dq encounters an electric force field, therefore a potential difference V = q/C between its starting point and its arrival point. 
So dW = (q/C) dq, and the total work to bring the charge from 0 to Q is W = Q²/(2C). W = Q²/(2C) = (1/2)QV is therefore the electric potential energy of a capacitor charged by a charge Q = CV, where C is its capacitance and V the potential difference between its plates. Where is the electric potential energy? Electric potential energy is attributed to electric charges. Is this energy carried by the charges? Is it located on the charges? E = mc². Mass is always energy. Energy always has mass. This equation is proven at the end of this book, where it is shown that light trapped in a box increases the mass of the box. Light therefore has a mass equal to its energy divided by c². If the electric potential energy were localized on the charges, variations in their energy would cause their mass to vary. But this effect has never been observed. The electric potential energy of a system of charges is in all the space where they produce an electric field; it is carried by the electric field. The volume density of electrical energy in an electrostatic field E is ε0E²/2 = dU/dv, where dU is the electric potential energy contained in a small volume dv. When we bring two electric charges of the same sign together, we increase the energy of the electric field: If we bring two charges of opposite signs together, we reduce the energy of the electric field. The same is true for the nuclear forces that hold neutrons and protons together in a nucleus. We have to provide them with energy to separate them. This is why their mass is greater when they are separated than when they are united: It takes infinite energy to separate the three quarks that make a proton, or a neutron. This is why we can never observe an isolated quark. 
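The charging argument can be checked numerically: summing the work (q/C) dq over many small slices of charge reproduces Q²/(2C). A sketch with illustrative values (the 1 µF capacitor and 3 V are arbitrary examples):

```python
# A numerical version of the charging argument: each charge slice dq moved
# onto the capacitor does work dW = (q/C) dq against the voltage already
# present. Summing these matches W = Q^2 / (2C). The values are illustrative.
def charging_energy(Q, C, steps=10000):
    """Approximate the work needed to charge a capacitor to charge Q."""
    dq = Q / steps
    # midpoint rule: the voltage while moving the i-th slice is (i + 0.5)*dq/C
    return sum(((i + 0.5) * dq / C) * dq for i in range(steps))

C = 1e-6                # 1 microfarad
Q = 3e-6                # charge after connecting to 3 V (Q = C*V)
exact = Q**2 / (2 * C)  # closed form: 4.5e-6 J
approx = charging_energy(Q, C)
```

The factor 1/2 appears naturally: the first slices of charge cost almost nothing, and only the last ones are moved against the full voltage.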
Let us show that ε0E²/2 allows us to correctly calculate the electric potential energy of a charged capacitor: U = (ε0E²/2)Ad, where A is the surface area of the capacitor, d the distance between the plates, Ad the volume between the plates and E = σ/ε0 the field produced by a surface charge density σ. Now Q = σA and V = Ed = σd/ε0, so U = (ε0/2)(σ/ε0)²Ad = σ²Ad/(2ε0) = Q²/(2C) = (1/2)QV. When we define the potential, we are free to choose a point of zero potential. To calculate electric forces, any choice is suitable because two potentials that differ by a constant have the same gradient. But to calculate the energy, and the mass it carries, we are obviously not free to modify the energy of all the charges by choosing another point of zero potential. The point of zero potential must be chosen in such a way that an electric charge can be placed there without making any effort. This is why when we calculate the electrostatic energy of a system of charges, we assume that they are placed in empty space, and we define a zero potential at infinity. The force exerted by electric charges tends to zero at infinity. There is no effort to be made against their electric force if we are very far from them. When a dipole moment is created by the separation of equal and opposite charges, as when charging a capacitor, we can reason as if the charge density were initially zero everywhere. So there is no electric force anywhere. Any point can therefore be chosen as a point of zero potential.
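This equality can also be verified numerically for a parallel-plate capacitor; the dimensions and charge density below are illustrative assumptions:

```python
# A check that the field-energy density eps0*E^2/2, integrated over the volume
# between the plates, equals the capacitor energy Q^2/(2C). Values illustrative.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

A, d, sigma = 0.5, 1e-3, 2e-6  # plate area (m^2), gap (m), charge density (C/m^2)

E = sigma / EPS0               # uniform field between the plates, sigma/eps0
u = 0.5 * EPS0 * E**2          # energy density eps0 E^2 / 2, in J/m^3
field_energy = u * A * d       # the field is confined to the volume A*d

Q = sigma * A                  # total charge on one plate
C = EPS0 * A / d               # parallel-plate capacitance
capacitor_energy = Q**2 / (2 * C)
```

Both routes, integrating the energy density over the volume and using the circuit formula Q²/(2C), give the same number, which is the point of attributing the energy to the field itself.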
7,413
Moving objects in retarded gravitational potentials of an expanding spherical shell. Appendix. __NOEDITSECTION__
26
Electricity and magnetism/Electrodynamics. An electric circuit can be represented by a circuit of moving water: With such a representation, the equality of tensions in parallel branches is clearly visible: Batteries and generators. The electric force on a positive charge pushes it in the direction of decreasing potential. Positive charges go down the potential under the effect of the electric force. Conversely, negative charges go up the potential. We generally reason with the conventional direction of electric current, as if it were a current of positive charges which go down the potential. This is why the conventional direction of current is the opposite of the electron current. In the following we reason with the conventional direction, because it is more intuitive to think of charges which go down the potential than of charges which go up it. In a passive dipole, the current always goes down the potential. Batteries and generators are active dipoles, capable of imposing a current which go up the potential, like a pump capable of raising water. In batteries, it is the chemical energy stored in the battery materials that is used to pump electrical charges. In turbine generators, current is pumped through the turbines through magnetic forces. The voltage in an electrical circuit. An electrical circuit is an assembly of elements brought into contact by their terminals. Connecting wires, generators, batteries, resistors, capacitors and coils have two terminals. Transistors have three terminals. A circuit can be very simple, a battery that powers a lamp for example, or very complex, a microprocessor. Where do the voltages across the elements of an electrical circuit come from? They are produced by electric forces which are themselves produced by electric charges. The plates of a charged capacitor exert electrical forces that produce an electrical voltage in the circuit between them. When the voltage varies, the charges which produce these voltages must also vary. 
A movement of electric charges is an electric current. The electric currents which produce voltage variations are charging currents. The electric currents which charge the capacitors, or which discharge them, are charging currents, or charge variation currents. Electric forces are the accelerations, not the speeds, of electric charges. Mass means that acceleration can have a very different direction than velocity. For example, in a constant speed turn, the direction of acceleration is perpendicular to the direction of speed. But when friction forces dominate, they cancel all inertial effects, and it is no longer the acceleration but the speed which is then proportional to the applied force, and in the same direction as it. We can generally ignore the inertial effects of electric current, as if friction forces on electric charges always dominated the inertial effects. So the electric current is in the direction of the electric field inside the conductive materials. Positive charges move in the direction of the electric field. Negative charges go in the opposite direction to the field. The elements of an electric circuit are generally electrically neutral. The sum of their negative charges is exactly equal and opposite to the sum of their positive charges. When we connect two terminals of a circuit to a generator, we almost instantly modify the electric potential energy of all the charges it contains. But any energy gained or lost by one charge is exactly offset by the energy lost or gained by an opposite charge. As if the charges were the two sides of a balanced scale. Any change in the gravitational potential energy of one of the plates is exactly compensated by the change in the potential energy of the other. If potential energy compensation did not take place when turning on a circuit, one might have to supply energy to connect the terminals of a circuit. But this effort is generally not necessary. We can turn on the light without making any effort. 
Most of the time the connection wires of a circuit are chosen so as not to heat up. Very little energy is lost by the flow of current through a wire. This is why the voltage across a wire in a connection is generally considered zero or negligible, as if electrons could travel through the wires without losing energy. The laws of voltages in a circuit. Three theorems are fundamental for calculating voltages in a circuit: Proof: UXY = VX - VY is the voltage between X and Y. For a closed loop ABC, UAB + UBC + UCA = VA - VB + VB - VC + VC - VA = 0 Proof: UAC = VA - VB + VB - VC</sub > = UAB + UBC Proof: let BC and DE be two dipoles placed in parallel between points A and F. AB, AD, CF and EF are connection wires. The voltage across their terminals is therefore zero. UAF = UAB + UBC + UCF = UBC = UAD + UDE + UEF = UDE "Remarks :" The power of the electric current. Electric current is a current of electric charges. In metals, it is a current of electrons, which are negative charges. In salt water, electric currents are ion currents. The intensity I of an electric current is measured in a way similar to the flow of a river or a jet of water, but instead of counting cubic meters or liters, we count electric charges. This is why the intensity I is measured in Coulombs per second. One Ampere (A) is one Coulomb (C) per second(s). 1 A = 1 C/s A charge equal to one Coulomb which crosses a potential difference equal to one Volt loses electrical energy equal to one Joule, by definition of Volt. The energy lost by the charge is energy it provides. The power supplied by a current equal to one Ampere passing through a potential difference equal to one Volt is equal to one Joule per second equal to one Watt (W): 1 W = 1 J/s = 1 V. 
1 A = 1 V.A The power P supplied by an electric current passing through a dipole is the product of the intensity I of the current and the voltage U across the dipole: P = UI Where U and I are measured in Volts and Amperes, respectively, P is measured in Watts. The Joule effect and Ohm's law. An electrical resistance is a dipole that resists the flow of electric current. The electrons or ions are accelerated by the electric field but lose all the kinetic energy they have thus gained by giving it up to the material they pass through. The kinetic energy thus released is transformed into microscopic kinetic energy of the atoms or molecules of the material. Heat is the microscopic kinetic energy of atoms, molecules and all microscopic movements of matter. The hotter a body is, the more its microscopic constituents are agitated and excited. When the electrons of a metal are subjected to a potential difference, they are accelerated by the electric force, and slowed down by their collisions with the metal. However, light is produced by the acceleration and braking of electrical charges. So, the higher the electrical voltage, the more the metal heats up, this is the Joule effect, and the more light it produces. The Joule effect, which makes the light of electric lamps with filament, also means that a short circuit can cause a fire. It is as if the electrical charges were rubbing the resistant material, because the friction slows down and produces heat. We warm our hands by rubbing them. When the voltage across the resistor is constant, the intensity I of the current does not vary, because the electrical charges are not accelerated on average, because all braking compensates for all accelerations. A steady state is established which depends on the voltage U and the resistance R of the dipole: I = U/R which we write instead: U = RI This is Ohm's law. R is a coefficient which measures the resistance of the dipole to the flow of current. 
The larger R, the smaller I, for the same voltage U. The unit of measurement for electrical resistance is the Ohm (formula_1). From P = UI and U = RI we deduce P = RI2 = U2/R. This is the power supplied, or dissipated, by the Joule effect. The resistance of a connecting wire is close to zero. If U is different from 0, U2/R can be very large. The electrical power dissipated in a short circuit can be very large and cause a fire. If a wire is resistant, its resistance is proportional to its length. When a current flows through it, the potential decreases linearly with length in the direction of the current. A material is superconductive when it is perfectly conductive, when its electrical resistance is always exactly equal to zero. Inside a superconducting material, the electric field is always zero. Proof: if the electric field were not zero, there would be an electric voltage between two points, and according to Ohm's law, there would be an infinite electric current, since the resistance is zero. But an infinite current is impossible. So the field is zero. The omnipresence of electrical circuits. The study of electrical circuit dynamics is not just for electrical circuit designers, because electrical circuits are already naturally present everywhere. Materials can always receive or transfer electrons and thus be electrically charged. Several bodies combined therefore always behave like capacitors. Even an ion is similar to a plate of a charged capacitor. Ohm's law shows that electric currents follow paths of least resistance. For the same potential difference, the current is greater as the resistance is lower. Electric currents appear naturally as soon as charge differences appear and the materials are not perfectly insulating, because the charge differences cause potential differences to appear, as in a capacitor. 
The resistance of a perfectly insulating material is infinite, but insulating materials have a breakdown threshold, a voltage beyond which they allow current to pass, this is the electric arc, lightning for example. We can make models of most natural phenomena by reasoning about electrical circuits that connect resistors, capacitors, generators and coils. Transistors are electrically controlled variable resistors. The propagation of the nerve impulse. A nerve fiber is made up of axons, which are extensions of nerve cells, neurons. An axon is a long tube immersed in salt water. Its membrane is electrically insulating, because it does not allow ions to pass through. An axon can behave like a capacitor, because equal and opposite electrical charges can appear on either side of its membrane and thus produce a potential difference between the inside and outside of the axon. The membrane is crossed by ion pumps. These pumps accumulate opposite charges on either side of the membrane and produce a transmembrane electrical voltage. Ion pumps are like small generators, capable of imposing a voltage between their terminals, the interior and exterior of the axon. Ions can flow inside the axon, but water behaves like a material that resists the passage of current. We can make a model of the propagation of the nerve impulse with an electrical circuit made up of resistors and capacitors. The resistors represent the interior of the axon, which resists the flow of current. Capacitors represent the membrane of the axon, which can accumulate opposite electrical charges on its two surfaces. The axon can be considered as a succession of capacitors formula_2 and resistances formula_3 where formula_4 is a capacitance per unit length, and formula_5 a resistance per unit length. This model is a simplification of that of Hodgkin-Huxley. 
According to the law of charge of a capacitor formula_6 where formula_7 is the charge of one face of the axon membrane between x and x +dx, and formula_8 is the charge per unit length. According to Ohm's law formula_9 so formula_10 formula_11 so formula_12 formula_13 It is better to write: formula_14 This is the diffusion equation. This means that the electrical signal propagates very slowly, like a dye diffusing in a liquid. Nerve fibers can be several meters long. If we put a dye on the end of a pipe, we have to wait a very long time before it makes its presence felt on the other side. How then is it that nerve impulses can propagate at several meters per second? Signal propagation is accelerated by amplifiers along the length of the axon. The membrane is crossed by pores which function like electrical switches. They may or may not pass current. These pores are electrically controlled by transmembrane voltage. Like transistors, they are electrically operated electrical switches and can function as amplifiers. Even if the control signal is weak, the effect, the electric current, can be strong. The pores are distributed at regular distances along the axon. They function as signal transmission relays. Most of the time a pore is closed, and the transmembrane tension has a constant value, produced by the ion pumps. But if the transmembrane tension decreases sufficiently, a pore can open, allowing ions to pass through, and thus further reduce the transmembrane tension. This reduction in tension propagates to the next pore, which in turn opens. The pores open successively like a chain of dominoes such that each one falls on the next. This is the propagation of nerve impulses. The speed of propagation is very slow, from a few tens of centimeters per second to a few tens of meters per second, because current is required to discharge the membrane, and because the interior of the axon resists the passage of the current. 
If the axons were metal wires, the signal propagation could be much faster, close to the speed of light, 300,000 km/s. This is why computers are much faster than brains. Fast-transmitting axons (especially those running from the feet to the head) are surrounded by myelin. These are insulating cells, like an insulating wall on an electrical wire. They reduce the capacity of an axon wall, because they increase its thickness, and thus accelerate the propagation of the nerve impulse, because thanks to them, it takes less time to discharge the membrane. Myelin is made of cells that wrap around an axon: The axon is in the center. Its wall is thickened by a myelin cell (a Schwann cell) which has wrapped around the axon. The insulating myelin wall is interrupted at the nodes of Ranvier to allow the flow of ion currents which charge the axon (the ion pumps) or discharge it (the signal transmission relays): (a) dendrite, (b) cell body, (c) nucleus, (d) axon cone, (e) myelin, (f) nucleus of a Schwann cell, (g) node of Ranvier, (h) axon terminal The decision to emit or not a signal is made in the axon cone and is relayed by the nodes of Ranvier. Myelin is the white matter of a brain, neurons are the gray matter. The white matter is particularly visible between the two cerebral hemispheres because signal transmission must be rapid: Computers are not the only machines that use electric current to make calculations and transmit information. Nature invented electrical calculators before us: neural systems, and especially brains. Like computers, brains operate with a binary system: either the signal passes through an axon, or it does not. There is no third possibility. God has given us the power to find science. When we seek the laws of all that is, we find them, provided we work well. With science, we can understand everything there is to understand, including ourselves. God has not deprived us of the laws that explain what we are. He teaches us the truth about all beings. 
By giving us the laws of electromagnetism, he gives us laws that explain almost everything, even us.
3,559
Kitchen Remodel/Tile work. Floor tiles. There is not too much new wisdom that I could possibly contribute to the topic of tile selection or tile laying. All imaginable information can be found on the internet, including tons of video tutorials on Youtube. But there are still some ideas that I would like to share. Product selection. For my own kitchen project, I picked a 1x2 ft tile from my local home improvement store that would match the colors both of my cabinet fronts and of my wall shelves (grey and brown). Since we had gone considerably over budget with our remodel in its entirety, we were happy to find tiles that were reasonably priced but turned out to be quite decent in quality, that is they were neither bent nor chipped. For the grout, we decided on the darkest shade that was available (after noticing somewhere else that a dark tile with a grout that is even slightly less dark looks really mismatched). Another consideration was that we did not want every crumb that falls on the floor to show; the natural answer to that was to pick a tile with an irregular and distractive pattern. We have also been thinking back and forth about the question of whether the space should have a bright or a dark floor. We decided for dark, because the eye catching element of our kitchen is the countertop, with its vast surfaces that we worked so hard to highlight – with our lighting scheme, with the very understated grey cabinet fronts and with the two waterfalls. A white or even tan colored floor would have undermined that. But the dark floor tiles point the bright countertop out, strongly so. Mapping. The best advice that I can give for tile laying is to plan it all out in detail before you start the project. Whenever I have tiles to lay, I start with an accurate floor plan. I do this in Inkscape, but any other vector graphics software will do. Next, I create a tile pattern, based on the exact dimensions of the tile and on the exact distance between the tiles (grout line). 
Then I play around with that pattern, shifting it left and right and up and down, in order to find the best layout in terms of beauty as well as of practical considerations (where will the first row of tiles being laid out?) and of avoiding of "ugly" cuts, especially avoiding of very narrow strips of tiles and of complicated cuts; we have a simple wet tile saw, which serves us well for our humble purposes, but it cannot, for example, perform U-shaped cuts. By the way, there is much more in the realm of tile patterns than just the same old same old regular offset (1st and 4th images). Tile laying is a great chance to be creative and to invent your own. If you can draw it, you can probably do it. I figured that in my project and with the type of tile that I had chosen, an irregular pattern (roughly represented in the 5th image) would work best. When we laid those tiles out, we followed strictly the horizontal lines that were shown in our map, but operated very flexibly with the length of the first tile that we put down in every single row. Whenever we had a cutting left from the previous row that made sense to use again immediately, we did so, sometimes starting from the right end, sometimes from the left. This saved us a lot of material, too. Other preparations. Another important thing, at least to me, was to "sort" the tiles – at least the dozen that were immediately to be used. Many modern tiles seem to have a naturally varied appearance, every specimen unique, but they are actually "printed", and the design will repeat. The number of variations probably very much depends on the product. I noticed that my product featured roughly 20 different variations, which is probably more than average. Anyway, I sorted them to avoid clusters of identical tiles, and I also turned half of them 180 degrees for maximum diversity. 
I also had some variations that I outright disliked; so I kept the majority of those for either hidden spots, like under the dishwasher and the fridges, or for pieces that had be cut anyway.
896
Chamteela. Chamteela, also known as Luiseño, is the indigenous language of the (Luiseño) people native to Southern California. This Wikibook will teach you the basics of how to speak, read, and write in the language. Table of Contents. More parts coming soon!
71
Chamteela/Part 1. The Chamteela language is written using a purely phonemic orthography, meaning that all of the words can be pronounced as they are written. In English, the word "tear" has two different pronunciations depending on the context, but all Chamteela words exactly match their spelling. As a result, it would be impossible to perform a spelling bee in Chamteela. Vowels. The following table shows the pronunciation of the Chamteela vowels. The bolded vowel letter (or letters) in the English words on the right column matches the Chamteela pronunciation of the corresponding vowel sound on the left. Consonants. Except for the following exceptions listed below, all of the consonants in Chamteela (along with the digraph "ch") are pronounced identically to how they are English, but without any aspiration. Aspiration is the puff of air that is pronounced in English when pronouncing the letters "ch", "k", "t", and "p" in certain contexts. To give you an idea of what aspiration is, place your fingers in front of your mouth while saying the English words "pin" and "spin". You will notice that you will release a puff of air at the beginning of "pin", but not at the beginning of "spin". This puff of air is aspiration. Now try pronouncing the word "pin" without releasing any air. Try it again with "chin," "tin," and "kin." Once you have got that down, you will be able to correctly pronounce Chamteela words. Here are all of the exceptions to the Chamteela consonants in terms of how they are pronounced: Syllable Structure and Stress. Chamteela words divide their syllables similarly to English, with vowel-initial syllables occurring only at the beginning of words or after another vowel. All other syllables begin with a consonant. By default, Chamteela words are stressed on the first syllable after the possessive prefix (which is discussed later on in this Wikibook) if one exists in that word. 
The word Chamteela, for instance, is stressed on the second syllable since cham- is a possessive prefix. However, words that do not meet this general rule of thumb are stressed on the syllable with an accent mark on the vowel. For example, the words "polóv" (good) and "koyóowut" (whale) are stressed on their second syllables; stressed long vowels are written with the accent mark only on the first letter of the long vowel. If there is only one consonant between a stressed and an unstressed vowel, then the consonant is pronounced as both the final consonant in the stressed syllable and as the first consonant in the unstressed syllable, unless it is one of the following consonants that are pronounced differently than they are spelled in this context. In the following table, the hyphen represents the normally unwritten syllable break, the capital letter S stands for the stressed vowel, and the capital letter U stands for the unstressed vowel. Typing Chamteela. Any mobile or nonmobile keyboard that can type Spanish and has the dollar sign symbol can also type Chamteela. If you would like to type in Chamteela, research a Spanish language keyboard layout with a dollar sign symbol that works right for you.
802
Transportation Deployment Casebook/2024/Philippine National Road Network. The Philippine National Road Network is a collection of roads constructed, maintained, and upgraded by the government of the Republic of the Philippines, through its road infrastructure arm, the Department of Public Works and Highways (DPWH). Roads are a primary mode of land transportation in the Philippines. Although the country is an archipelago, its national highway network is interconnected with roll-on, roll-off ports which connect most of the islands. Advantages. The growing road network of 35,164 kilometers at present has always been given primary focus by the government for continuous improvement as it is strongly tied with the economic development of the country. Its main advantages being: Main Markets. Funding for the road network is sourced from government revenues and as a result, the national road network of the Philippines is not tolled. Unlike the United States Interstate Highway System, permitted types of vehicles, including motorcycles, private cars, trucks, and buses, can freely traverse the road without payment. Nevertheless, inter-island movements would require payment to the ships which carry the vehicles from one island to another. Background and History. Geography and its Implications. The Philippines is an archipelago situated in Southeast Asia surrounded by the Pacific Ocean in the east, Taiwan in the north, South China Sea in the west, Malaysian state of Sabah in the southwest, and Celebes Sea and Indonesia in the south. The country is divided into three major island groups namely, Luzon, Visayas, and Mindanao. While Luzon and Mindanao are larger islands with smaller islands, Visayas is a collection of several islands. With the said geography, land transport may only be limited to the network of roads in each island, if there was any in the pre-colonial times. 
At present, land transportation is complemented by maritime travel by accessing roll-on, roll-off ports which ferry vehicles from one island to another. Moreover, moving around the Philippines can be done by land, sea, or air travel with the latter being the fastest and convenient, but at the same time a fairly more expensive option. Pre-Colonial Era. While records of pre-colonial land transport mode are not evident, it has been apparent that the area where the present-day Philippines is situated played an important role in the maritime movement of people, material culture, and ideas. Inter-island movement of people, including farmer Austronesian-speaking communities from China and Taiwan, already existed. Fishing and farming are the main sources of food of the indigenous Filipinos who lived mostly in small village communities. Its well-preserved rice terraces in the modern-day province of Ifugao showed pre-colonial prowess in the field of irrigation. Spanish and American Colonial Period. Despite various communities existing in the archipelago in the prior years which can potentially influence the development of the road network of the Philippines, its earliest record can be traced during the Spanish occupation of the Philippine archipelago. Forced labor enabled the construction of the first roads in the Philippines dating as early as 1565. A well-preserved cobblestone road example is in Calle Crisologo located in Vigan, a city in the province of Ilocos Sur in Luzon island. The city is considered as one of the most intact Spansh planned colonial towns in Asia . Forms of transportation began to evolve especially with the use of the calesa, a horse-drawn carriage. Horses are not a local fauna in the Philippines and were imported from other countries. Prior to its availability, carabaos or an indigenous species of water buffalos were used to draw the carriages . 
The roots of the present-day DPWH came from two colonial government entities, the Bureau of Public Works and Highways (Oficina de Obras Publicas y Carreteras) and the Bureau of Communications and Transportation (Oficina de Comunicaciones y Transportes) in 1868 . Thirty years later, with the declaration of Philippine independence, an organic revolutionary government decree was issued by the first Philippine president, Emilio Aguinaldo, on June 23, 1898 creating the Department of War and Public Works. This government office was tasked to primarily construct and maintain roads and bridges in the country. Building of trenches as well as fortifications necessary for the ongoing war were also under the jurisdiction of this body. Not long after its declared independence, the Philippines went under the American rule after Spain ceded the archipelago with the United States (US). In turn, the construction and maintenance of the national roads went under the control of the US Army engineers. When the World War II broke out in 1941, the Philippine government ceased its operation almost entirely and was occupied by the Japanese from 1942 to 1945. The Philippine National Road Network. The United States recognized the independence of the Philippines on July 4, 1946. At this time, most of the infrastructure including roads were damaged during the World War II. Thus, the United States Bureau of Public Roads assisted the war-torn country in undergoing repairs as well as implementing a national highway program under the Philippine Rehabilitation Act of 1946 . The US authorized the release of USD73.00 million or USD1.15 billion adjusted for present-day values for this purpose . The rehabilitation works eventually paved the way in the realization of the Philippine National Road Network. The first recorded comprehensive policy for the classification of roads in the Philippines was Executive Order No. 
483, Series of 1951 , which established the limits of public roads and declaring which roads are under the jurisdiction of the national government for maintenance as well as construction in the case for new roads . Philippine Highway Act of 1953. In 1953, the Philippine Highway Act was passed which supported an administrative mechanism for the improvement and maintenance of national roads and bridges . An equitable management of funds across the provinces of the Philippines was also mandated to ensure the strategic development of the road network. Some city and municipal roads gained more strategic value for national development. However, due to it being under the watch of local government units, it did not enjoy the same amount of funding the national roads received for maintenance and upgrading. Thus, Executive Order No. 113, Series of 1955 was signed by then President Ramon Magsaysay, which aimed to reclassify roads by categorizing national roads into two namely, primary and secondary, and important provincial and city roads were labeled as “national aid” roads. This allowed local governments access to national government funding for their provincial and city roads . From Growth to Maturity. As the interpretations of the existing policies over national roads continued to evolve, the national road network length became shorter or longer over the next 15 years. In the same duration, the Department of Public Works and Communications had several reorganizations of its structure including becoming the Department of Public Works, Transportation, and Communication (BPWTC). The Pan-Philippine Highway System. A priority project of Diosdado Macapagal who served president from 1961 to 1965, the Pan-Philippine Highway began construction. The Bureau of Public Highways under the DPWTC fostered the construction of 7,633 kilometers of national, provincial, city, and municipal roads between 1962 and 1964. 
Several bridges were also built and improved with a cumulative length of 3,500 meters . Throughout the Philippines, the Pan-Pacific Highway has an overall length of 3,379.73 kilometers connecting the three major island groups of the Philippines – Luzon, Visayas, and Mindanao . This highway which forms part of the Philippine National Road Network that is presently known as Asian Highway 26 (AH26) of the Asian Highway Network. Most of the construction of the said highway and other national roads continued during the presidency of Ferdinand Marcos, Sr. funded through foreign loans. Between 1978-1982, the completion of its regional network in Visayas and Mindanao were undertaken. The massive infrastructure development at this time aimed to generate jobs as well as develop local skills and indigenous resources . Martial Law Years. Marcos, Sr. served for three terms as president of the Philippines from 1965-1969, 1969-1972, and 1981-1986, wherein during the second term he declared Martial Law (1972-1981). The government at this time consolidated all infrastructure functions into the Department of Public Works, Transportation, and Communications (DPWTC) as part of its Integrated Reorganization Plan to optimize its operations . Under this Department, the Bureau of Public Highways became the Department of Public Highways in 1974, separate from the DPWTC. In 1976, there was a shift in the form of government which yet again resulted into the renaming of the two departments into ministries – Ministry of Public Works, Transportation, and Communications (MPWTC) and Ministry of Public Highways (MPH). In 1979, further restructuring happened and the MPWTC were divided into two more entities, Ministry of Transportation and Communications (MOTC) and Ministry of Public Works (MPW). During the leadership of Marcos, infrastructure development was a vital economic tool. 
However, in 1981, increased interest rates resulting from the recession in the United States bloated the cost of the Philippines' loans. The Philippine economy declined along with those of other debt-dependent countries in the developing world. This likely also led to the merger of the MPW and the MPH in 1981 into the Ministry of Public Works and Highways (MPWH), both as an austerity measure and to streamline the infrastructure services of the Philippines. Post-Martial Law Years. Despite the stunted growth of the road network due to economic pressures, the government aimed to increase the national road length by 55 percent by 1987. Road network development now focused on the intensive construction of farm-to-market linkages. The Visayas and Mindanao, which had not received much infrastructure support, were given priority. In addition, an estimated 7,600 kilometers of national roads were due for rehabilitation, improvement, or upgrading. The shift of power from Marcos, Sr. to Corazon Aquino brought more funding for other infrastructure sectors as well as support for science and technology and for the education sector. In 1987, under the new constitution of the Philippines, which remains in force to the present, the education sector was mandated to receive the top share of the national budget. In addition, the MPWH reverted to being a government department and is now called the Department of Public Works and Highways (DPWH). The government targeted that 55 percent of the national roads would be paved. By the end of the Aquino administration and the transition to the leadership of Fidel Ramos in 1993, 3,858 kilometers of national roads had been constructed, upgraded, or rehabilitated by the DPWH. In 1997, the total road network (including local roads) of the Philippines reached 190,030 kilometers, translating to an overall 0.63 kilometers of road per square kilometer, but it continued to lag behind Malaysia and Thailand in terms of paved road ratio. 
Thus, most of the efforts during this period went toward increasing the paved road ratio. To access more funding, roads which gained more strategic importance were converted into national roads. Strong Republic Nautical Highway System. By 2004, the Philippines had approximately 202,000 kilometers of roads, of which a mere 15 percent was within the national road network maintained by the DPWH. Roll-on, roll-off vessel operations became a new way of further integrating the national road network across the archipelago and of complementing the earlier Pan-Philippine Highway System situated in the country’s east. Although started in 2003 under then President Gloria Macapagal-Arroyo, the Strong Republic Nautical Highway System became more prominent as it generated more traffic demand, prompting improvements in the national road network. Public transport also benefited from the 919-kilometer nautical highway system, wherein buses from Manila as well as key development centers can now travel along the national road up to the ports, where they are ferried by ships to other islands and then continue their journey by land. Current Functional Classification of Roads. From 2009 to the present, the DPWH released several issuances pertaining to the national road network. The policies governing the national budget for road maintenance and upgrading kept changing as local governments with minimal funding sources turned to the national government for the improvement of their roads. To resolve this issue, the DPWH issued Department Order No. 133, Series of 2018, which effectively defined in detail the functional classification of roads. This maintained the integrity of the national road network and prevented the potential allocation of funds to local roads. Future Prospects. The allocation of government resources has continued to be a challenge given the ever-growing needs of road transport. 
Thus, the definition of a national road remains ambiguous and is prone to misaligning the government's limited funding. In addition, increased car ownership in the Philippines has sustained demand for more road capacity, which can fall prey to further traffic congestion due to induced demand. The direction of the Philippine National Road Network can either be to build more roads or to improve other means of transport, such as public transportation. The roll-on, roll-off ports which complement the national road network may become less useful in the future as the government eyes connecting the islands with physical bridges, several of which are already in the detailed engineering design stage. This may eventually lead to yet another birth stage, with the islands connected and demand for national roads increasing. Quantitative Analysis. Methodology. With the road length information obtained from the DPWH and the Philippine Statistical Yearbooks, the life cycle stages of the development of the Philippine National Road Network were determined by means of an S-curve using a three-parameter logistic function as described by the equation: formula_1 In estimating the saturation level, K, as well as the coefficient, b, ordinary least squares (OLS) regression was used, with the coefficients calculated through the intercept method as follows: formula_2 S-Curve and Life Cycle Stage. The S-curve and the life cycle stages of the Philippine National Road Network, from the passing of the Philippine Highway Act in 1953 up to the present, show a good fit. The birth stage shows a rapidly growing system until 1958. The stagnant years are due to the absence of data during this period until 1965, when the network length decreased significantly due to policy changes under the new administration of Marcos, Sr. A rapid increase can be observed until 1971, when the system reached maturity during the period of aggressive construction works. 
The year 1973, by which presumably most of the road network had been established, coincided with the inflection point. Interestingly, it can be observed that the actual kilometers of national road were either below or close to the predicted length between 2009 and 2018. After the passage of Department Order No. 133, Series of 2018, which made the national road classification much clearer, the national road network began to move further above the predicted values. This may indicate further growth of the network in the future.
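The fitting procedure described in the methodology section can be sketched in Python. The road-length figures and the saturation level K below are illustrative placeholders rather than the actual DPWH data, and the linearization shown is the standard OLS approach for a three-parameter logistic curve.

```python
import math

# Illustrative (hypothetical) national road lengths in km by year;
# these are placeholders, not the actual DPWH figures.
years = [1953, 1960, 1970, 1980, 1990, 2000, 2010, 2018]
lengths = [8000, 12000, 18000, 24000, 27000, 29000, 30500, 31500]

K = 33000.0  # assumed saturation level in km (in practice K is also estimated)

# Linearize the logistic S(t) = K / (1 + exp(-(a + b*t))) as
# ln(S / (K - S)) = a + b*t, then fit a and b by ordinary least squares.
ys = [math.log(s / (K - s)) for s in lengths]
n = len(years)
mean_t = sum(years) / n
mean_y = sum(ys) / n
b = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, ys)) / \
    sum((t - mean_t) ** 2 for t in years)
a = mean_y - b * mean_t  # intercept recovered from the fitted slope

def predicted_length(t):
    """Predicted system size S(t) in km from the fitted logistic curve."""
    return K / (1 + math.exp(-(a + b * t)))

inflection_year = -a / b  # year at which S(t) = K / 2
```

With this linearization, the inflection year falls where the fitted line a + b*t crosses zero, which is where the S-curve switches from accelerating to decelerating growth.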
A-level Computing/AQA/Paper 1/Skeleton program/AS2024. AQA Computer Science Queue Simulator. This is for the AQA AS Computer Science specification. This is where suggestions can be made about what some of the questions might be and how we can solve them. Section A Predictions. The 2024 Paper 1 Section A will contain 4 questions worth 21 marks. (Q1 - 5 marks, Q2 - 4 marks, Q3 - 3 marks, Q4 - 9 marks). The EAD provides a table for the Q1 answers, which is likely a trace table. Section B Predictions. The 2024 Paper 1 Section B will contain 11 questions worth 24 marks. (Q5 - 2 marks, Q6 - 2 marks, Q7 - 1 mark, Q8 - 1 mark, Q9 - 2 marks, Q10 - 2 marks, Q11 - 4 marks, Q12 - 2 marks, Q13 - 3 marks, Q14 - 3 marks, Q15 - 2 marks.) Section C Predictions. The 2024 Paper 1 Section C will contain 3 questions worth 30 marks. (Q16 - 8 marks, Q17 - 9 marks, Q18 - 13 marks.) Q16 - Prediction. As there are no major outstanding errors in the program, the only noticeable change in functionality that could be made concerns the settings menu. Initially, entering any value apart from "Y" lets the user skip editing the settings; this should be changed so that the program accepts only Y and N as user inputs. An example has been given below in C#:

public static void ChangeSettings(ref int SimulationTime, ref int NoOfTills)
{
    SimulationTime = 10;
    NoOfTills = 2;
    Console.WriteLine("Settings set for this simulation:");
    Console.WriteLine("=================================");
    Console.WriteLine($"Simulation time: {SimulationTime}");
    Console.WriteLine($"Tills operating: {NoOfTills}");
    Console.WriteLine("=================================");
    Console.WriteLine();
    Console.Write("Do you wish to change the settings? Y/N: ");
    string Answer = Console.ReadLine();
    Answer = Answer.ToUpper();
    // Keep asking until the user enters Y or N
    while (Answer != "Y" && Answer != "N")
    {
        Console.WriteLine("Not a valid choice, please try again with Y/N");
        Answer = Console.ReadLine();
        Answer = Answer.ToUpper();
    }
    if (Answer == "Y")
    {
        Console.WriteLine($"Maximum simulation time is {MAX_TIME} time units");
        Console.Write("Simulation run time: ");
        SimulationTime = Convert.ToInt32(Console.ReadLine());
        while (SimulationTime > MAX_TIME || SimulationTime < 1)
        {
            Console.WriteLine($"Maximum simulation time is {MAX_TIME} time units");
            Console.Write("Simulation run time: ");
            SimulationTime = Convert.ToInt32(Console.ReadLine());
        }
        Console.WriteLine($"Maximum number of tills is {MAX_TILLS}");
        Console.Write("Number of tills in use: ");
        NoOfTills = Convert.ToInt32(Console.ReadLine());
        while (NoOfTills > MAX_TILLS || NoOfTills < 1)
        {
            Console.WriteLine($"Maximum number of tills is {MAX_TILLS}");
            Console.Write("Number of tills in use: ");
            NoOfTills = Convert.ToInt32(Console.ReadLine());
        }
    }
}
Overview of Elasticity of Materials/Introduction. Currently, there is no suitable open-source text for materials science that not only includes the basics of the topic, as seen in W.D. Callister's "Materials Science and Engineering: An Introduction", but also covers additional concepts such as the derivation of Mohr's circle and an introduction to tensors. Nevertheless, these concepts are key for students to gain an understanding of more advanced topics in materials science and engineering. This text is built as an open-source companion for the currently available texts. It attempts to address these additional topics and give further detail on their application. While this text goes in depth on the linear elastic behavior of materials, there are currently no plans to include boundary conditions, which are essential for advanced analysis. The non-linear response often observed in polymeric and biological materials is also neglected here. Thus, this content should only be considered an introductory overview of the topic. A basic level of understanding of materials science, including stress and strain, is required to appreciate the real-world applications of this text. While the equations and concepts given here can be immediately applied by the reader, a basic understanding of the mathematical foundations presented is recommended for advanced applications. SPB -- April 2022
Technical Analysis/Foreword. Technical analysis is a trading discipline employed to evaluate investments and identify trading opportunities by analyzing statistical trends found in charts gathered from trading activity, such as price movement and volume, to determine future price behavior. Technical analysis is applicable to stocks, indices, commodities, futures, or any tradeable instrument where the price is influenced by the forces of supply and demand. The roots of modern-day technical analysis stem from the Dow Theory, developed around 1900 by Charles Dow. Stemming either directly or indirectly from the Dow Theory, these roots include such principles as the trending nature of prices, prices discounting all known information, and volume mirroring changes in price. Technical analysis assumes the market to be 80% psychological and 20% logical. The psychological or logical part may be open for debate, but there is no questioning the current price of a traded instrument. After all, it is available for all to see, and nobody doubts its legitimacy. The price set by the market reflects the sum knowledge of all participants. These participants have considered almost everything under the sun and settled on a price to buy or sell. These are the forces of supply and demand at work. By examining price action to determine which force is prevailing, technical analysis focuses directly on the bottom line: (1) What is the price? (2) Where has it been? (3) Where is it going?
Super Mario 63/Introduction. Super Mario 63 is a notable unofficial 2D "Mario" fan game created by Runouw that gained popularity on "Newgrounds.com". It is a single-player platforming title that in part uses music and 2D sprites from Nintendo games to mimic the franchise. "Super Mario 63" is a play on "Super Mario 64", but it features enemies, characters, and abilities from several other games of the "Mario" franchise, as well as a level designer that allows creating custom levels. Players can create and upload additional courses. Note that many web sites host earlier versions of the Flash game that lack the extended animated intro, as well as some cutscenes, stars, and star coins. History. In 2005, Runouw released a game called Super Mario Sunshine 64 as their first official Flash game. Runouw then decided to remake it and titled it Super Mario Sunshine 128. It was originally planned to be playable on the Wii on Wiicade, but was denied (by Nintendo) for IP-related reasons. Later, for aesthetic purposes, it was renamed Super Mario 63 and released on Newgrounds and SheezyArt in mid-2009. Story. Mario decides to go to Princess Peach's Castle. After he gets there, Bowser springs a surprise attack from his airship and kidnaps the princess. Bowser and Kamek then break the Power Orb and leave. After Mario wakes up, he meets Toad Eddie and finds a Shine Sprite on the ground. Mario follows Eddie on an adventure to get to the Power Orb and place the Shine Sprite there to restore order to the Mushroom Kingdom.
How to Program a TI-83 Plus/Foreword. So, you want to program your calculator. If your calculator is not on the list, use another book. If you found your calculator on the list, you've come to the right place. Welcome to the official unofficial TI-BASIC programming tutorial! This wikibook will include each and every feature that you'll need to make a TI-BASIC program. But first, we need to know what TI-BASIC is.
Basic Physics of Digital Radiography/Introduction. Digital Radiography heralds a new era for medical X-ray imaging. Radiographs can be recorded using digital image receptors and enhanced using computer processing. They can also be transferred to databases for archival and transmission throughout hospitals and clinics. The change from traditional film-based image receptors is similar in many respects to those occurring in digital photography and digital television. Greater precision can now be applied to each stage of the imaging process so that the balance between image quality and radiation dose can, at last, be controlled accurately. Knowledge and understanding of the basic physics which underlies practice is critical for its successful application in the clinical environment. This wikibook addresses this requirement. The intent is a text which succinctly explains the physical basis of X-Rays and their modern application in Diagnostic Radiography. The wikibook is primarily for students with foundations in anatomy and physiology and could also be of interest to physics and engineering students requiring a topic overview. Math is kept to a minimum and only a basic knowledge of algebra is required of students. The use of calculus and other math techniques is largely avoided by using conceptual descriptions of the more complex processes. Keeping radiation dose as low as reasonably achievable while understanding its effects on image quality is the primary emphasis. The focus extends from general radiography through fluoroscopy to 3D angiography as applied in general hospitals and clinics. The subject treatment is different from that used in conventional textbooks in that radiation biology is not treated as a separate topic but is embedded within the consideration of image formation. Likewise, the topic of image quality is integrated into the treatment of image display. 
The hope is that a more integrated treatment of the subject is thereby provided for those studying the field for the first time. Note that the particular requirements of mammography and of paediatric, dental, chiropractic and veterinary radiography are not covered at this time. A companion WikiBook on the Basic Physics of Nuclear Medicine is also available.
The Future of Leadership/Foreword. Leadership is a dynamic topic, always adapting to the changing nature of the world of work. However, in the aftermath of the COVID-19 pandemic and in the midst of trends such as "the great resignation," "quiet quitting," increasing reliance on remote and hybrid work, and the changing nature of the psychological contract between organizations and their workers, the future of work demands more from leaders than ever before. This book, written by the members of an elective course, The Future of Leadership, in the College of Business at Idaho State University, is a compendium of resources and advice from some of the latest thinking about the future of work. Particularly appropriate to the spirit of Wikibooks, the format of this book is as dynamic as the topic it covers. We hope that you enjoy reading this work - and please recommend it to friends and colleagues who are seeking to make sense of the future of leadership.
Write Yourself a Scheme in 48 Hours/Overview. Most Haskell tutorials on the web use a style of teaching akin to language reference manuals. They show you the syntax of the language, a few language constructs, then tell you to create a few simple functions at the interactive prompt. The "hard stuff" of how to write a functioning, useful program is left to the end, or omitted entirely. This tutorial takes a different approach. You'll start off using and parsing the command line, then progress to writing a fully functional Scheme interpreter that implements a decent subset of R5RS Scheme. Along the way, you'll learn Haskell's I/O, mutable state, dynamic typing, error handling, and parsing features. By the time you finish, you should be fairly fluent in both Haskell and Scheme. This tutorial targets two main audiences: The second group will likely find this challenging, as this tutorial glosses over several concepts in Scheme and general programming in order to stay focused on Haskell. A good textbook such as Structure and Interpretation of Computer Programs or The Little Schemer should be very helpful. Users of procedural or object-oriented languages like , , or should beware: you'll have to forget most of what you already know about programming. Haskell is "completely" different from those languages and requires a different way of thinking about programming. It's best to go into this tutorial with a blank slate; try not to compare Haskell to imperative languages, because many concepts (including classes, functions, and codice_1) have significantly different meanings in Haskell. Since each lesson builds upon code written in previous ones, it's generally best to go through them in order. This tutorial assumes that you'll use GHC as your Haskell compiler. The code may work with Hugs or other compilers, but hasn't been officially tested with them, and may require downloading additional libraries. 
A prior, non-wiki-edited version of the source code files used in this tutorial is available on the original site.
Transportation Deployment Casebook/2024. = Communications Technologies =
Transportation Deployment Casebook/2024/Sri Lanka Railways. = Sri Lanka Railways = Overview of Sri Lanka Railways. Introduction. Railways, in all their different forms and states, have long been an integral mode of transportation for people across the world. In the two centuries since their inception in early-1800s England, railways have been one of the most conveniently accessible modes of public transportation, offering people constant mobility whether for commuting or holidays. As a result, railways have often been a popular mode of public transportation around the world. Given that railways are among the most affordable and fastest modes of public transportation, Sri Lanka has been no exception to this norm. In the years since the introduction of railways by the British in 1864, the railway system in Sri Lanka has evolved at a slow pace. At its inception, the railway network was built primarily to carry freight during the British colonization, especially to transport commodities from the hill country. However, it has now evolved into a passenger-oriented transportation system, running around 396 trains daily, including 16 intercity and 67 long-distance trains, and carrying close to 3.7 million passengers. At present, Sri Lanka Railways (SLR) is the sole rail transport organization in the country and is a key government transportation provider under the Ministry of Transport. Technology and Infrastructure. Compared to the global context, the technology and infrastructure of the railway system in Sri Lanka are somewhat primitive. The railway tracks currently in use are aged and poorly maintained. Steam locomotives are no longer in use, and the majority of the operational locomotives and train sets are diesel-powered, except for historical trains such as the “Viceroy Special,” which is still powered by a steam engine. 
It is also stated that with the conversion of the “Kelani Valley Line” to broad gauge in the 1990s, all running locomotives are currently broad gauge. Apart from the few trains targeted at tourists, the passenger coaches are also mostly unsafe and in poor condition compared to the global context. Furthermore, the current signaling system makes extensive use of the lock-and-block signaling protocol. In the middle of the 20th century, the busiest areas of Colombo were reportedly converted to electronic signaling, all connected to a CTC control panel at Maradana, one of the main stations. At present, Sri Lanka Railways owns and operates all 1,561 km of 1,676 mm broad-gauge rail track, 72 locomotives, 78,565 carriages, and the associated signaling network. Even though commercial electric locomotives and train sets are not yet available in Sri Lanka, a proposal to electrify the network with the aim of increasing sustainability and energy efficiency is currently underway, with the Institution of Engineers, Sri Lanka (IESL) at the forefront. Railway Routes and Operating Markets. Since the beginning, the government has provided the only means of rail transportation, creating a monopoly in the industry. At present, the operating markets are divided into three functioning regions, with bases in Colombo, Anuradhapura, and Nawalapitiya. Among them, the main operating railway lines are the Main Line, Coastal Line, Matale Line, Northern Line, Mannar Line, Batticaloa Line, Trincomalee Line, Puttalam Line, and the Kelani Valley Line, which together include both the intercity and long-distance train services. The Main Line, which is considered the backbone of Sri Lanka Railways, runs from Colombo to Badulla, offering one of the most breathtaking train rides in Asia and drawing the attention of tourists from all over the world. It is one of the railway lines that is in high demand and tends to be congested during the holiday season. 
The Coastal Line, on the other hand, is another congested railway line, running from Colombo to Matara through Galle along the coast of the country and serving a scenic view of the ocean along the way. The other lines are also congested, especially during peak hours, carrying the majority of commuting passenger traffic toward the city centers. Benefits and Advantages. The railway system in Sri Lanka is beneficial not only to the public but also to the government. Rail transport is considered one of the most affordable modes of public transportation, particularly for people from middle- and low-income families. It is also the fastest mode of transportation, operating on fixed-time schedules, and is considered the most convenient means of travelling long distances in Sri Lanka. Along with the scenic views it offers to passengers, the railway system is one of the most sought-after modes of transportation in the country. On the other hand, the railway system also benefits the economy of Sri Lanka as a whole. The key reason is the tourists it attracts, especially on the Main Line. Among the different passenger coaches, the observation deck is particularly catered towards tourists, giving them a breathtaking view back along the track. As tourism is one of the essential industries in Sri Lanka, the scenic rail network has greatly benefited and supported growth in the economy. At present, the government is making efforts to further improve the railway network, working hand in hand with the tourism industry to offer better services and attract an increasing number of tourists in the future. Historical Background of Sri Lanka Railways. Pre-History – Sri Lanka prior to Railways (Before 1845). Sri Lanka, also known as “Ceylon” back in the day, had been under British colonization since 1815, with the colonizers looking for ways to trade valuable commodities with ease. 
Due to its rich soil and a climate that made it much more favorable for cultivation by planters, the bulk of European investment was drawn to the hill country. Starting with coffee plantations, which were later replaced by tea, the European planters made rapid strides in producing commercially viable products for their trading purposes. Following this development, the British soon realized they needed a more efficient way to move the majority of the produce to Colombo to be shipped by sea. Due in large part to the distance and varying topography between Colombo and Kandy, the road transport infrastructure was inefficient. Besides the associated costs, the lengthy cart ride from Kandy to Colombo was time-consuming and ultimately unfeasible in the long run. With the increasing demand from the planters for a railway, and owing to the support of the Colonial Governor, Sir Henry Ward, who approved of the idea, the Ceylon Railway Company (CRC), led by Philip Anstruther, was founded in England in 1845 to construct a railway in Sri Lanka. Beginnings – Pre-Independence (1845 to 1948). In 1846, Thomas Drane, the engineer for the Ceylon Railway Company (CRC), conducted an initial survey and suggested three different routes to Kandy: the Galagedera, Hingula Valley, and Alagalla traces. The Government originally consented to construct a portion of the route up to Ambepussa, and in 1856, the CRC and the Sri Lankan Government signed a tentative agreement to extend the line up to Kandy. The planters, however, were concerned about the associated high costs and pushed for a new survey at a reduced cost. Captain William Scarth Moorsom, the Chief Engineer of the Corps of Royal Engineers, was dispatched from England in December 1856 to evaluate the project on behalf of Henry Labouchere, the Secretary of State for the Colonies. He examined six different routes to Kandy in his report of May 1857. He suggested that Route No. 
3, which goes over the Parnepettia Pass with a total length of 127 km, a prevailing gradient of one in sixty, and a short tunnel, at an estimated cost of £856,557, be adopted. The first sod was turned by Governor Sir Henry Ward on August 3, 1858. William Thomas Doyne, the contractor for the Ceylon Railway Company, soon realized he could not finish the work at the stated cost, and as a result, the contract was terminated in 1861. After the capital subscription was paid off, the government assumed control of the building project and renamed it the Ceylon Government Railway (now Sri Lanka Railways) in 1864, following the Railway Ordinance policy of offering transport services to both passengers and freight. Since the railway was seen as an urgent necessity, new bids were requested to move forward with the project. On behalf of the Ceylonese government, the Crown Agents for the Colonies approved William Frederick Faviel's lowest bid at the end of 1862 to begin constructing a 117 km railway between Colombo and Kandy. Guilford Lindsey Molesworth was appointed chief engineer during the process and later rose to the position of director general of the government railway. Remarkably, Robert Stephenson (1803–1859), the son of engineer George Stephenson (1781–1848), who was involved with the Stockton and Darlington railway, arrived in Sri Lanka as a civil engineer to oversee the building of railroad bridges such as the Kelani bridge. A 54-kilometre main line linking Ambepussa and Colombo served as the starting point for the railways in Sri Lanka. The first locomotive, a 4-4-0 type engine with two coupled axles and a tender, was imported in 1864. Before dieselization, every locomotive was a steam engine, including several Main Line super-heater boilers. Later, the main line was extended to Kandy, Nawalapitiya, Nanu Oya, Bandarawela, and Badulla from the 1860s to the 1920s. 
Over its first century, further lines were added to the rail network, including the Matale Line, the Coast Railway Line, the Northern route, the Mannar Line, the Kelani Valley Line, the Puttalam Line, and the route to Batticaloa and Trincomalee. Since then, no major extensions have been added to the rail network. Growth and Expansion – Post-Independence (1948 to 2010). Under the direction of B. D. Rampala, the general manager of the Ceylon Government Railway and its principal mechanical engineer, the railway network was enhanced between 1955 and 1970. Rampala oversaw the renovation of important stations outside of Colombo and the reconstruction of tracks in the Eastern Province to enable heavier, faster trains, with a strong emphasis on punctuality and comfort. He made sure the rail system was modern and provided comfortable travel for its passengers, and he created express trains, several of which bore memorable names. Sri Lanka's railways operated using steam locomotives until 1953. Under the direction of Rampala, however, they switched to diesel locomotives through the 1960s and 1970s. The first diesel locomotives, imported in 1953, were made by the British manufacturer Brush Bagnall. Sri Lanka Railways has since bought locomotives from China, Japan, India, West Germany, Canada, and France. In 1983, an amendment to the Railways Ordinance Act was passed which formalized the name change from “Ceylon Government Railway (CGR)” to “Sri Lanka Railways (SLR)”. Over the years following independence, the railways shifted from being primarily focused on freight, as was the British approach at its inception, to becoming a more passenger-oriented mode of transportation, particularly through the efforts undertaken by Rampala mentioned above. As a result, demand for Sri Lanka Railways grew rapidly, particularly in terms of operating frequencies. 
Even though the 2004 tsunami severely damaged one of the busiest railway lines—the Coastal Line—the government launched a 10-year plan in the early 2010s to rebuild the railway system in an attempt to recover from the damage, and the line was upgraded by 2012. Maturity – Aging and Competition (2010 to Present Day). In the face of ever-growing competition from road transport, with its superior directness, frequency, and speed of travel, railway transport is now becoming a minor mode of transportation in Sri Lanka despite having served as the main transportation network until the 1940s. At present, the technology and infrastructure utilized by Sri Lanka Railways are considered outdated and are struggling to adapt to trends around the world. Most of the railway lines are still single-tracked, signaling systems are outdated, and the stations are in rather poor condition, particularly limiting line capacities and the ability to operate higher frequencies. Since its inception, the government of Sri Lanka has been in charge of railway operations. Private sector involvement has been minimal and confined to non-core tasks. Although growth has been somewhat unpredictable, both passenger use and freight movement have slightly increased, though neither proved substantial enough to cause any meaningful shift in the modal share of the railways. The monopolistic nature of the industry and the lack of administrative flexibility given to a government department have both been pointed out as contributing factors to the failure of Sri Lanka Railways to make the reforms necessary to become a financially sustainable organization. Despite operating continuously for years, the railway needs to win back the passenger and freight markets that it has lost over the previous few decades to road transportation. 
Rising motorization and the ensuing congestion are placing constraints on the continued growth of road-based transportation, so the profitability of certain markets will once again favor the railway. To develop these markets and improve speed and reliability, the railway needs to adopt more market-oriented approaches. Industry professionals argue that Sri Lanka Railways also needs a strategy for integrating with ports, airports, multimodal logistics centers, and multimodal passenger terminals, including park-and-ride facilities. Under its 2009 Development Plan, Sri Lanka Railways aimed to raise the national modal share of passenger transportation from 6% to 10% and of freight from 2% to 5% by 2016. To do this, it planned to implement various strategies, including streamlining passenger tariffs, deregulating freight tariffs, increasing non-fare earnings, building feasible new extensions and connections, and replacing, modernizing, and upgrading railway assets. Quantitative Analysis of the Life Cycle of Sri Lanka Railways. Data Collection and Methodology. The life cycle can be modelled by an S-curve, which plots a status measure against time and makes the periods of birthing, growth, and maturity visible. Here it is assumed that the data follow a logistic shape, and a curve is fitted to the data. To analyze the life cycle of Sri Lanka Railways, the kilometres operated by Sri Lanka Railways are taken as the status and the corresponding years as time to plot the S-curve. The required data were collected from the annual reports published by the Central Bank of Sri Lanka for the years 1989 to 2019.
The life-cycle model can be represented by the following equation: S(t) = Smax / [1 + exp(-b(t - ti))], where S(t) is the system size (operated kilometres) in year t, Smax is the saturation (final) size, b is a growth coefficient, and ti is the inflection year. The model can be linearized as ln[S(t) / (Smax - S(t))] = a + b·t, and the aim is to find the values of a and b that best describe the relationship. However, the final market size (Smax) is a concern when using the model for forecasting. As a result, it is useful to estimate the midpoint, or inflection year (ti), to apply the model. From the regression coefficients, ti = -a / b, which is then used to predict the system size S(t) in any given year t using the logistic equation above. Results and Interpretations. The data over the years were used to plot the S-curve, and the forecasted model was obtained as follows: The parameters and the model derived from the regression analysis on the above data are as follows:
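The fitting procedure described above — transforming the logistic model into a straight line and regressing it against the year — can be sketched briefly in Python. This is a minimal illustration assuming NumPy and an externally chosen Smax; the function and variable names are my own, not from the casebook.

```python
import numpy as np

def fit_logistic(years, status, s_max):
    """Fit S(t) = Smax / (1 + exp(-b*(t - ti))) by ordinary least
    squares on the linearized form ln(S / (Smax - S)) = a + b*t."""
    t = np.asarray(years, dtype=float)
    s = np.asarray(status, dtype=float)
    y = np.log(s / (s_max - s))      # log-odds transform
    b, a = np.polyfit(t, y, 1)       # slope b, intercept a
    t_i = -a / b                     # inflection (midpoint) year
    return a, b, t_i

def predict(t, s_max, b, t_i):
    """Predicted system size in year t from the fitted parameters."""
    return s_max / (1.0 + np.exp(-b * (t - t_i)))
```

In practice Smax is not known in advance, so analysts typically try several candidate values and keep the one giving the best regression fit.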
Transportation Deployment Casebook/2024/Hangzhou Metro. = Hangzhou Metro = Overview of Hangzhou Metro. Introduction. The metro, also known as the underground or subway, is a type of urban rail transport that runs in underground tunnels and is usually used for rapid public transport within cities. A metro system consists of underground railways, station facilities, rolling stock, power supply systems, signalling systems, and so on. As a modern urban transport system, the metro has become an indispensable part of many metropolises, providing residents with a convenient, fast, safe, and comfortable way to travel while also playing an important role in promoting urban development. Hangzhou is no exception to this trend. Hangzhou Metro is the urban rail transit system serving Hangzhou City and the surrounding Hangzhou metropolitan area in Zhejiang Province, China. Its first line, Hangzhou Metro Line 1, officially opened on 24 November 2012, making Hangzhou the fourth city in East China and the first in Zhejiang Province to open a metro. As of November 2023, Hangzhou Metro operates 12 lines with a total of 260 stations (interchange stations counted once), of which 46 are interchange stations. The total operating mileage is about 516 km (excluding 2 intercity lines), and the Hangzhou metropolitan-area rail transit network totals about 610 km. Ahead of the 19th Asian Games, Hangzhou had formed an urban rail transit backbone network of 12 underground lines totalling 516 kilometres, achieving full coverage of its ten urban districts. Technology and facilities. Technical characteristics. The metro is the forerunner of urban rapid transit.
The metro is a rapid rail transit system with electric traction and wheel-rail guidance, relatively heavy axle loads, and substantial capacity, running to a fixed operating plan with multi-car trains through underground tunnels or, depending on local conditions, on surface or elevated alignments. One-way capacity is typically 30,000 passengers per hour and can reach 60,000 to 80,000 passengers per hour. Maximum speed is up to 90 km/h, with commercial speeds of 40 km/h or more; trains run in formations of 4 to 10 cars, and the minimum headway can be under 1.5 minutes. Drive systems include DC motors, AC motors, and linear motors. Metros are expensive, costing roughly 300 to 600 million yuan per kilometre. They have the disadvantages of high construction cost and long construction periods, but offer large capacity, safety, punctuality, energy efficiency, freedom from local pollution, and savings in urban land. Metros suit urban centres with long travel distances and high passenger demand. Because most lines run underground or on elevated structures, the metro demands a high level of technology, reliability, and safety. Like national trunk railways, a metro system is composed mainly of the line network, tracks, stations, vehicles, and communication and signalling equipment, and each department must work in concert to maximise delivery of the transport task. The following are the main technical parameters. Vehicle facilities. Train system. Hangzhou Metro Lines 1, 2, 4, 9, and 16 use B-type cars with aluminium-alloy bodies in 6-car formations (4-car on Line 16). The cars are 2.8 metres wide and 3.8 metres high; a 120-metre train carries 348 passengers per motor car and 321 per trailer, for a total capacity of 2,036, and the maximum speed is 80 kilometres per hour.
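The capacity figures quoted above follow from simple arithmetic: one-way line capacity is trains per hour multiplied by passengers per train. A short sketch, using the 2,036-passenger 6-car B-type train and the 1.5-minute minimum headway mentioned in the text (the function name is illustrative):

```python
def line_capacity(train_capacity, headway_min):
    """One-way line capacity in passengers per hour:
    trains per hour (60 / headway) times passengers per train."""
    trains_per_hour = 60.0 / headway_min
    return trains_per_hour * train_capacity

# 2,036 passengers per train at a 1.5-minute headway gives
# 40 trains/h, i.e. roughly the 80,000 pax/h upper bound cited.
peak = line_capacity(2036, 1.5)
```

This shows why headway, not top speed, is the binding constraint on metro line capacity.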
Hangzhou Metro Lines 3, 5, and 6 use AH-type cars, whose single-car length matches that of B-type cars; a 6-car formation is about 120 metres long and 3 metres wide, making the cabin relatively more comfortable and able to accommodate more passengers. Lines 7, 8, 10, and 19 use A-type cars with a maximum operating speed of 80 to 120 km/h. A-type metro trains generally use a high-strength, lightweight, large-cross-section drum-shaped aluminium-alloy body in a 4-motor, 2-trailer 6-car formation, 3 metres wide and about 140 metres long overall, with a maximum passenger capacity of 2,520. Early market niche and the need to build. Developing urban rapid transit was a strategic decision made by Hangzhou in accordance with the laws of urban development and the needs of its productive forces, balancing immediate and long-term considerations. It is a major step in the leapfrog development of Hangzhou's public transport, and it bears on the city's strategic future: creating new advantages for the city's functions, optimising the development environment, and improving the city's character and the quality of life of its people. Urban rapid transit is a "people's project" to alleviate the difficulty of travelling and parking and to improve quality of life, and a "pilot project" to strengthen the connection between the main city and the sub-cities and to build a networked metropolis across the urban area.
"Difficulty travelling" has become an urgent problem for most people in Hangzhou, an obstacle to economic and social development and a prominent contradiction affecting daily life. Domestic and foreign experience, and Hangzhou's own situation, suggest that building rapid transit and constructing a modern three-dimensional transport system — with rail transit as the skeleton, regular buses as the main body, and other modes as supplements — is a good prescription for the problems of travelling and parking, and an urgent, fundamental step in implementing Hangzhou's public-transport-priority strategy. Moreover, rapid rail corridors connecting the main city with the sub-cities, the main city with its clusters, and the districts of the main city with one another support the "one main city, three sub-cities, six clusters" skeleton of the city and further optimise Hangzhou's spatial layout and form. The first phase of the Hangzhou rail transit project connects the three sub-cities with the main city in one stroke, shortening the space-time distance between them, reducing commuting costs, and creating a half-hour working and living circle between the sub-cities and the main city. This makes it easier for people to accept the sub-cities as places to work, live, shop, and relax, in turn helping to solve the problem of "housing difficulty". Before the metro was built, the only options for travelling were bicycles, electric bikes, buses, or taxis. Bicycles and electric bikes have short range; they cannot cover long distances and cost a great deal of time.
Buses are cheap but slow and inefficient, and the many private cars on the road easily get stuck in traffic, which is also a drawback of taking a taxi or driving oneself. Taxis and private cars are expensive as well, and at the time the city's infrastructure was incomplete and parking spaces were scarce, so finding a place to park was very difficult. The advantages of the metro are therefore obvious: its market is precisely the people who need to travel, and it can greatly improve travelling efficiency while reducing time and travel costs. Historical background of policies and construction of specific routes. Policy orientation of construction. In 1984, Hangzhou commissioned the Second Design Institute of the Ministry of Railways to prepare the "Feasibility Study of the Hangzhou Light Rail Transit System", with two surface light rail lines crossing the old city of Hangzhou in the shape of a cross. In 1986, the Hangzhou Municipal Planning Bureau commissioned the Survey and Design Department of the Tunnel Engineering Bureau of the Ministry of Railways to complete the "Feasibility Study Report on the Light Rail Transit around the Lake". In 1992, the Beijing Municipal Urban Construction Design and Research Institute completed the first "Pre-feasibility Study Report on Hangzhou Rail Transportation". In February 1993, the Hangzhou Municipal Planning and Design Institute completed the "Planning Programme for the Rapid Transit Line Network". In November 2001, the Hangzhou Rail Transit Phase I Feasibility Study was completed. On 6 June 2002, Hangzhou Metro Group Co., Ltd. was established. In October 2003, the General Office of the State Council issued the "Notice on Strengthening the Construction Management of Urban Rapid Transit".
In December of the same year, Hangzhou once again submitted its metro construction plan to the National Development and Reform Commission (NDRC). On 6 June 2005, the NDRC approved the "Hangzhou Rail Transit Phase I Construction Plan (2005-2010)". On 20 June 2013, the NDRC approved the "Hangzhou Urban Rail Transit Near-Term Construction Plan (2013-2019)". On 16 September 2015, Hangzhou successfully bid to host the 2022 Asian Games. On 12 December 2016, the NDRC approved the "Hangzhou Urban Rapid Transit Phase III Construction Plan (2017-2022)". Specific lines open to traffic. At 14:30 on 24 November 2012, Metro Line 1 (Xianghu Station - Wenze Road Station; Passenger Transportation Centre Station - Linping Station) opened for trial operation. At 9:15 on 24 November 2014, the southeast section of Metro Line 2 (Chaoyang Station - Qianjiang Road Station) opened for trial operation. At 9:30 on 2 February 2015, the first section of Metro Line 4 (Pangbu Station - Jinjiang Station) opened for trial operation. At 15:30 on 3 July 2017, the northwest section of Metro Line 2 Phase 1 (Gucui Road Station - Qianjiang Road Station) opened for operation. At 17:00 on 27 December 2017, the second and third phases of Metro Line 2 (Liangzhu Station - Gucui Road Station) opened for operation, making Line 2 the first fully opened metro line in Hangzhou. At 15:00 on 24 June 2019, the first through section of Metro Line 5 (Liangmu Road Station - Shanxian Station) opened for operation. At 11:00 on 30 December 2020, the first phase of Line 6 (Shuangpu Station - Qianjiang Century City Station), the Hangfu section of Line 6 (Guihua West Road Station - Academy of Fine Arts Xiangshan Station), and the first through section of Line 7 (Olympic Sports Center Station - Jiangdong Second Road Station) opened for operation.
At 10:30 on 28 June 2021, Metro Line 8 Phase I (Wenhai South Road Station - Xinwan Road Station) opened for operation. At 14:30 on the same day, the Hanghai Intercity Railway (Yuhang High-speed Railway Station - ZJU International Campus Station) opened for operation. At 15:00 on 17 September 2021, the northern section of Metro Line 9 Phase I (Linping Station - Longan Station) and the Line 7 section from Olympic Sports Centre Station (not included) to Civic Centre Station opened for operation. At 11:00 on 21 February 2022, the first section of Line 10 Phase I (Cuibai Road Station - Yisheng Road Station) opened for operation. At 14:00 on 22 September 2022, Line 19 opened for operation. Policy Drivers and Markets. Policy has driven the construction of metro lines: with Hangzhou's rapid development and the continuing influx of people from elsewhere, traffic pressure will only grow, and, following the example of the Hong Kong and Shanghai metros in scale and continuous construction, Hangzhou Metro will keep expanding. Given the metro's nature, no other mode of the same type competes for its market; the larger the population base, the greater the metro ridership, so the Hangzhou Metro market is full of potential. Quantitative analysis of Hangzhou Metro. Data Collection and Methodology. The life cycle can be modelled by an S-curve, which plots a status measure against time and makes the periods of birthing, growth, and maturity visible. Here it is assumed that the data follow a logistic shape, and a curve is fitted to the data. To analyze the life cycle of Hangzhou Metro, the kilometres operated by Hangzhou Metro are taken as the status and the corresponding years as time to plot the S-curve.
The required data were collected from the annual reports published by Hangzhou Rail Transit for the years 2013 to 2022. The life-cycle model can be represented by the following equation: S(t) = Smax / [1 + exp(-b(t - ti))], where S(t) is the system size (operated kilometres) in year t, Smax is the saturation size, b is a growth coefficient, and ti is the inflection year. Results and Interpretations. The data over the years were used to plot the S-curve, and the forecasted model was obtained as follows: The parameters and the model derived from the regression analysis on the above data are as follows:
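Once the parameters are in hand, forecasting operated kilometres for any year is a one-line evaluation of the logistic equation. A brief sketch — the parameter values below are illustrative placeholders, not the values fitted in this analysis:

```python
import math

def forecast_km(t, s_max, b, t_i):
    """Operated kilometres predicted by the logistic life-cycle model
    S(t) = Smax / (1 + exp(-b*(t - ti)))."""
    return s_max / (1.0 + math.exp(-b * (t - t_i)))

# Illustrative (not fitted) parameters: 600 km saturation,
# growth coefficient 0.5/yr, inflection year 2019.
# At the inflection year the network is at half its final size.
halfway = forecast_km(2019, 600, 0.5, 2019)
```

The inflection year is where growth is fastest; before it the curve accelerates, after it the curve flattens toward Smax.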
Transportation Deployment Casebook/2024/Jakarta BRT system. Technology. Bus Rapid Transit (BRT) is a bus system with special lanes, stations, and vehicles that operates with higher speed, reliability, and safety than a regular bus, in its own right-of-way separated from other traffic. It has been a primary mode of transportation in Jakarta, the capital city of Indonesia, for the past two decades. BRT has several defining characteristics. It can provide high-quality mass transportation at a much lower cost than an LRT system, and it offers several advantages. First, the dedicated right-of-way keeps buses out of traffic jams, makes timetables highly reliable, and improves safety. Second, the wide doors and level boarding between platform and bus make boarding and alighting smooth and efficient, reducing dwell time at stations and improving accessibility for wheelchair users. Third, BRT has the flexibility to operate on all kinds of roadways, can use higher- or lower-capacity vehicles depending on the needs of the service, and can adopt alternative routes in case of roadworks or accidents. Due to its lower cost and flexibility compared to rail-based transportation, BRT can be developed in markets ranging from small cities to megacities with populations of 200,000 to 10 million. However, BRT has chiefly been used in large cities as an alternative or complementary public transportation solution where rail is not feasible or affordable. Context. During the Dutch colonial era, in 1869, Batavia (Jakarta) got its first mass public transport system: horse trams running along the main streets. It quickly became a favorite among the people until problems emerged due to horse feces and the exhaustion of the horses. To solve this, the trams switched to steam power in 1883 and then to electric power in 1886.
The electric trams were quieter and did not emit thick black smoke like the steam trams, making them the primary mode of public transport, while the steam trams were relegated to carrying freight from the outskirts of Batavia. Unfortunately, the tram system was discontinued in 1962 due to its outdated nature, high maintenance costs, and the availability of city buses as a more efficient means of transportation. In 1963, 100 buses from Australia began operating as Jakarta's new public transportation. Besides the city buses, small passenger cars known as Oplet also emerged as alternative public transportation. In 1979, the Oplet gave way to the Mikrolet, which served the same function with newer vehicles and better machinery, as the older Oplets no longer functioned properly. Jakarta then became a metropolis that continued to grow rapidly. However, its public transportation system relied heavily on road-based buses and paratransit. Private companies or groups of individual owners rented out buses on a daily cash basis to operators who provided services that were not standardized. Additionally, the poor quality of public transportation prompted people to switch to private vehicles, contributing significantly to the city's congestion. Despite past efforts to enhance mass transit, such as a curbside bus-only lane in the primary business center during the 1980s, enforcement was challenging and inadequate. These issues, along with unregulated motorization leading to a decline in the quality of life and the lack of proper public transportation, prompted the implementation of the first BRT program in Jakarta, Indonesia. Invention. Bus Rapid Transit (BRT) was first proposed in 1937 in Chicago, with the aim of transforming rail lines on highways and local streets into express bus routes featuring dedicated lanes. In the late 1950s, Washington DC included grade-separated busways in its transit plan to alleviate traffic congestion.
In 1963, Robert Crain introduced key BRT concepts such as exclusive lanes, pre-boarding fare collection, and traffic priority, now standard features in BRT systems worldwide. Crain's analysis suggested that BRT could provide high-capacity transit at a fraction of the cost of rail, while highlighting the importance of passenger information, marketing the service, and overcoming negative perceptions of public transit. One of the first busways was introduced in 1969 on the Henry G. Shirley Memorial Highway in Northern Virginia, USA. Runcorn, a town in the UK, opened its inaugural busway corridor in 1971, an elevated section connecting to a retail center; the 22-kilometer Runcorn busway played a crucial role in transforming the urban landscape. The idea of modern BRT was first realized with the introduction of the "land tube" in Curitiba, Brazil, in 1973. The city had initially planned a rail-based system, but with limited investment it created exclusive bus lanes departing from the city center instead. This innovative transportation system, featuring tube-shaped stations and articulated buses, quickly became a global example of excellence, prompting further development in the city. Although mechanically guided busways emerged in Essen, Germany, and other parts of Europe during the oil-crisis era of the 1980s, BRT expansion then stalled. In the 1990s, BRT was often regarded as a less desirable alternative to rail, particularly in smaller cities or those with limited budgets. However, the inauguration of the TransMilenio system in Bogota in 2000, serving over 7 million people, demonstrated BRT's ability to serve large cities efficiently. TransMilenio includes a dedicated busway, articulated buses, upgraded stations, an innovative card-based fare collection system, an updated control system, and a low-cost option for low-income customers. TransMilenio trunk routes run on a dedicated busway in the city center. Early Market Development.
In its initial development, Transjakarta (the BRT system in Jakarta) aimed to serve the Sudirman – Thamrin area, the Central Business District (CBD) of Jakarta. Transjakarta's first corridor connected this area from south to north — the Blok M to Kota corridor, 12.9 km long. While existing commuters got good-quality public transport, Transjakarta also tried to capture a new market in its first corridor by integrating Blok M station with surrounding malls via easy, comfortable tunnel access. Meanwhile, Kota station provides a wide pedestrian pathway to Kota Tua, the famous tourist area of old Batavia, Jakarta's old town with its Dutch-era buildings. The Role of Policy. Transjakarta was inspired by the BRT concept of TransMilenio in Bogota. From 2001 to 2003, the government of Jakarta held discussions with Bogota's mayor and visited Bogota to learn more about the BRT system. This process was supported by a non-governmental organization, the Institute for Transportation and Development Policy (ITDP), which conducted a worldwide review of BRT systems and then provided technical assistance to the city of Jakarta during its initial development. On 15 January 2004, the first corridor began to operate on the same concept as TransMilenio: a dedicated bus lane, fares collected in advance, and an elevated platform for quick boarding and alighting. During the initial phase, Transjakarta innovated by placing an on-board staff member on each bus to assist passengers. These staff members were typically stationed near the door to ensure smooth boarding and to maintain security throughout the journey. Several policies govern Transjakarta's operation: a minimum standard of service regulates bus headways, fares, and facilities.
On the other hand, to secure the right of way for Transjakarta, the city of Jakarta enacted a Local Government Regulation that prohibited private vehicles from passing through the busway. Violators may face imprisonment or fines. The main characteristics of Transmilenio, which are utilized in Transjakarta, have become fundamental principles in planning new developments. Stations are situated in the median of the road with high platforms, and the bus fleet has become an integral component of Transjakarta's future infrastructure. Growth Phase. In line with the growth of ridership and corridor expansion, Transjakarta has upgraded its service and infrastructure to accommodate more passengers and enhance service efficiency. Gradually, they increased their rolling stock and replaced buses with articulated buses. Additionally, starting in 2011, Transjakarta began efforts to serve the last-mile trip by providing feeder buses. They collaborated with existing bus companies serving those areas, such as Metromini and Kopaja, to ensure a minimum standard of service consistent with Transjakarta. Moreover, Transjakarta endeavored to meet passengers' needs more effectively. Since 2014, Transjakarta has operated 24 hours, providing special bus services from midnight until 5 AM. In 2016, they introduced special buses for women to enhance security and safety, addressing concerns about overcrowded buses and sexual harassment. The Kartu Layanan Gratis Transjakarta or Free Service Card (TJ Card) was also introduced in 2016 to encourage more passengers. Additionally, in 2017, Transjakarta partnered with a private company, Trafi, to offer real-time bus position information through a mobile application. Furthermore, in 2018, Transjakarta introduced a premium service called Royaltrans to cater to commuters seeking enhanced comfort. The government has played a pivotal role in shaping Transjakarta's growth. A company was established to better manage Transjakarta and provide fare subsidies. 
In 2016, an odd-even vehicle policy was implemented to encourage public transportation usage and boost Transjakarta's ridership. Private companies have also been involved in manufacturing buses, developing fare collection systems, operating feeder services, and managing station facilities. Since 2006, private companies, supervised by the city government, have operated the second corridor. In 2013, banks were included to improve the ticketing system through smart cards. One notable policy issue during Transjakarta's growth was capacity constraints, particularly during peak hours. To address this, Transjakarta implemented measures such as increasing bus frequency, optimizing scheduling, and expanding the fleet size. Despite criticism from existing bus companies, efforts were made to collaborate in providing feeder services. Additionally, Transjakarta introduced special services for people with disabilities. The government has also adopted bold policies in its National Medium-Term Development Plan (RPJMN) to promote public transportation and restrict private vehicles. This served as the basis for other policies, including increased Transjakarta subsidies. Maturity Phase. Transjakarta has implemented a range of sustainability efforts aimed at reducing emissions and enhancing service quality for passengers. One such initiative involves the introduction of low-emission buses, including e-buses, which began in 2022. This transition is designed to minimize the carbon footprint of the transportation system. Additionally, Transjakarta has integrated with other modes of transportation such as informal micro-buses through the Jak Lingko integration program, expanding passengers' access to various areas more easily. Moreover, the system has modernized its stations to enhance convenience for commuters.
Despite potential competition arising with the operationalization of the Mass Rapid Transit (MRT) system in 2019, Transjakarta has established connections with other lines and even experienced increased ridership following the MRT's commencement of operations. This is attributed to Transjakarta's role in facilitating MRT passengers' journeys to their final destinations, a service not provided by the MRT itself. Furthermore, Transjakarta and the MRT have collaborated to integrate several stations, improving transit efficiency and enhancing the overall customer experience. Transjakarta has not only integrated with the MRT but also extended its integration to include the LRT, which commenced operations in 2022. Transjakarta leverages big data derived from ticketing information to revamp its service and deliver a customer experience that aligns with passengers' expectations. This data enables the transportation system to gain insights into people's travel patterns and make appropriate adjustments to enhance service delivery. Quantitative Analysis of Transjakarta Life-Cycle. Data Collection and Methodology. To identify the periods of birthing, growth, and maturity of Transjakarta, a logistic model is used to predict the mode's annual passengers: S(t) = Smax / [1 + exp(-b(t - ti))], where S(t) is ridership in year t, Smax is the saturation ridership, b is a growth coefficient, and ti is the inflection year. The data used come from the Jakarta Transportation Statistics, an annual publication of the Central Agency of Statistics. Results and Interpretation. Data from the initial year of operation through the most recent published year were used to construct the S-curve figure, with the projected model stated below: The parameters and the model derived from the regression analysis on the above data are as follows: The graph above depicts the annual passenger growth trend, including both actual data and estimates based on the modelling results. Compared to the actual data, the modelling results show that Transjakarta is still in the growth stage and has not yet reached maturity.
This is evident from the slope, which is still increasing rapidly.
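The stage judgment above — growth versus maturity — can be made mechanical by looking at how far the fitted curve has climbed toward its saturation value. A small sketch; the 10% and 90% cutoffs are illustrative conventions of mine, not thresholds from the casebook:

```python
import math

def lifecycle_stage(t, b, t_i):
    """Classify a year by the fraction of final size reached,
    S(t)/Smax = 1 / (1 + exp(-b*(t - ti))): under ~10% is birthing,
    10-90% is growth, above ~90% is maturity."""
    share = 1.0 / (1.0 + math.exp(-b * (t - t_i)))
    if share < 0.1:
        return "birthing"
    elif share <= 0.9:
        return "growth"
    return "maturity"
```

A system still near its inflection year, as the analysis finds for Transjakarta, lands squarely in the growth band.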
Transportation Deployment Casebook/Singapore MRT. = Introduction = The Mass Rapid Transit (MRT) is a mass transit system in the Republic of Singapore. Conceived in the 1960s and built in the 1980s, the MRT system now stretches to all corners of Singapore. It features much innovative technology and is projected to grow further in the coming decades. = Qualitative = Technology. The Singapore MRT is a heavy-rail rapid transit system. It runs autonomous trains across six lines and has 140 stations. There are also 3 LRT systems to supplement the main MRT system. The trains run on standard gauge 1435mm track and use DC third rail or overhead wire electrification (depending on the line). Newer trains are fitted with electronic maps, as well as external information displays (see fig. 2). The main advantage of this system is that it is fully grade separated: there are no level crossings to slow down the trains. This allows high-frequency services of 2-3 minutes at peak and 5-7 minutes off-peak, and gives the MRT a higher capacity than comparable modes such as buses. The MRT features many innovative technologies, which have been borrowed by many transit authorities around the world. All 140 stations are fitted with platform-screen doors, and newer stations also double as bomb shelters. Newer underground stations also have elevated entrances to stop water entering the station in the event of a flash flood (see fig. 1). Additionally, the system features many cross-platform transfers, including a double cross-platform transfer at City Hall and Raffles Place between the East West and North South Lines. This enables passengers to travel in either direction on the line they intend to transfer to simply by crossing to the other side of the platform at one of those stations. All stations have numbered exits for convenience, and many stations have exits directly into adjacent buildings.
Singapore uses a station numbering system consisting of a two-letter line prefix and an individual station number. For example, Toa Payoh on the North South line is numbered NS19. Interchanges have multiple numbers, and there are a number of exceptions to the station numbering rule (e.g. Founders' Memorial TE22A, Tanah Merah EW4/CG, and Punggol NE17/PTC, to name a few). Singapore also numbers the ends of its lines to make it easier for passengers to find the right platform: for example, trains towards Punggol are numbered 7 on the system map and on station signage. The system uses electronic cards for payment. There are multiple types of fare cards in circulation, the most notable being the EZ-Link and NETS FlashPay cards. However, today passengers can tap on and off using credit and debit cards, including digital wallets, thanks to the CEPAS standard. The MRT uses technology from all around the world: trains from manufacturers including Kawasaki Heavy Industries, Siemens, CRRC, and Alstom, and signalling systems from Alstom, Westinghouse, Siemens, and Thales. Westinghouse also produced a portion of the platform screen doors. Birth of the MRT. In the 1960s, Singapore was a small island with a growing population, leading to high traffic congestion and inefficient journeys. Growing congestion on the bus network was also causing concern, and a solution was needed. A joint study by the United Nations Development Programme and the Singapore State and City Planning Department was conducted to forecast Singapore's growth in the coming years. The 1967 study found that Singapore's population was expected to balloon to 3.4 million by 1992, and that to cope with the increase, Singapore would need to restrict cars, improve roads, and build a mass transit system. Support for the MRT was not universal; there was a push for a bus-only system.
Back then, Singapore was not the rich metropolis we know today, and the lower cost of buses was very appealing. In 1972, the Singapore Mass Transit Study was carried out, which recommended the construction of a heavy rail network. In 1980, consultants from Harvard University were brought in for a second opinion. This study found that an all-bus system might work, as long as a system was put in place to restrict car usage. However, a government study was then done in 1981, which found that a bus-only system would not be enough. Following on from this, the Singapore Government decided to begin construction in 1982. Construction began in 1983 on what is now the North South and East West lines, and the first section opened in 1987, with an official opening in 1988 by Prime Minister Lee Kuan Yew. Growth of the MRT. Following the initial success of the MRT system, growth followed over the next few decades. In 1996, the Woodlands extension was completed, which brought the MRT to the north of Singapore. Up to this point, there had been a branch line extending from Jurong East up to Choa Chu Kang. With the Woodlands extension complete, this branch was now linked up to the rest of the North South Line. As a result, trains no longer branched off the East West Line at Jurong East, and instead ran along the North South Line through Woodlands before running along the old branch section and terminating at Jurong East, forming a loop around Singapore's Central Catchment Nature Reserve. In 2003, the North East Line was opened, which connected the north-east of Singapore with the rest of the MRT network. This was Singapore's first entirely underground MRT line. Up to this point, the MRT system was almost entirely radial, with all three lines heading towards the Downtown Core. To address this, the Circle Line was constructed, with its first stage opening in 2009. This allowed passengers to travel between suburbs without having to go through the city, helping to reduce crowding in the busy Downtown Core. 
The construction of the Circle Line saw the MRT’s first serious accident in the Nicoll Highway tunnel collapse. The damage was so severe that the whole collapsed area was filled in, the tunnels realigned, and the station reconstructed on a new site. While the MRT was expanding, three new LRT systems were also built. These systems consist of automated people movers running on rubber tyres that connect to existing MRT stations, with the goal of allowing passengers living in dense residential districts to get closer to their homes by rail. Although they seemed futuristic at the time, these systems are widely seen as failures: they were very expensive, unreliable, uncomfortable to ride, hard for disabled people to access, and (accounting for the time it takes to get from the sidewalk to the platform) slower than a bus. The Bukit Panjang LRT in particular is seven times less reliable than the Downtown Line. This line also used to contain Ten Mile Junction LRT station, which was closed in 2019, making it the first and only rail station in Singapore to close due to low patronage. Government officials later admitted that the Bukit Panjang LRT system was constructed hastily in order to improve their electoral prospects. At the moment there are three new MRT lines under construction. The Thomson-East Coast Line aims to connect the Woodlands area to the Downtown Line at Sungei Bedok via a new corridor along Singapore’s east coast. Similarly, the Cross Island Line will connect the eastern part of Singapore to Ang Mo Kio and Bright Hill MRT stations along a new northern axis. This line will also provide an alternative to the heavily congested Circle Line. Lastly, the Jurong Region Line aims to serve the Jurong region by connecting commuters with the existing Boon Lay, Jurong East and Choa Chu Kang stations. Additional infill stations and extensions to existing lines are also planned. 
The MRT also has a number of unopened stations and shell stations ready to spring to life when future demand emerges. Singapore has had a long-standing policy of promoting public transport over private vehicles. Singapore was the first country to implement road pricing, and the construction of the MRT system has allowed Singapore to reduce its dependence on private vehicles. As a result, the MRT is now the main way to get around Singapore. Despite the overwhelming success of the MRT, there are multiple MRT projects that were abandoned or never built, including the East West Line Tuas South extension and the Punggol North LRT. Maturity. As the MRT is still expanding, the system has not yet reached its maturity. However, the system will reach its peak eventually. Singapore is an island, which means (with limited exceptions) there can be no more urban sprawl. Any future new MRT lines beyond the ones currently planned will likely provide new routes through existing areas, rather than serving new areas. As the population of Singapore increases, so must its population density, which may mean more MRT lines in the future. Government Policies. Initially, the biggest policy of the Singapore Government was the decision to fund the numerous transport studies, as well as fully financing the initial construction of the MRT system. This indicates strong and unwavering support from the government of the day. It is extremely common for large projects such as metro lines and freeways to be completed through Public-Private Partnerships (PPPs). This is a form of collaboration between a government and the private sector in which the private sector fully or partially funds infrastructure construction on behalf of the government, in return for revenue over the course of the contract. In the case of a train line, a private company would pay for the construction of the line, and would then get to charge fares to customers to pay back its investment. 
This arrangement would normally last for many years before the infrastructure is handed back to the government. The Singapore Government chose not to do this, partly because it wanted to spur investor confidence in the country, and partly because it wanted to keep fares cheap. The average MRT fare is only S$0.85, and this is by design. By keeping fares low, the MRT remains a viable option for all Singaporeans and provides a realistic alternative to driving. One of the Singapore Government’s other long-standing policies regarding the MRT is the use of transit-oriented development (TOD). Singapore, being a small island with a big population, has the potential to be one of the most expensive real estate markets on the planet. However, housing remains accessible to most Singaporeans due to government policies designed to get Singaporeans into home ownership. In 2023, 77.8% of Singaporeans lived in public housing, known locally as HDBs after the government department that regulates them, the Housing and Development Board. All HDB units are leased out to citizens for 99 years. The main job of the HDB is to develop vacant plots of land into “new towns”, where residents can live, shop and play. These days, most HDB developments are located around MRT stations. This enables citizens who live in the HDB community to easily commute into the city, and reduces the need for Singaporeans to own cars. Building MRT stations increases the land value around them, which in turn leads to more businesses springing up and more economic activity. The Singapore Government’s current target is for 8 out of 10 households to be within 10 minutes of an MRT station by 2030. Despite designing its public transport to be a viable alternative to driving, a minority of Singaporeans do choose to drive. For this reason, the Singapore Government has developed road pricing systems to persuade people to drive less and use the MRT more. Singapore was the first country in the world to implement road pricing. 
Vehicles were charged a fee to drive into the downtown area of Singapore (known as the restricted zone). In 1998, Singapore switched to Electronic Road Pricing (ERP). This system involves gantries mounted above busy roads which charge drivers who drive through them, via units inside the vehicles similar to toll transponders (see fig. 3). The ERP charges change depending on the time of day and the prevailing congestion levels. Additionally, there is a quota on the number of vehicles allowed on the roads, and citizens have to bid for a permit to own a car. This carrot-and-stick approach of making driving expensive and transit cheap has helped to drive Singapore’s MRT ridership up and its car ownership rate down. These three policies of lowering car use, increasing public transport use, and TOD have contributed to making Singapore a more sustainable city. However, the Government is investing in new roads as well, such as the North South Corridor and Changi North Corridor, which may blunt the effectiveness of this strategy. Currently, all future MRT stations are planned to be underground. There may be a number of reasons for this. Most new stations are likely to be in areas that are already built up, and as such, there would be no room for stations above ground. Additionally, there is a trend around the world for elevated viaducts to be seen as ugly and noisy. The Land Transport Authority (LTA) has been retrofitting existing viaducts with noise walls for this reason, and it is estimated that the social benefit of this program has exceeded S$700 million. Lastly, newer MRT stations also double as bomb shelters, which requires underground stations. The MRT rolling stock and associated infrastructure is owned by the LTA, the transit authority of Singapore. The MRT system itself is operated by two different companies. 
The East West, North South, Circle, and Thomson-East Coast Lines are operated by SMRT Corporation, while the North East and Downtown Lines are operated by SBS Transit. = Quantitative = The MRT is currently in the growth stage of its life cycle. It is a well-established system, but still has a number of extensions planned and under construction. To estimate the future length of the MRT, a three-parameter logistic function can be used. In this case, the function takes the form: S(t) = Smax/[1+exp(-b(t-ti))] where S(t) is the system length in year t, Smax is the saturation (maximum) length, b is a growth-rate coefficient, and ti is the inflexion year. Data. System length data from 2005 to 2021 was sourced from the LTA. Additionally, the system is expected to reach 360km by 2030 and 400km by 2040. The dataset was assembled using these figures. Modelling. To make the model work, values are needed for Smax, b, and ti. To estimate b and ti, a linear regression can be used. A linear regression requires the dependent variable to be linear in nature, so instead of regressing the dependent variable itself, it can be transformed into a linear form and then transformed back to arrive at a prediction. The regression takes the form: Y = bX + c where Y = ln(Length/(Smax - Length)), a linear transformation of the logistic function, and X is the year. The b term from this regression is equal to the b term in the logistic function. The regression also gives a value for ti in the form ti = -c/b, where c is the constant term in the regression. In this case, ti represents the midpoint of the regression line. As Smax has not yet been estimated, a test value of 500km will be used. Regressing Y on the year value returned the following results: From here, the Y values can be transformed into predictions using the formula: Predicted Length = Smax/(1+EXP(-b*(Year-ti))) The last step is to estimate Smax. To do this, Excel Solver can be used. The actual length can be subtracted from the predicted length for each year. These values can then be squared and summed to calculate the sum of squared errors (SSE). 
Excel Solver can then be set to minimise the SSE by changing the Smax value. Changing the Smax value will change the Y values, and therefore the regression results. However, the whole model was linked using formulas such as =SLOPE, =INTERCEPT, and =LINEST, so the regression results refresh automatically and remain accurate. The final model is shown below and in fig. 4: The final coefficients were: Interpretation. The system length column does contain missing values, but the linear regression will still function. It should be noted that this data represents the total length of the Singapore rail network (both the MRT and LRT systems). However, as no new LRT lines have been constructed during this timeframe, and there are no plans to construct any new ones, they will not affect the accuracy of the model. The LRT system accounts for 28.8km of the total system length. The data does not seem to account for the closure of Ten Mile Junction Station, which suggests that the data also includes tracks used to access depots, since Ten Mile Junction Station is also home to the Bukit Panjang LRT Depot, which remains in use. This model predicts that the MRT will top out at 538km, and that the inflexion year was 2022. The R² for this model is over 98%. However, this does not mean that the model is 98% accurate. Rather, it simply means that 98% of the variation in system length is explained by the passage of time in the model. The birth phase of the MRT was in the period 1987-1990, as this is when the first phase of the MRT opened. This birth phase proved the viability of the system and led into the growth phase, which continues to this day. Based on the model, the growth phase seems to end around 2050. If the model is correct, by that point most of the system will be constructed, and there will be very little left to build. The model predicts that by 2100, the system will be fully complete.
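The spreadsheet workflow described above (linearise, regress, back-transform, then search for the Smax that minimises the SSE) can be sketched in Python. The figures below are illustrative placeholders rather than the actual LTA dataset, and a simple grid search stands in for Excel Solver:

```python
import math

# Illustrative system-length observations (km); the real LTA figures differ.
data = {2005: 109, 2010: 130, 2015: 170, 2021: 230, 2030: 360, 2040: 400}

def fit_logistic(smax, points):
    """Linearise S(t) = Smax / (1 + exp(-b (t - ti))) and regress
    Y = ln(L / (Smax - L)) on the year to recover b and ti."""
    xs = list(points.keys())
    ys = [math.log(length / (smax - length)) for length in points.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    c = my - b * mx   # regression constant
    ti = -c / b       # inflexion year
    return b, ti

def sse(smax, points):
    """Sum of squared errors between observed and predicted lengths."""
    b, ti = fit_logistic(smax, points)
    return sum(
        (length - smax / (1 + math.exp(-b * (year - ti)))) ** 2
        for year, length in points.items()
    )

# Mimic Excel Solver: scan candidate Smax values (all above the largest
# observation) and keep the one with the smallest SSE.
best = min(range(410, 700, 5), key=lambda s: sse(s, data))
b, ti = fit_logistic(best, data)
print(best, round(ti, 1))
```

The candidate Smax values must all exceed the largest observed length, otherwise the log transform is undefined, which mirrors the constraint one would place on Solver.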
Transportation Deployment Casebook/2024/Guangzhou Metro. 1 Introduction. Guangzhou City, commonly known as Guangzhou and also known by the abbreviations Guang, Sui, and Yangcheng, is the provincial capital and a sub-provincial city of Guangdong Province, the People's Republic of China, and one of the first batch of coastal open cities. Guangzhou Metro is the urban rail transit system of Guangzhou City, Guangdong Province, China. The operation scope of Guangzhou Metro covers Guangzhou City and Foshan City. As of January 2024, Guangzhou Metro has 16 operating lines and 8 under construction. The network is currently around 653km long, with approximately 171.5km more under construction. 2 Quantitative Analysis. 2.1 Data source. To analyze the development history of the Guangzhou subway and the role of the subway in the urban development process of Guangzhou, the average annual number of subway passengers in Guangzhou from 1997 to 2023, the subway mileage from 1997 to 2023, and the permanent population of Guangzhou from 1997 to 2023 were selected as growth data. Table 1 below shows the population of Guangzhou, Table 2 the mileage of the subway in Guangzhou, and Table 3 the annual number of passengers on the Guangzhou subway. Table 1 Guangzhou population, 1997 to 2022 (in millions) Table 2 Guangzhou Metro mileage, 1997 to 2023 (in kms) Table 3 Guangzhou annual transit ridership, 1997 to 2023 (in millions) These data are then used to estimate a three-parameter logistic function: S(t) = K/[1+exp(-b(t-t0))] where S(t) is the predicted data, K is the carrying capacity of the system (in our case, the maximum annual ridership or the maximum population), t is time (years), and t0 is the year in which 1/2 K is achieved. The S(t) formula can be rearranged into the linear form: LN(Ridership/(K - Ridership)) = bt + c or LN(Metro mileage/(K - Metro mileage)) = bt + c The K value in the above formula is a measure of saturation. 
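The linearisation above and its back-transformation can be sketched in Python. The values of K, b, and c below are invented for illustration, not the regression results from the tables:

```python
import math

K = 4000.0           # assumed carrying capacity (illustrative, not the fitted value)
b, c = 0.25, -503.0  # assumed regression coefficients (illustrative)

def linearise(ridership):
    """The transform LN(R / (K - R)), which the text regresses on t as bt + c."""
    return math.log(ridership / (K - ridership))

def s_curve(year):
    """Back-transform the fitted line into S(t) = K / (1 + exp(-(b*t + c)))."""
    return K / (1 + math.exp(-(b * year + c)))

t0 = -c / b  # the year at which half of K is reached
print(round(t0, 1), round(s_curve(t0), 1))
```

Note that exp(-(bt + c)) equals exp(-b(t - t0)) with t0 = -c/b, so the fitted line and the S(t) formula in this section describe the same curve.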
Since the Guangzhou Metro has not reached saturation, the least squares method is used to find the value of K. The following two tables show the K value for Guangzhou annual transit ridership and the K value for the Guangzhou population respectively. Table 4 Determining the value of K (annual ridership) Table 5 Determining the value of K (population) 2.2 Analysis. The life cycle phases are shown in Figure 1, Figure 2, and Figure 3. It can be seen from Figure 1 that passenger travel volume on the Guangzhou Metro is close to the mature stage. The first half of the existing data is in good agreement with the S-curve. However, there was a significant decline in passenger travel between 2019 and 2020. This is due to the impact of COVID-19: during this period, a series of policies limited the number of subway trips and the development of Guangzhou's subway. Figure 2 shows the growth in Guangzhou subway operating mileage. There are two stages in the figure where the mileage increased rapidly. The first is around 2010, when Guangzhou hosted the Asian Games, so policies were supportive at this time. The second is between 2016 and 2019; during this stage, the average annual number of trips on the Guangzhou Metro also increased dramatically. Broadly, the average annual number of trips is positively correlated with the total operating mileage of the subway. At the same time, there is always a plateau period of 1 to 2 years before and after each rapid increase in mileage. This is a characteristic of the subway mode of transportation: a subway requires a certain amount of time to plan and construct, as well as a lot of investment. This is also the reason why subways are not suitable for small and medium-sized cities. Figure 3 shows the population growth curve of Guangzhou. The first half is basically consistent with the S-curve, but in the second half population growth may be gradually slowed by factors such as social, economic, and cultural conditions and city size. 
Therefore, the second half of the curve may not conform to the S-curve pattern. Regression Results. Table 6 Regression results for ridership Table 7 Regression results for population 3 Qualitative Analysis. 3.1 Mode description. In 1863, the first subway opened in London, England. Before the advent of the subway, the main way people traveled was by horse-drawn carriage. Compared with carriages, buses and other travel modes, the advantages of the subway are: 1. The subway has a large transportation capacity compared to other modes of transportation. 2. The subway has a high punctuality rate. 3. Traveling by subway takes less time. 4. The subway mainly uses underground space, reducing the use of above-ground space. As the urban population continues to increase, subways have increasingly become a major indicator of whether a city can become an international metropolis. The attention a subway generates, the increase in popularity along its lines, and its pull on facilities such as residential buildings, commercial outlets, star-rated hotels, cultural, educational and health facilities, and urban complexes are difficult for any other means of public transportation to match. Alongside these advantages, the subway also has many shortcomings: large investment, long construction periods, and high operation and maintenance costs. The biggest flaw is the scale of its operating losses. Especially in cities with a small total passenger flow or characterized by tidal passenger flow, the subway will not only fail to create the economic benefits it should, but will also tighten local government debt. 3.1.1 Broad service coverage area. Guangzhou Metro is currently one of the most extensive and attractive subway systems in the world, and the third largest urban rail transit system in both mainland China and the world. 
As of December 28, 2023, Guangzhou Metro has 271 operating stations covering 12 administrative districts in Guangzhou as well as Chancheng District, Nanhai District, and Shunde District in Foshan City. The following is the specific information for each line of the Guangzhou Metro. 3.1.2 Comfortable ride experience. In order to give passengers a better riding experience, Guangzhou Metro has added many people-centred measures. 1. Women's carriages: Guangzhou Metro began piloting women's carriages on Line 1 from June 28, 2017. From 07:30 to 09:30 and 17:00 to 19:00 on weekdays (excluding holidays), the last carriage of Guangzhou Metro Line 1 towards Guangzhou East Station and the first carriage towards Xilang are set aside as women's carriages. 2. Mother and baby rooms: At the end of 2016, Guangzhou Metro built its first mother and baby room on a pilot basis at Guangzhou South Station. From 2017, mother and baby rooms were set up at newly opened line stations. Since 2018, Guangzhou Metro has also overcome difficulties such as limited station space, insufficient water, electricity, and ventilation, and constrained passenger flow lines on existing lines, and has added independent mother and baby rooms at existing-line stations that meet the conditions for renovation. As of May 2022, a total of 145 mother and baby rooms have been built on the Guangzhou Metro network, serving an average of more than 1,000 parent and infant passengers every day. Six stations, including Guangzhou South Railway Station, Nansha Passenger Port Station, Huadu Square Station, Airport North Station, Conghua Passenger Transport Station, and Zengcheng Square Station, were awarded Guangzhou Mother and Baby Room Demonstration Site status. 3.1.3 Diverse ticketing methods. Guangzhou Metro supports a variety of ticket purchasing methods, including one-way tickets, day tickets, bus QR codes, credit cards, Apple Pay, and facial recognition. 
There are a total of 8 ticket purchasing methods. Among them, the facial recognition entry function introduced on the east extension of Guangzhou Metro Line 5 and the second phase of Line 7 in December 2023 points to a new direction for the future development of ticketing in subway systems. 3.2 Technology. 3.2.1 Evolution, prevention and control mechanisms for major risks in urban rail transit operations. Guangzhou Metro has achieved technological breakthroughs in five areas. First, panoramic scanning, monitoring and early-warning technology for fires on trains running through long sections, enabling accurate monitoring and early warning of fires on moving trains. Second, vehicle-ground collaborative detection of objects intruding beyond the line of sight, together with risk-analysis technology, enabling coordinated, dynamic and accurate beyond-line-of-sight identification. Third, intelligent flood prediction, early warning, and dynamic prevention and control decision-making technology, enabling rapid spatial-temporal warning and prevention along the full flow path in suddenly flooded tunnels. Fourth, network passenger flow risk prevention and control technology coordinating passengers, trains, the line network, and information, enabling network passenger flow paths to be reshaped. Fifth, a "scenario-response" digital emergency decision-support technology, supporting the joint prevention and control of major operational risks. 3.2.2 Signalling system. In terms of signalling systems, Guangzhou Metro Lines 1, 2, and 8 adopt the Siemens FTGS train operation control system, and other lines adopt CBTC train operation control systems. Notably, the CBTC train operation control system used by Guangzhou Metro Line 7 is fully Chinese. 
The development and application of the MTC-I type CBTC train operation control system, with independent intellectual property rights, ended foreign manufacturers' decade-long monopoly on China's urban rail transit signalling systems. 3.3 The role of policy in the birthing phase. 3.3.1 The role of policies in overall development. In the early stages of Guangzhou's subway construction, it received support from many policies. In 1984, the Guangzhou Metro Preparatory Office drew on the experience of the Hong Kong Metro and proposed, for the first time in the country, the idea of "focusing on transportation, taking into account civil defense" and building subway lines along major transportation corridors, and received approval from the relevant departments. This change in design thinking brought Guangzhou subway construction into the fast lane. Subsequently, Guangzhou began to compile subway network planning and engineering plans. Through policy support, the initial planning of the Guangzhou Metro was to develop along its lines based on the TOD model, which played a guiding role in the early stage of development. The "Recent Line Network Planning and Implementation Adjustment Plan in 2000", issued in 1997, proposed giving priority to the construction of Lines 3 and 4, which had been modified to extend deep into the south to support urban development. On December 29, 2002, the first section of Line 2 was opened for trial operation. With this, the cross-shaped ("十") backbone network of the Guangzhou Metro initially took shape. At present, the construction of the basic framework of the Guangzhou Metro in all directions has been initially completed. On this basis, due to the overall urban expansion of Guangzhou and the bid to host the 2010 Asian Games, the development of the Guangzhou Metro entered its peak period. 
In August 2005, the National Development and Reform Commission officially approved the "Guangzhou Urban Rapid Rail Transit Recent Construction Plan" and agreed to build multiple lines between 2005 and 2010 with a total length of 127.6 kilometers. In February 2009, the National Development and Reform Commission approved the "Guangzhou City Recent Construction Plan Adjustment" (2005-2012), agreeing to appropriate adjustments to the construction tasks and goals of the July 2005 "Guangzhou City Rapid Rail Transit Recent Construction Plan". In July 2012, the National Development and Reform Commission approved Guangzhou’s recent urban rail transit construction plan (2012-2018). Only the northern extension of Line 8 and Line 11 are urban lines; the rest are suburban radial lines. In order to meet the development needs of the Guangzhou-Foshan (Guangfo) metropolitan area, in 2015 the National Development and Reform Commission approved the construction of the first-phase adjustment section of Line 7. This line connects the two cities of Guangzhou and Foshan, further deepening the connection between them. On July 30, 2020, the "Guangdong-Hong Kong-Macao Greater Bay Area Intercity Railway Construction Plan" was approved by the National Development and Reform Commission. Since then, policy has been more inclined to support lines linking Guangzhou City and the Greater Bay Area, including the southern extension of the original Line 18 (Nansha to Zhongshan (Zhuhai) intercity, 79km), the northern extension of Line 18 (from Guangzhou East to the Huadu-Tiangui intercity, 38km), the northern extension of Line 22 (intercity from Fangcun to Baiyun Airport, 39km) and the original Line 28 (intercity from Foshan via Guangzhou to Dongguan, 107km). 3.3.2 Changes in charging policy. According to Guangzhou’s public transportation fare discount adjustment plan, starting from September 1, 2023, the preferential policies for Guangzhou subways and buses changed. 
The new preferential plans are as follows: ordinary passengers who spend more than 80 yuan but less than 200 yuan on the Guangzhou subway and buses in a calendar month enjoy a 20% discount; spending over 200 yuan earns a 50% discount. This means more discounts for frequent riders, not only on the subway but also on buses and water buses. 3.4 Market development. Due to the nature of subway operations, Guangzhou Metro does not have any commercial competitors. Therefore, the development of the market is mainly related to population growth. The larger the population base in Guangzhou, the higher the proportion of people choosing to travel by subway, and the broader the Guangzhou subway market will be. The S-curve shows that the resident population of Guangzhou is still in its rising stage, so the market for the Guangzhou Metro should also continue to grow in the future.
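One literal reading of the fare tiers described in section 3.3.2 can be sketched as a small helper. This illustrates the tiers exactly as stated; the official scheme may apply the discounts differently (for example, only to the marginal spend above each threshold):

```python
def monthly_discount_rate(spend_yuan: float) -> float:
    """Discount tier for one calendar month's combined subway and bus spend,
    following the tiers as stated in section 3.3.2 (illustrative only)."""
    if spend_yuan > 200:   # over 200 yuan: 50% discount
        return 0.50
    if spend_yuan > 80:    # over 80 but at most 200 yuan: 20% discount
        return 0.20
    return 0.0             # 80 yuan or less: no discount

# e.g. a passenger spending 120 yuan in a month falls into the 20% tier
print(monthly_discount_rate(120))
```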
Transportation Deployment Casebook/2024/Lifecycle of Sydney Buses. The lifecycle of the Sydney bus network covers the birth, growth, maturity and potential decline of the service, analysed through data analytics and computer modelling. A time-based approach is used to plot its yearly patronage or track length over the network's entire lifetime. An S-curve model using the logistic equation with three parameters forms the theoretical basis for this approach. Background. The bus network in Sydney, Australia forms the second largest public mode of travel by patronage in the state of New South Wales, behind the train service. According to Transport NSW, in 2023, 48.9% of single trips were completed using the train/metro, whereas 38.1% of trips were completed by bus. As of 2015, the network covers over 25,000 km of route length. It services the CBD and the suburban areas of the Greater Sydney region and operates under the Opal card scheme introduced in 2013 to 2014. The lifecycle of the bus network in Sydney spans over 100 years and has changed with the technology and policy needs of the city. This lineage runs from the early adoption of horse-drawn omnibuses to the modern engine-powered articulated and double-decked buses. It is estimated that the network has grown from 8.4 million yearly passengers in 1900 to over 308.8 million in 2014, according to the latest figures from the Bureau of Infrastructure and Transport Research Economics (BITRE) before the introduction of Opal. The bus network is considered quite a mature mode of transport, as it has saturated the region for 30 years without increasing its ridership significantly. Due to recent events such as Covid-19, ridership is still recovering: it only surpassed 200 million yearly passengers again in 2023, still below its pre-Covid figure of 308.5 million in 2019. History. The historical context of the bus network in Sydney puts the numbers into a more human perspective. 
The trends in the bus network tend to reflect saturation, without much room for growth, only maintenance. Birthing. In the early 20th century, the tram network dominated the city, followed by the ferry and heavy rail networks. Since the motor car had not yet taken over the market, the bus network was simply made up of horse-drawn omnibuses, which could not hold more than around 10-15 people before being put under severe stress. This is in contrast to the early trams, which were, at the time, just recently electrified, and could serve well over twice the capacity of a regular bus while having more connections and integration for faster travel times. Following the end of the Great War, buses too became motorised and gained traction for their versatility and low maintenance and running costs. This happened at the same time as the modern personal car became a staple, as production costs decreased, paved roads became more common and the urban landscape expanded. This spurred a large number of privately owned bus companies into operation, as the transport had not been formally systematised by the state government. In 1927, over 500 motor-engined buses served the city while being unregulated. In contrast, the trams were widely adopted by the city, and controlling the major modes of public transport was an important opportunity for the government to seize. In 1930, the New South Wales government intervened in bus network operations by limiting the freedom to open new routes and services through taxation and legislation. The Transport Act of 1931 was passed to stifle the competition, as private owners could not afford the high costs that the government enforced. A condensed timeline from BITRE shows that the bus service was still in its infancy, but it was gaining popularity as the technology continued to accelerate, especially after the Second World War. Growth. 
The comparison and competition with the tram system continued through the 40s and 50s, as the popularity of one mode sacrificed the patronage of the other. The exception occurred in the middle of World War Two, when integration of the two services was used to save running costs for buses and trams; this proved important as running costs became a bigger problem later on, a pattern experienced across the globe. After the war, buses were upgraded and received further funding as the idea of a widespread bus urban public transport (UPT) mode became more popular. Similarly to trends overseas, such as in the U.S.A., rising car adoption, falling fuel prices, and stronger, lighter metals meant that the government could look into cheaper options that could gain capital. BITRE offers one perspective on how attractive petrol- or diesel-powered vehicles would have been, given their real price relative to 2011-12. From the beginning of World War II until its end (1939-1945), the mode saw an increase in yearly ridership of nearly 100 million people, from 65.8 to 159.8 million. This can be thought of as the inflection point of steepest rise in the mode's lifecycle. From 1946 to the end of the 70s, the bus saw a steadier rise in popularity as the tram network was virtually phased out in favour of the lighter-capacity UPT that could now reach greater distances and was more versatile and flexible in times of need. In 1969, bus patronage reached a maximum of over 328.1 million yearly passengers, the first true peak of the mode and a sign of maturity and saturation over the upcoming decade. Maturity. Despite the bus taking over from the tram system and becoming the dominant UPT in the 70s and 80s, the numbers show a stagnation in the service, as it was saturated and had reached most areas of the Greater Sydney region.
Again, it followed similar paths to other Australian cities such as Melbourne, stagnating in its development after filling most of its area with bus links. However, Melbourne had a resurgence that is disputed, depending on the boundary conditions of the city limits, as pointed out by the BITRE analysis. As such, the data is not 100% conclusive, as even the bureau acknowledges through its inclusion of different accuracy classifications for its 'UPT Bus' figures in Sydney. Decline. Excluding Covid, due to its external impacts on the wider transport network across the globe, the Sydney bus system has stayed relatively consistent in patronage numbers despite a rising city population and possible demand for a greater number of bus services. As of 2020, the network is five times the size it was in 1925, with more suburban and outer-Sydney buses that travel longer distances. However, with the rise of suburban sprawl, cars have become the dominant mode of transport for most residents, and trains have gained more patronage, becoming the leading mode of transport in 2023. Thus, the bus network has either reached an equilibrium in its usage, or it is under-utilised, as has been argued in recent media and government pieces that debate the efficacy of the system. S-Curve Projection. Methodology. For the projection, the logistic S-curve is a sensible approach to assess the state of the bus network, or of any mode of transport or movement of goods. The unique aspect of being able to project a peak, instead of an infinite rise as in an exponential model, gives it more realistic use scenarios. However, it has to be noted that this logistic curve does not account for lag, which is evident in this mode of transport, as it has not significantly decreased in its use but has also not found a resurgence or a second rise. The following equation sets the foundation for the predicted projection of the network versus the actual figures.
formula_1 The b coefficient presents a challenge: it is the slope of a regression in which the natural log of the ratio between the actual value and the difference between the predicted saturation value and the actual value is plotted against time. formula_2 For this data set, 114 yearly data points are chosen, and b is the slope fitted across them. The intercept is also calculated. This forms a straight line which can be compared against real data. The R-squared value (RSQ) is modelled in Excel to evaluate its effectiveness in realising the logistic S-curve. It is only an approximation, as the method does not account for deviations or sudden falls or rises. Results. For this example, a total of 18 guesses were made for the possible peak, ranging from K = 330 to 500 million yearly patrons in increments of 10 million. The calculation with the biggest RSQ was chosen out of the 18. The range between the smallest and biggest RSQ was 0.033967. This suggests that the maximum is not conclusive, and that it could lie anywhere from 330 to 500 million without much difference. The best-fitting value, however, was 360 million patrons. The graph visualises the logistic S-curve and how it generally follows the trend of the rise and peak of the Sydney buses. The extrapolation, not shown, reaches 360 million passengers, which is predicted to happen in 2027. It also shows the model's inability to account for the stagnation in the numbers, with the sharp climb during the 40s-70s not captured. This might be because of the WWII dip that slowed down progress, with the war effort being the main capital investment for the government, in addition to restrictions and rationing. The table of results is listed below:
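The search over candidate saturation levels described above can be sketched in Python. This is a minimal illustration, not the actual BITRE series: the patronage figures below are made up, and the helper name `fit_logistic` is hypothetical.

```python
import numpy as np

def fit_logistic(years, patronage, K):
    """Linearise S(t) = K / (1 + exp(-b(t - t0))) as
    ln(S / (K - S)) = b*t + c, then fit b and c by least squares."""
    y = np.log(patronage / (K - patronage))
    b, c = np.polyfit(years, y, 1)          # slope and intercept
    resid = y - np.polyval([b, c], years)
    r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    return b, c, r2

# Illustrative patronage series (millions per year), not the real data.
years = np.array([1920, 1940, 1960, 1980, 2000])
patronage = np.array([30.0, 140.0, 280.0, 300.0, 305.0])

# Try K = 330..500 million in 10-million steps (18 guesses), keep the best RSQ.
best = max(
    ((K, *fit_logistic(years, patronage, K)) for K in range(330, 510, 10)),
    key=lambda r: r[3],
)
K, b, c, r2 = best
t0 = -c / b  # inflection year, where patronage reaches K / 2
```

Once the best K is chosen, the year any patronage level is reached can be read off the fitted curve, which is how a crossing year like the 2027 figure above would be obtained.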
Transportation Deployment Casebook/2024/Suzhou Metro. Overview. What is a Metro? A metro is a train system that runs underground or above ground in big cities to help people move around quickly. Background: Suzhou City and Its Metro. Suzhou, a Chinese city housing 10 million residents, ranks as a significant metropolis. Over the past half-century, the city has seen remarkable growth, transforming from a town with limited touristic appeal to a global award winner in urban planning. Suzhou stands as a major industrial hub, holding the second-highest count of factories in China, including those manufacturing iron, steel, textiles, electronics, and computers. The city's robust growth has also fueled the service sector, with tourism being a notable contributor, adding 150,000 Yuan to the city's revenues in 2013. Due to its proximity to Shanghai, a city with a rising demand for premium services, Suzhou has turned into a magnet for foreign investment. Public transportation, epitomized by the Suzhou Metro, is a critical component of these services. The inception of the Suzhou Metro occurred in 2007 when the local government decided to address the transportation needs of its citizens, leading to the metro's construction. The operational lines of Suzhou Metro, namely lines 1, 2, and 4, were initiated in 2007, 2009, and 2010 respectively, becoming fully functional in 2012, 2013, and 2017. At present, these lines traverse a cumulative 120 kilometers across their 93 stations. There are prospective plans to double this coverage to 240 kilometers by the end of the decade. Qualitative Analysis. Historical Background of Suzhou Metro System. Overall development. In 1996, Suzhou City began to study the construction of rail transit [1]. On December 28, 2003, the foundation of the Jinji Lake test section project of Suzhou Rail Transit Line 1 was laid, which was the first rail transit project in Suzhou [2]. On December 26, 2007, the construction of Suzhou Rail Transit Line 1 started [3].
On April 2, 2008, the logo of Suzhou Rail Transit was determined [4]. On December 25, 2009, the construction of Suzhou Rail Transit Line 2 started [5]. On January 12, 2012, Suzhou Rail Transit Line 1 began trial operation [6]. On April 28, Suzhou Rail Transit Line 1 opened for trial operation [7]. On September 27, the construction of Suzhou Rail Transit Line 2 extension, Suzhou Rail Transit Line 4 and branch line started [8]. On December 28, 2013, Suzhou Rail Transit Line 2 opened for trial operation [5]. On December 16, 2014, the construction of Suzhou Rail Transit Line 3 started [9]. On June 28, 2016, the construction of Suzhou Rail Transit Line 5 started [10]. On September 24, Suzhou Rail Transit Line 2 extension opened for trial operation [11]. On December 7, Suzhou Rail Transit Line 4 and branch line began trial operation [12]. On April 15, 2017, Suzhou Rail Transit Line 4 and branch line opened for trial operation [13]. On November 27, 2018, the construction of Suzhou Rail Transit Line 6 and Suzhou Rail Transit S1 Line started [14]. On August 18, 2019, Suzhou Rail Transit Line 3 began trial operation [15]. On September 30, the construction of Suzhou Rail Transit Line 8 started [16]. On December 25, Suzhou Rail Transit Line 3 opened for initial operation, and Suzhou Rail Transit Line 7 started construction [17]. On February 7, 2021, Suzhou Rail Transit Line 5 began trial operation. On June 29, Suzhou Rail Transit Line 5 opened for initial operation. On May 6, 2022, the construction of Suzhou Rail Transit Line 2 extension, Suzhou Rail Transit Line 4 extension, and Suzhou Rail Transit Line 7 extension started. From November 11, Suzhou Rail Transit Line 5 opened its driver's cab to passengers, becoming the first fully automatic operation line in Jiangsu Province to do so. In December, Suzhou Rail Transit Group Co., Ltd. held a special consultation and demonstration meeting, suggesting that the "S1 Line" be renamed as "Line 11". Technology. Signal System.
Suzhou Metro employs the fourth-generation FAO signal system, commonly known as the unmanned driving system. Compared to the third-generation system, the fourth-generation system offers higher automation and reliability. It enables automatic train operation, automatic stopping, passenger embarkation and disembarkation detection, and automatic departure. Additionally, a failure in a single system module does not impact overall system operation. Suzhou Metro utilizes an advanced train control system provided by Siemens, known as the Trainguard MT Communication-Based Train Control (CBTC) system, along with a free-propagation radio WLAN system, a Sicas ECC electronic interlocking system, and AzS axle counters for its signaling and communications. This setup ensures efficient and safe operation of the metro lines [18]. Typical Cases. As part of Suzhou Metro Line 6, Hanqingqiao Station is considered a pioneer in digital transformation. It has introduced a range of intelligent passenger service equipment, including smart service terminals, dual-eye fare gates, intelligent customer service robots, and smart guidance screens. These devices enhance passenger convenience and intelligence during their journeys [19]. Including: Policy in Birthing Phase. Policies have played an important role in the birthing phase of the Suzhou Metro, providing guidance for its expansion and operation. Government approvals and support were critical to initiate the construction phase and expand the network. For example, construction of Line 1 began in 2007, and the line was put into operation in 2012. This was followed by the construction of Lines 2 and 3, each with specific timelines and targets, thus connecting districts and increasing mobility in the city. The government has also introduced a succession of fare and ticketing policies, including discounts and other benefits for users. Market Development.
Suzhou rail transit has been in operation for 10 years and has opened and operated 5 lines with a total mileage of 210 kilometers, transporting more than 2.2 billion passengers. The 5 rail transit lines that have been opened in Suzhou cover an area of about 336 square kilometers and a population of 2,754,500 people in Suzhou. The average daily passenger flow has grown from 100,000 to 1,200,000, with the highest reaching 1,910,000 in a single day, and the proportion of rail transit in public transportation travel has exceeded 50%. In 2022, Suzhou Rail Transit Group completed engineering investment of 20.5 billion yuan, exceeding the annual plan by 5%. Among them, Line 6 completed an annual investment of 7.08 billion yuan, Lines 7 and 8 completed a total annual investment of 7.85 billion yuan, Line 11 completed an annual investment of 5.31 billion yuan, and the project of extending Lines 2, 4 and 7 completed an investment of 253 million yuan. Quantitative Analysis. Data Collection and Methodology. The life cycle can be modelled by an S-curve, which is used to display data over time. S-curves (status vs. time) allow us to determine the periods of birthing, growth, and maturity. Here, it is assumed that the data used for modelling takes on a logistic shape, and a curve is sought that best fits the data. In the current context, to analyze the life cycle of Suzhou Metro, the kilometers operated by Suzhou Metro are taken as the status and the corresponding years as time to plot the S-curve. The required data was collected from the annual reports published by Suzhou Rail Transit over the years from 2012 to 2022. The life-cycle model of Suzhou Metro can be represented using the following equation: S(t) = Smax/[1+exp(-b(t-ti))] where S(t) is the status (operated kilometers) at time t, Smax is the saturation level, b is a coefficient to be estimated, and ti is the inflection time, the year at which half of Smax is reached. I fit the data using least squares to estimate the values of b and ti. Results and Interpretations.
To plot the S-curve and the forecasted model, I obtained the data from the Suzhou Rail Transit 2022 Social Responsibility Report, published by Suzhou Rail Transit Group Co., Ltd. The parameters and the model derived from the regression analysis on the above data are as follows:
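The least-squares step described in the methodology can be sketched with SciPy's `curve_fit`, which estimates the logistic parameters directly. The operated-kilometre figures below are illustrative placeholders, not the values from the annual reports.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, smax, b, ti):
    """Three-parameter logistic: S(t) = Smax / (1 + exp(-b*(t - ti)))."""
    return smax / (1.0 + np.exp(-b * (t - ti)))

# Illustrative operated-kilometre series, not the actual report figures.
years = np.array([2012.0, 2014.0, 2016.0, 2018.0, 2020.0, 2022.0])
km = np.array([26.0, 70.0, 121.0, 157.0, 188.0, 210.0])

# Nonlinear least squares; p0 is a rough initial guess for (Smax, b, ti).
(smax, b, ti), _ = curve_fit(logistic, years, km,
                             p0=(250.0, 0.4, 2016.0), maxfev=10000)
```

Evaluating `logistic(t, smax, b, ti)` for future years then gives the forecasted S-curve to plot against the observed data.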
Transportation Deployment Casebook/2024/Ningpo Metro. Introduction. Metro. After the London Underground, the first metro railway, came into service in 1863, many of the world's major cities constructed their own underground systems. In 2015, nearly 160 cities around the world had metro systems, forty per cent of which were built in the 21st century. Urban transport in Ningpo. As one of the most populated and developed cities in Chekiang Province, Ningpo, with its permanent population of nearly 10 million (2023), attaches great importance to its urban transport system. Currently, Ningpo has a relatively convenient and mature urban transport system, including bus and metro networks. According to the "2022 China Urban Transport Report" released by Baidu Maps, the average walking distance for public transport trips in Ningpo was 791m, the walking distance between the two ends of a connection was 728m, and the walking distance for metro-bus interchanges was 330m. These statistics rank in the top ten among the 100 main cities of China, which means that when people use public transport in Ningpo, they are able to transfer easily in most cases and do not need to walk long distances. Overview of Ningpo Metro. The first metro system in China was the Peking Subway, which was approved in 1965 and began operation in 1969. By 2019, five decades after the first metro system opened in Peking, China had 37 cities with metro systems totalling 5,180.6 kilometres. The Ningpo metro was first proposed in 2003 by the government. The construction of Line 1 started in June 2009, and it was put into operation in 2014. By 2022, Ningpo Rail Transit already had five lines in operation with a total length of about 180 km. The trains used in the Ningpo Metro system are all CRRC Zhuzhou Locomotive designed Type B metro trains. The body is made of aluminium alloy, has a crash-cushioning design and uses LED lighting to reduce energy consumption.
The latest trains on Line 5 have Fully-Automatic Unattended Train Operation (UTO) capability. History and development of Ningpo Metro. Beginning. Since 2003, Ningpo had completed a series of rail transit plans, mainly for the metro, and submitted the construction plan to the state authorities in 2005. In 2006, Ningpo's rail transit construction plan was reviewed and evaluated for feasibility in terms of its programme and technical standards. According to the initial plan, the two lines would be completed by 2015, with a total length of 72.1 kilometres and 45 stations. In 2014, the first phase of Line 1, the construction of which began in 2009, became operational, while the second phase began in 2012 and was completed four years later. Construction of the first phase of Line 2, which forms a cross with Line 1, began at the end of 2010, and the 28.5km line, which connects to the airport, opened in 2015. Development. In 2011, Ningpo put forward its second round of rail transit construction plans, planning to complete the second phase of Line 2 and build Line 3, Line 4 and the first phase of Line 5 between 2013 and 2020, forming a transport network covering the six major urban areas of the city. What is noteworthy in the second round of rail transit construction is the change in funding source: the municipal government signed an agreement with the fiscal entities (district-level governments) under the jurisdiction of Ningpo, whereby the municipal government takes the lead while the district governments also contribute funds to participate in the construction. Under the plan, by 2020, a rail network with a total length of nearly 250 kilometres would be in operation. By the end of 2021, when the first phase of Line 5 was commissioned, the target for the second round of construction had been reached, and the city had a total of 183 kilometres of rail transit. Currently, the Ningpo rail transit system is being built out according to the third phase of the programme.
The third round was approved by the State Council in December 2020. The plan, which covers the period from 2021 to 2026, includes Lines 6, 7, 8, and the extension of two existing lines, Line 1 and Line 4, with a total length of 106.5 kilometres and a planned investment of RMB 87.59 billion. Market. Funding for the construction of the Ningpo Metro came mainly from the government's public coffers, with the municipal government contributing assets in the early stages and the district governments providing a portion of the funding from the second round of construction onwards. Against a background of greater pressure on government finances, the city government has also asked the relevant authorities to make use of market financing in addition to the government's public funds. Advantage. The five lines of the 180-kilometre-long Ningpo Metro basically cover the city's central urban area and connect business districts, train stations, airports and university towns, which to a large extent facilitates people's travel and eases traffic pressure. In terms of promoting tourism, Ningpo Metro not only makes it easier for tourists to travel to various tourist attractions; the metro itself, as a cultural symbol of the city, can also arouse tourists' interest in and love for the city through the metro logo, which symbolises the city's culture, and through the cultural decorations in the metro and metro stations. Quantitative life-cycle analysis of Ningpo Metro. Data collection. Annual passenger volume from 2015 to 2022 (the first line was launched in May 2014, so that year's data is not included in the analysis) is obtained from the website of the China Association of Metros, where the association publishes its annual reports: An S-curve can be used to model the life cycle of Ningpo Metro. A three-parameter logistic function is used: formula_1 where:
Transportation Deployment Casebook/2024/Ferry service in Sydney. Introduction of Ferries. Overview and Characteristics. A ferry is a vessel commonly employed for transporting passengers and/or vehicles over a body of water. Ferries come in a range of sizes and serve various purposes, from compact boats shuttling passengers across brief distances to expansive ships carrying both passengers and vehicles over extended routes. They play an important role in the public transportation systems of waterside cities and islands. Connections between distant locations, like those across bodies of water such as the Mediterranean Sea, are often referred to as ferry services, with some accommodating vehicles. It is believed that early humans utilised logs or other floating objects to traverse bodies of water, which is essentially the concept behind a ferry. Ferries have been utilised in various countries with waterways, and despite advancements in other modes of transportation, ferries have continued to operate in many countries for over two centuries. According to historical records, the Juliana, established by John Stevens, is considered the first steam-powered ferry. It began operating between New York and Hoboken, New Jersey in 1811, challenging Livingston's monopoly in the area. In terms of technology, a steamship, also referred to at times as a steamer, is a vessel propelled by steam. These vessels are often able to travel across seas and are driven by steam engines that turn paddlewheels or propellers. Advantages of Ferry. Steamships were less reliant on wind patterns. Shipping, even under sail, has always had an inherent advantage in the comparatively effortless manoeuvrability of a large vessel across the water. Ferry Market in the world. Ferries operate on several routes worldwide, totalling hundreds of options.
Europe has a high concentration of ferry services, particularly in two major markets: Northern Europe and the Baltic region, as well as across the Mediterranean. Greece's domestic ferry business, with its extensive transportation network connecting several islands, is among the biggest in the world. Japan relies heavily on ferries for transportation. The primary ferry market in North America is located along the Pacific border between the United States and Canada. Ferry operations in South America and Africa are quite limited. There is ferry traffic in the Red Sea. The Philippines in Asia have significant ferry traffic due to their ferry-friendly archipelago. The ferry sector in Indonesia saw significant expansion initially, but has faced setbacks due to intense competition from budget airlines. Hong Kong and Singapore are significant markets for high-speed ferries. In China, there is a growing presence of modern ferries in the Bohai Sea. Invention, Market, and Policy of the Ferry. Before the invention of the ferry. Paddlewheels were often used as the primary method of propulsion on early steamships. This was an efficient method of moving forward under perfect circumstances, but had significant disadvantages in other situations. The paddle-wheel functioned optimally at a certain depth, but any change in the ship's depth due to extra weight caused the paddle wheel to dive deeper, resulting in a significant decline in performance. Invention of ferry (Screw propulsion). The crucial advancement that enabled ocean-going steamers to be practical was the transition from paddle-wheels to screw-propellers for propulsion. Steamships gained popularity rapidly due to the propeller's steady efficiency regardless of operating depth. Due to its smaller size and bulk and its total submersion, the propeller was also less susceptible to damage. The cylinders of an engine powering a paddle steamer are located below the shaft, which is above the waterline.
The first screw propeller was installed on an early steam engine built by James Watt of Scotland at his Birmingham factory. This engine drove a hydrodynamic screw, and Watt is generally acknowledged as having pioneered the technology. Early market development. The inadequate public transportation system in Sydney was a common gripe even back then. Despite plans for a four- or five-hour journey, the boat ride from Sydney to Parramatta often took as long as twelve hours, and in one documented case, it took just under fourteen. Port Jackson's deepwater harbour enabled Sydney to quickly establish itself as the primary seaport in the south-west Pacific area. As the colony expanded and people migrated to the north, west, and south, coastal shipping emerged to provide the city with food and raw materials for its industries, as well as to facilitate transhipment to other regions. Suburbs like Balmain, Pyrmont, and Mortlake, located in the western suburbs of Sydney, sprang up around the waterfront industry. Roughly 80% of Sydneysiders lived within a few minutes' walk of Sydney Harbour until the 1880s. At its busiest, Woolloomooloo Bay had 11 berths, four of which were part of the wharf. Its main purpose was to let Australia's wool products leave the country. The role of policy. When the government drafted laws in 1803, boats were required to be watertight and to carry four oars so that anyone could lend a hand to the boatmen "if they pleased". Additionally, the boats had to carry a mast and sail. This situation persisted until 1831. This government intervention eventually became the foundation of future boat safety. Water transport was highly significant: the port and the rivers that flowed into it were early epicentres of activity, with private companies stepping up to the plate despite governmental regulations and limited resources.
The role of the government in regulating transportation has persisted right up to the present day. Competition, however, did begin to grow between government and privately operated transportation as well. Ferries in Sydney. Birth (1831-1876). Sydney's early colonisation by Europeans gave rise to the Sydney Harbour ferry service. Agricultural settlements along the Parramatta River were serviced by slow and irregular boats that ran from Sydney to Parramatta. The river carried people, mail, and stores prior to the opening of the Sydney-Parramatta rail connection in 1855. Ferry services continued to be vital for transportation along the Parramatta River even after the Sydney-Parramatta railway arrived in 1855, since they primarily serviced towns south of the harbour. To sustain commerce, the Rose Hill Packet, dubbed the "first ferry," was established and served as a link between Sydney Cove and the agricultural community of Parramatta. Round-trip travel on this ship required a week. The first regularly scheduled cross-harbour rowboat in Sydney was run by former convict Billy Blue in the 1830s, from Dawes Point to Blues Point. When the oceangoing steamer Sophia Jane reached Sydney in 1831, it was sometimes used for coastal commerce and towing to Newcastle on the Parramatta River. In 1833, an Australian conveyance firm was established. With a 12-horsepower (8.9-kW) engine, the Surprise was the first paddle steamer ever constructed in Australia, and it started providing regular trips to Parramatta in 1834. Boatmen plied the poorly maintained route to Parramatta in that period. As the tourist industry grew, Henry Gilbert Smith chartered the Brothers in 1855 to operate the first regularly scheduled service between Sydney and Manly. Growth (1876-1914). The New Era and Birkenhead ferries were run by Thomas Henley in the 1890s, and the Balmain-Johnstons Bay region had a ferry boat boom around that time.
Population growth also played a significant role in the growth of the ferry industry. Between 1851 and 1891, the population of Metropolitan Sydney surged from around 54,000 through 96,000 to 383,000. Sydney's share of the overall New South Wales population also rose, from 28% in 1851 to 34% in 1891. During this time, the growth of the ferry business was influenced by both governmental and private factors. The private sector was in charge of providing ferry services and was essential to the development of the industry. Improvements in the Balmain region were brought about by the large-scale engineering and ship repair activities that Thomas Sutcliffe Mort and Captain Rowntree started in Mort Bay. The migration of people into this region created employment prospects in the ferry industry. Regarding the public sector, steps were taken to guarantee that ferry services would be accessible and available as a means of public transit. When smooth transitions between trams and ferries were introduced, public transportation was used more often. Trams arrived at Erskine Street Wharf by the late 1880s, and the location became a centre for tram and ferry services that ran between Parramatta and Balmain. Toll fees can be utilised for the development of facilities, but high tolls can lead to dissatisfaction among users. Fare regulation is often implemented by authorities to prevent monopolistic practices and provide passengers with economical prices. Between New South Wales and Victoria, the Wymah ferry traverses the Hume Weir, travelling half a kilometre in thirteen minutes. Farm vehicles and stock trucks are transported over the Weir on behalf of the nearby agricultural community, and the service is directly managed by the Roads and Traffic Authority (RTA). At first, ferry users had to pay tolls, which varied depending on whether they were walking, riding in a cart, or on horseback.
But when the ferry service became jointly supported by the State Governments of Victoria and New South Wales, as well as the Towong Shire Council, the toll system was eliminated in 1908. The number of people using ferries rose after tolls were eliminated. Furthermore, the development of ports is essential for the advancement of the ferry industry. When the bubonic plague reached Sydney in 1900, the City Council faced criticism for its role in the public health emergency and the rat problem that made it worse. The Rocks and Millers Point slums were renovated by the state government, which also assumed control of the City Council's health authority. The newly formed Sydney Harbour Trust was given administration of these areas. This policy and social atmosphere led to the redevelopment of the docks. Maturity (1915-1946). During the mature phase, development entails the stabilization and establishment of the transport mode and the scaling up of economies. Apart from the Manly ferry service and a few small launch services, Sydney Ferries Limited had a near-monopoly on the Sydney Harbour ferry services in the 1920s. Having purchased the majority of other ferry companies and their assets, it became the biggest ferry service in the world in terms of number of vessels and utilisation rates. The highest-capacity ferries on Sydney Harbour, Kuttabul and Koompartoo, were put into operation by Sydney Ferries Limited in response to the increasing demand for cross-harbour ferry services to Milsons Point. Sydney Ferries met a historically high demand for cross-harbour services throughout the 1920s. Even as a monopoly, Sydney Ferries Limited faced competition from other modes of transportation. It stopped providing cross-Parramatta River services beyond Gladesville in 1928 as a result of competition from railways and roadways. The opening of the Sydney Harbour Bridge took place on March 19, 1932.
As a direct consequence, the number of passengers travelling every year dropped from 40 million to 14 million. The Milsons Point location was the only one where the Sydney Ferries service remained available. Within the first two weeks after the bridge was opened, the vehicle ferry services that were formerly offered between Dawes Point and Blues Point, as well as between Bennelong Point and Milsons Point, were also terminated. The Manly was known for its limited width, which made ferry manoeuvres difficult and time-consuming to perform. To address this issue, the first double-ended screw steam ferry, known as the Manly (II), was developed in 1922. Located in Balmain, New South Wales, it was constructed by Young, Son & Fletcher. In 1922 the voyage to Manly was completed in a record time of 22 minutes, which has not yet been exceeded by conventional ferries. The Lady Chelmsford was converted from steam power to diesel propulsion in order to modernise the fleet, in conjunction with the introduction of diesel engines designed for ferries. With its two-stroke, five-cylinder engine, the Gardner diesel was capable of producing 190 horsepower (141 kW) and achieving a speed of 10.3 knots. The other four "Lady Class" ferries, as well as Karingal and Karabah, were converted in the 1930s, because ferry firms were typically unable to buy new ships in the decades after the Bridge. During this period, new services were introduced that went beyond simple transportation, offering features previously unseen at the time. In 1929, the Viceroy of India was ordered, a ship that introduced a new degree of luxury and speed to the service experience. These vessels were the first Peninsular and Oriental Steam Navigation Company (P&O) ships to include indoor swimming pools.
The propulsion system for all three vessels consisted of enormous electric motors driven by turbo-electric steam turbines. As far as design, popularity, and service were concerned, the 'Strath' liners were superior to the Viceroy of India. They possessed the same degree of luxury and speed as the Viceroy of India, which was able to finish its journey from Britain to Bombay without any disruptions. Decline (1947-1975). From 1946, the Sydney ferry service began to decline. Ferry companies, lacking sufficient funds to purchase new vessels, planned to increase efficiency by converting ships to diesel propulsion. However, the conversion process incurred substantial costs, nearly leading the companies to bankruptcy. Attempts to persuade the government to take over the ferry service initially failed. The failing Sydney Ferries Limited was acquired by the New South Wales government in 1951 for a sum of £25,000. The government purchased assets obtained by the Sydney Harbour Transport Commission, in addition to fifteen ferries and the Balmain plant. The Port Jackson & Manly Steamship Company held the operation and maintenance contract. In response to Sydney Ferries Limited's decision to cease ferry operations, the government enacted the Sydney Harbour Transport Act (No. 11 of 1951). Under this legislation, Sydney Harbour Transport was established in July 1951 as a body whose members included the Under Secretary of the Treasury, the Commissioner for Government Transport, and the President of the Maritime Services Board. Although the board controlled the facilities and the boats, it did not handle the recruitment of staff or other operational duties. This minimal-engagement approach revealed the government's lack of expertise in specialised matters when, by 1974, it was compelled to assume direct control and management of almost all Sydney ferries. Life Cycle of Ferries in Sydney. Quantitative Analysis. Every transport system has a lifespan. 
This quantitative investigation examined the full Sydney ferry life cycle: first growth, then maturity, then decline. Sydney ferry life-cycle phases are determined from yearly patronage statistics. The data come from the Bureau of Infrastructure and Transport Research Economics (1900–2013) and New South Wales Government Annual Reports (2013–2023). However, due to limited access to data on passenger usage in the 1800s, the analysis was conducted starting from the 1900s. Using the observed data, a three-parameter logistic function was estimated. The accuracy of the function, which models passenger numbers, can be assessed by plotting it against the observed data. S(t) = Smax/[1 + exp(-b(t - t0))], where: S(t) = status measurement (annual passengers carried), t = time (in years), t0 = inflection point time (the year in which half of Smax is reached), Smax = saturation level (maximum annual passenger numbers), b = the coefficient to be estimated. The model is fitted via the linearised regression Y = bX + c, where Y = ln(Passengers/(Smax - Passengers)) and X = year. Parameters. To fit the model as closely as feasible (R-squared near 1), several Smax values were tested, and the final Smax value was obtained by successively narrowing the range of candidates to identify the true Smax boundary. Analysis. 1900-2022 (Total). Figure 2 depicts the real and predicted Sydney ferry passenger data from 1900 to 2022. Although data from 1830 would have provided a clearer life cycle, the analysis proceeded based on the available data. As shown in Figure 1, there are two peak points in the lifecycle of the ferry until 2022. The first peak occurred in 1931 with 45 million passengers, followed by a sharp decline in 1932 due to the opening of the Harbour Bridge. 
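The estimation procedure described above (linearise the three-parameter logistic for a given Smax, then fit a straight line by least squares) can be sketched in Python. This is a minimal illustration of the method, not the original Excel workbook; the function names are the author's own, and the data in the usage example below are synthetic placeholders rather than the actual patronage figures.

```python
import math

def logistic(t, s_max, b, t0):
    """Three-parameter logistic: S(t) = Smax / (1 + exp(-b*(t - t0)))."""
    return s_max / (1.0 + math.exp(-b * (t - t0)))

def fit_logistic(years, passengers, s_max):
    """Least-squares fit of the linearised form
    Y = ln(S / (Smax - S)) = b*X + c, with X the year.
    Returns (b, t0, r_squared), where t0 = -c/b is the inflection year
    (the year at which half of Smax is reached)."""
    ys = [math.log(p / (s_max - p)) for p in passengers]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    # Ordinary least squares for the slope b and intercept c.
    b = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
        sum((x - mx) ** 2 for x in years)
    c = my - b * mx
    # R-squared of the linearised regression.
    ss_res = sum((y - (b * x + c)) ** 2 for x, y in zip(years, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b, -c / b, 1.0 - ss_res / ss_tot
```

Because the linearisation is exact when the data truly follow a logistic, feeding the fit synthetic logistic data recovers b and t0 almost exactly, with R-squared near 1; on the real patronage series the R-squared instead measures how logistic-like each period is.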
Following a period of decline after 1950 due to the increasing use of public transportation and private cars, there was a resurgence in the 1970s when the government took over ferry services, accompanied by investments. However, passenger numbers have since stabilised. The second sharp drop occurred in 2020, falling drastically to 1 million passengers, attributed to the COVID-19 pandemic. While there has been a recovery since, passenger numbers have not surpassed those seen in the early 1900s. The regression results for the period from 1900 to 2022 yielded an R-square of 0.468, indicating low fitness of the regression model, and a standard deviation of 5.6, suggesting significant short-term fluctuations in passenger numbers. Therefore, the analysis proceeded by dividing the data into segments. 1900-1914 (Growth). Figure 3 depicts the real and predicted ferry passenger data from 1900 to 1914. The positive value of "b" indicates a positive slope, and the R-square value has significantly increased to 0.636 compared to the total graph (1900 to 2022). The standard error has also decreased significantly, to 2.6. While there are differences between the curves of the predicted and actual data, the trend direction can still be discerned, and overall improvement is observed compared to the data for the entire period (Figure 1). Given the historical context, ferry services in Sydney in the early 1900s remained crucial public transportation amid rapid population growth, as indicated by the data portraying a phase of growth. 1915-1946 (Maturity). Figure 4 illustrates the real and predicted ferry passenger data from 1915 to 1946. The value of "b" is -0.026, indicating a negative slope, and the R-square value has sharply declined to 0.265, indicating a significant decrease in the fitness of the regression model. The standard error has also increased significantly, to 4.7. 
In 1931, ferry usage peaked at 520,000 passengers, but in 1932, with the opening of the Harbour Bridge, it plummeted to 200,000 passengers. While there was some recovery until 1946, considering the negative value of "b" during this period, it can be inferred that this was a phase of maturity, marked by reaching the peak followed by low growth. 1947-1975 (Decline). Figure 5 illustrates the real and predicted ferry passenger data from 1946 to 1975. The value of "b" is -0.025, indicating a negative slope, and the R-square value has increased again to 0.65, indicating an improvement in the fitness of the regression model. The standard error has also decreased significantly, to 1.58. From the late 1940s on, many ferry services were reduced or suspended, or had been mobilised for the war effort, owing to the difficulties of the Depression and the War. Despite attempts to transition to diesel engines in the 1950s and the introduction of premium services like the Manly III, which significantly reduced travel time, demand struggled to recover. In the 1960s, services to Neutral Bay, Cremorne, and Mosman experienced modest growth alongside harbour housing developments in the area, leading to a slight increase in passenger demand. Despite efficiency improvements through the introduction of new technologies and fast premium services, ferry demand decreased from 1946 to 1975. This period can be considered a phase of decline. 1976-present (Re-growth). Figure 6 depicts the real and predicted ferry passenger data from 1976 to 1990. The value of "b" has shifted back to positive, at 0.052, and the R-square value has significantly increased to 0.88, making it the most accurate of the regression models. The standard error has also decreased substantially, to 0.938, indicating a close match between the graphs. 
During the 1980s, the inaugural Great Ferry Race was held as part of the Festival of Sydney, becoming an annual event since then, now held on Australia Day. Additionally, the new "Freshwater-class" Manly ferries, the largest ferries to run on the harbour, were introduced during this period. This period can be viewed as a phase of resurgence, marked by government organisational changes, new events, and the introduction of modernised ferries. 1991-2022 (Stability). Figure 7 illustrates the real and predicted ferry passenger data from 1991 to 2022. The value of "b" has shifted back to negative, at -0.025, and the R-square value has significantly decreased to 0.05, the lowest among all the regression models. The standard error, however, was the lowest at 0.68, despite the low R-square value. This discrepancy can be attributed to the sharp decline in 2020 caused by the COVID-19 pandemic. The demand for ferries declined drastically due to the effects of the pandemic, with a gradual recovery in demand observed after 2022. However, demand has stabilised at a consistent level with slight fluctuations, indicating a phase of stability and a minor decline post-pandemic. Conclusion. Figure 8 illustrates the phases of the lifecycle of Sydney ferries obtained through regression analysis, evaluating the accuracy of actual versus predicted data. In some cases, analysing the entire duration resulted in a significant decrease in the R-square value; segmenting the data into different periods helped improve this. One limitation was the inability to include the "birth" phase due to restricted access to early Sydney ferry data. If more data becomes available in the future, it will be necessary to revisit the analysis and include this phase.
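The segmentation strategy used in the analysis (fit the linearised logistic separately on each period, then compare the sign of b and the R-squared across segments) can be sketched as follows. This is an illustrative reconstruction, not the original workbook: the helper names are the author's own, and the data in the usage note are synthetic placeholders, not the BITRE series.

```python
import math

def fit_segment(years, passengers, s_max):
    """Least-squares fit of Y = ln(S/(Smax - S)) = b*X + c on one period;
    returns (b, r_squared) for that segment alone."""
    ys = [math.log(p / (s_max - p)) for p in passengers]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
        sum((x - mx) ** 2 for x in years)
    c = my - b * mx
    ss_res = sum((y - (b * x + c)) ** 2 for x, y in zip(years, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b, 1.0 - ss_res / ss_tot

def fit_by_period(series, s_max, periods):
    """Split a {year: passengers} series at the given (start, end)
    boundaries and fit each segment separately, mirroring the
    growth/maturity/decline split used in the analysis."""
    results = {}
    for start, end in periods:
        yrs = [y for y in sorted(series) if start <= y <= end]
        results[(start, end)] = fit_segment(yrs, [series[y] for y in yrs], s_max)
    return results
```

On a rising segment (e.g. 1900-1914) the fitted b comes out positive, and on a falling segment (e.g. 1947-1975) negative, which is exactly how the b sign is read in the period-by-period discussion above.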
Transportation Deployment Casebook/2024/Sydney Bus. The lifecycle of the Sydney bus network concerns the birth, growth, maturity and potential decline of its service, analysed through data analytics and computer modelling. A time-based approach is used to plot the scale of its yearly patronage or track length over the network's entire lifetime. An S-curve model using the logistic equation with three parameters forms the theoretical basis for this approach. Background. The bus network in Sydney, Australia forms the second largest public mode of travel by patronage in the state of New South Wales, behind the train service. According to Transport NSW, in 2023, 48.9% of single trips were completed using the train/metro, whereas 38.1% of trips were completed by bus. As of 2015, it covers over 25,000 km of route length. It services the CBD and the suburban areas of the Greater Sydney region and operates under the Opal Card scheme introduced in 2013-2014. The lifecycle of the bus network in Sydney spans over 100 years and has changed with the technology and policy needs of the city. This lineage stretches from the early adoption of horse-drawn omnibuses to the modern engine-powered articulated and double-decker buses. It is estimated that the city has grown from 8.4 million yearly passengers in 1900 to over 308.8 million in 2014, the latest figures from the Bureau of Infrastructure and Transport Research Economics (BITRE) before the introduction of Opal. The bus network is considered quite a mature mode of transport, as it has saturated the region for 30 years without significantly increasing its ridership. Due to recent events such as Covid-19, ridership is still recovering: it only surpassed 200 million yearly passengers again in 2023, still below its pre-Covid figure of 308.5 million in 2019. History. The historical context of the bus network in Sydney puts the numbers into a more human perspective. 
The trends in the bus network tend to reflect saturation, with little room for growth beyond maintenance. Birthing. In the early 20th century, the tram network dominated the city, followed by the ferry and heavy rail networks. Since the modern car had not yet been motorised to take over the market, the bus network was simply made up of horse-drawn omnibuses, which could not hold more than around 10-15 people before being put under severe stress. This is in contrast to the early trams, which were, at the time, just recently electrified and could serve well over twice the capacity of a regular bus while having more connections and integration for faster travel times. Following the end of the Great War, the buses too became motorised and gained traction for their versatility and low maintenance and running costs. This was happening at the same time as the modern personal car became a staple, as production costs decreased, paved roads became more common and the urban landscape expanded. This sparked a large number of privately owned bus companies, as the transport mode had not yet been formally systematised by the state government. In 1927, over 500 motor buses served the city while being unregulated. In contrast, the trams were widely adopted by the city, and control of the major modes of public transport was an important opportunity for the government to seize. In 1930, the New South Wales government intervened in the bus network's operations by limiting its freedom to open new routes and services through taxation and legislation. The Transport Act of 1931 was passed to stifle the competition, as private owners could not pay the high costs that the government enforced. A condensed timeline from BITRE shows that the bus service was still in its infancy, but it was gaining popularity as the technology continued to accelerate, especially post Second World War. Growth. 
The comparison and competition with the tram system continued through the 40s and 50s, as the popularity of one mode came at the expense of the patronage of the other. The exception occurred in the middle of World War Two, when integration of the two services was used to save running costs for buses and trams; this proved important as it became a bigger problem later on and was experienced across the globe. After the war, the buses were upgraded and received further funding as the idea of a widespread bus urban public transport (UPT) mode became more popular. Similarly to trends overseas, such as in the U.S.A., car adoption, together with decreasing fuel prices and stronger, lighter metals, meant that the government could look into cheaper options that could gain capital. BITRE offers one perspective on how attractive petrol- or diesel-powered vehicles would have been, based on their real price compared to 2011-12. From the beginning of World War Two until its end (1939-1945), yearly ridership increased by nearly 100 million people, from 65.8 to 159.8 million. This can be thought of as the inflexion point of highest rise in the mode's lifecycle. From 1946 all the way to the end of the 70s, the bus saw a steadier rise in popularity as the tram network was virtually phased out in favour of the light-capacity UPT that could now reach greater distances and was more versatile and flexible in times of need. In 1969, bus patronage reached a maximum yearly patronage of over 328.1 million, the first true peak of the mode and a sign of maturity and saturation over the upcoming decade. Maturity. Despite the bus taking over from the tram system and becoming the dominant UPT in the 70s and 80s, the numbers show a stagnation in the service, as it was saturated and had reached most areas of the Greater Sydney region. 
Again, it followed similar paths to other Australian cities such as Melbourne, as its development stagnated once most areas had been filled with bus links. However, Melbourne had a resurgence that is disputed based on the boundary conditions of city limits, as pointed out by the BITRE analysis. As such, the data is not 100% conclusive, as even the bureau acknowledges through its inclusion of different accuracy classifications for its 'UPT Bus' figures in Sydney. Decline. Excluding Covid, due to its external impacts on the wider transport network across the globe, the Sydney bus system has stayed relatively consistent in patronage numbers despite a rising city population and possible demand for a greater number of bus services. As of 2020, the network is five times the size it was in 1925, with more suburban and outer-Sydney buses that travel longer distances. However, with the rise of suburban sprawl, cars have become the dominant mode of transport for most residents, and trains have gained more patronage, becoming the leading mode of transport in 2023. Thus, the bus network has reached an equilibrium in its usage, or is under-utilised, as has been argued in recent media and government pieces debating the efficacy of the system. S-Curve Projection. Methodology. For the projection, the logistic S-curve is a sensible approach to assessing the state of the bus network, or of any mode of transport or movement of goods. Its unique ability to project a peak, instead of an infinite exponential rise, gives it more realistic use scenarios. However, it has to be noted that this logistic curve does not account for lag, which is evident in this mode of transport: it has not significantly decreased in its use but has also not found a resurgence or a second rise. The following equation sets the foundation for the predicted projection of the network versus the actual figures. 
S(t) = K/[1 + exp(-b(t - t0))], where S(t) is the yearly patronage, K is the saturation level (maximum yearly patronage) and t0 is the inflection-point year. The b coefficient is the slope of the linearised form, in which the natural log of the ratio of the actual value to the difference between the saturation value and the actual value is regressed against time: Y = ln(S/(K - S)) = bX + c. For this data set, 114 yearly data points are used, and b is the slope fitted across them. The intercept c is also calculated. This forms a straight line that can be compared against the real data. The R-squared value (RSQ) is modelled in Excel to evaluate how well the logistic S-curve is realised. The method is only an approximation, as it does not account for deviations or sudden falls or rises. Results. For this example, a total of 18 guesses were made for the possible peak, ranging from K = 330 to 500 million yearly patrons in increments of 10 million. The largest RSQ was chosen out of the 18 calculations. The range between the smallest and largest RSQ was 0.033967. This suggests that the maximum number is not conclusive and could lie anywhere from 330 to 500 million without much difference. The best-fitting maximum, however, was 360 million patrons. The graph visualises the logistic S-curve and how it generally follows the trend of the rise and peak of the Sydney buses. The extrapolated curve is predicted to reach 360 million passengers in 2027. It also shows the model's inability to account for the stagnation in the numbers, with the sharp climb during the 40s-70s not captured. This might be because of the WW II dip that slowed down progress, with the war effort being the main capital investment for the government, in addition to restrictions and rations. The table of results is listed below:
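The grid search over candidate peaks described above (try each K from 330 to 500 million in 10-million steps and keep the one with the largest RSQ) can be sketched in Python. This re-expresses the Excel procedure as code for illustration only; the function names are the author's own, and the usage note below runs it on a synthetic logistic series rather than the actual 114-year BITRE data.

```python
import math

def linearised_r2(years, patronage, k):
    """R-squared of the straight-line fit Y = b*X + c with
    Y = ln(S/(K - S)); requires 0 < S < K for every observation."""
    ys = [math.log(s / (k - s)) for s in patronage]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
        sum((x - mx) ** 2 for x in years)
    c = my - b * mx
    ss_res = sum((y - (b * x + c)) ** 2 for x, y in zip(years, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def best_k(years, patronage, k_min=330.0, k_max=500.0, step=10.0):
    """Grid search over candidate saturation levels K (millions),
    returning the (K, R-squared) pair with the largest R-squared.
    Candidates not exceeding the observed maximum are skipped, since
    the linearisation is undefined there."""
    candidates = []
    k = k_min
    while k <= k_max:
        if all(0 < s < k for s in patronage):
            candidates.append((linearised_r2(years, patronage, k), k))
        k += step
    r2, k = max(candidates)
    return k, r2
```

When the series really follows a logistic with K inside the grid, the search picks that K with an R-squared near 1; on the real data the RSQ spread across candidates was only 0.033967, which is why the article treats 360 million as a best fit rather than a firm bound.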
Transportation Deployment Casebook/2024/Sydney's Bus Network. The lifecycle of the Sydney bus network is about analysing the birthing, the growth, the maturity and potential decline of its service through data analytics and computer modelling. A time based approach is used to plot scale of its yearly patronage or track length over the network's entire lifetime. An S-curve model using the Logistics equation with three parameters forms the theoretical basis for this approach. Background. The bus network in Sydney, Australia forms the second largest public mode of travel by patronage in the state of New South Wales, behind the train line service. According to Transport NSW, in 2023, 48.9% of single trips completed using the train/metro whereas 38.1% of trips were completed by the bus. As of 2015, it covers over 25,000 km of route length. It services the CBD and the suburban areas of the Greater Sydney region and operates under Opal Card scheme introduced in 2013 to 2014. The lifecycle of the bus network in Sydney spans over a 100 years and has changed based on the technology, and policy needs of the city. This lineage includes the early adoptions of the horse carriage omni buses to the modern engine powered articulated and double decked buses. It is estimated that the city has grown from 8.4 million yearly passengers in 1900 to over 308.8 million in 2014 as of latest figures by Bureau of Infrastructure, Transportation and Research Economics (BITRE), before the introduction of Opal. The bus network is considered quite a mature mode of transport as it has saturated the region for 30 years without increasing its ridership significantly. Due to recent events such as Covid-19, the ridership has still been in recovery as it has only surpassed 200 million yearly passengers again in 2023 still below its pre-Covid numbers of 308.5 million in 2019. History. The historical context of the bus network in Sydney puts the numbers into a more human perspective. 
The trends in the bus network tend to reflect saturation without much room for growth only maintenance. Birthing. In the early 20th century, the tram network dominated the city followed by the ferry and heavy rail network. Since the modern car had not been motorised to take over the market just yet, the bus network was simply made of horse carriage omnibuses, that could hold not hold more than around 10-15 people before it was put under severe stress. This is in contrast to the early trams that were, at the time, just recently electrified, and could serve well over twice the capacity of a regular bus while having more connections and integration for faster travel time. Following the end of the Great War, the buses too became motorised and received traction for their versatility and low maintenance and running costs. This was happening at the same as the modern personal car became a staple as production costs decreased, paved roads became more common and the urban landscape increased. This sparked large amount of privately owned bus companies to run as the transport had not been formally systematised by the state government. In 1927, over 500 buses that ran on a motor engine served the city while being unregulated. In contrast, the trams were widely adopted by the city and plans to control major modes of public transport was an important opportunity to seize. In 1930, the New South Wales government intervened the bus network operations by limiting its freedoms to open new routes and more services through taxation and legislation. The Transport Act of 1931 passed to stifle the competition as its private owners could pay the high costs that the government enforced. A condensed timeline from BITRE shows that the bus service was still in its infancy but it was gaining popularity as the technology continued to accelerate especially post Second World War. Growth. 
The comparison and competition with the tram system continued through the 40s and 50s, as the popularity of one mode came at the cost of the patronage of the other. The exception occurred in the middle of World War Two, when integration of the two services was used to save running costs for buses and trams; this proved important as the problem grew later on and was experienced across the globe. After the war, buses were upgraded and received further funding as the idea of a widespread bus urban public transport (UPT) mode became more popular. Similarly to trends overseas, such as in the U.S.A., rising car adoption alongside falling fuel prices and stronger, lighter metals meant that the government could look into cheaper options that could gain capital. BITRE offers one perspective on how attractive petrol- or diesel-powered vehicles would have been, based on their real price relative to 2011-12. From the beginning to the end of WW II (1939-1945), yearly ridership increased by nearly 100 million people, from 65.8 to 159.8 million. This can be thought of as the inflexion point of highest rise in the mode's lifecycle. From 1946 to the end of the 70s, the bus saw a steadier rise in popularity as the tram network was virtually phased out in favour of a lighter-capacity UPT mode that could now reach greater distances and was more versatile and flexible in times of need. In 1969, bus patronage reached a maximum of over 328.1 million yearly passengers, the first true peak of the mode and a sign of maturity and saturation over the upcoming decade. Maturity. Despite the bus taking over from the tram system and becoming the dominant UPT mode in the 70s and 80s, the numbers show a stagnation in the service, as it was saturated and had reached most areas of the Greater Sydney region. 
It followed a similar path to other Australian cities such as Melbourne, stagnating once most areas had been filled with bus links. However, Melbourne had a resurgence that is up for dispute based on the boundary conditions of city limits, as pointed out in the BITRE analysis. As such, the data is not fully conclusive, as even the bureau acknowledges through its inclusion of different accuracy classifications for its 'UPT Bus' figures in Sydney. Decline. Excluding Covid, due to its external impacts on transport networks across the globe, the Sydney bus system has stayed relatively consistent in patronage despite a rising city population and possible demand for a greater number of bus services. As of 2020, the network is five times the size it was in 1925, with more suburban and outer-Sydney buses travelling longer distances. However, with the rise of suburban sprawl, cars have become the dominant mode of transport for most residents, and trains have gained more patronage, becoming the leading public transport mode in 2023. Thus, the bus network has reached an equilibrium in its usage, or is under-utilised, as has been argued in recent media and government pieces debating the efficacy of the system. S-Curve Projection. Methodology. For the projection, the logistic S-curve is a sensible approach to assess the state of the bus network or any mode of transporting people or goods. The ability to project a peak, rather than an infinite exponential rise, gives it more realistic use scenarios. However, it has to be noted that this logistic curve does not account for lag, which is evident in this mode: it has not significantly decreased in use, but has also not found a resurgence or a second rise. The following equation sets the foundation for the predicted projection of the network versus the actual figures. 
formula_1 The b coefficient presents a challenge: it is the slope of the natural log of the ratio of the actual value to the difference between the actual value and the predicted saturation value. formula_2 For this data set, 114 yearly points are used, and b is the slope of the line fitted between them; the intercept is also calculated. This forms a straight line that can be compared against the real data. The R-squared value (RSQ) is modelled in Excel to evaluate how well the logistic S-curve is realised. The method remains an approximation, as it does not account for deviations or sudden falls or rises. Results. For this example, a total of 18 guesses were made for the possible peak, ranging from K = 330 to 500 million yearly patrons in increments of 10 million. The largest RSQ was chosen out of the 18 calculations. The range between the smallest and largest RSQ was 0.033967. This suggests that the maximum is not conclusive and could lie anywhere from 330 to 500 million without much difference. The best-fitting maximum, however, was 360 million patrons. The graph visualises the logistic S-curve and how it generally follows the trend of the rise and peak of the Sydney buses. The extrapolation to 360 million passengers, predicted to occur in 2027, is not shown on the graph. The model is also unable to account for the stagnation in the numbers, with the sharp climb during the 40s-70s not captured. This might be because of the WW II dip that slowed progress, with war efforts being the government's main capital investment, alongside restrictions and rations. The table of results is listed below:
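The grid-search procedure described above can be sketched in Python. The patronage series below is synthetic, generated from an exact logistic purely for illustration (it is not BITRE data); the procedure itself, linearising with the log-odds transform, fitting a straight line by ordinary least squares, and keeping the candidate peak K with the highest R-squared, follows the method in the text.

```python
import math

def fit_for_K(years, patrons, K):
    """Linearise ln(S/(K-S)) = b*t + c, fit by OLS, return (b, c, R^2)."""
    y = [math.log(s / (K - s)) for s in patrons]
    n = len(years)
    mx, my = sum(years) / n, sum(y) / n
    sxx = sum((t - mx) ** 2 for t in years)
    b = sum((t - mx) * (v - my) for t, v in zip(years, y)) / sxx
    c = my - b * mx
    ss_res = sum((v - (b * t + c)) ** 2 for t, v in zip(years, y))
    ss_tot = sum((v - my) ** 2 for v in y)
    return b, c, 1.0 - ss_res / ss_tot

# Synthetic yearly patronage (millions), generated from a logistic with
# peak 360, b = 0.05 and inflection year 1955 -- hypothetical stand-in data.
years = list(range(1900, 2015, 5))
patrons = [360 / (1 + math.exp(-0.05 * (t - 1955))) for t in years]

# Grid of 18 candidate peaks, K = 330..500 million in steps of 10.
best = None
for K in range(330, 510, 10):
    if max(patrons) >= K:      # log-odds undefined if data exceeds K
        continue
    b, c, r2 = fit_for_K(years, patrons, K)
    if best is None or r2 > best[1]:
        best = (K, r2, b, c)

K, r2, b, c = best
print(f"best K = {K} million, R^2 = {r2:.5f}")
```

Because the stand-in data come from a true logistic with peak 360, the search recovers that value; on real patronage data the R-squared spread across candidate peaks is small, as the article notes.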
Transportation Deployment Casebook/2024/Copenhagen Metro. Copenhagen Metro. The development of public transport infrastructure supporting cities such as Copenhagen has become critical in an ever-growing, globalised world, as a way to boost the success and efficiency of a city and increase its appeal on a global scale. The introduction of The Metro into the Copenhagen urban fabric from 2002 has contributed greatly to reducing car use within the main city centre and to a broader transition to public transport around the Greater Copenhagen region, fostering more sustainable movement into and around the city. The Greater Copenhagen region has around 2 million inhabitants, almost 0.7 million of whom live in the City of Copenhagen; the introduction of The Metro was thus vital to the development of the city and its surroundings. The Metro Technology. Metros are a form of rapid transit first developed in 1863 in London. They are characterised by high capacity, speed and a high level of safety, and are designed for short to medium distances, unlike traditional trains, which often travel longer distances within a regional rather than metropolitan scope. The Copenhagen Metro network, first opened in 2002, consists of four main lines, seen in "Figure 1", operating on a frequency-based timetable twenty-four hours a day. It was developed to counteract the rising ownership and usage of private cars and to improve the efficiency of the existing infrastructure into and within the main city centre, promoting higher public transport use. The areas The Metro covers promote land development and redevelopment, including changes in the urban use of neighbouring land. Typically, positive commercial growth follows the implementation of a metro system, allowing employment opportunities to present themselves along the metro corridor. 
Furthermore, residential developments often arise along the metro passage in the outer city, allowing new hubs to grow. Both types of growth are demonstrated in the development of Ørestad, towards the south of Copenhagen City, a still-growing area that has been steadily evolving due to its ease of accessibility to and from the city. Pre-Metro Context. Copenhagen and its surroundings underwent rapid development following a period of recession from the late 1980s to the early 1990s. This brought growth in the tertiary sector, more housing construction, and expansion in several industries including pharmaceuticals and information technology. As a result, the Danish government set in motion a number of large projects over the coming decades to further boost the economy, better connect Greater Copenhagen with the rest of the country (separated across three main islands: Zealand, Funen and Jutland, seen in "Figure 2"), and better connect Copenhagen on a metropolitan scale. This was done to bring the city, and as a result the country, onto a more global scale, on par with other major European cities which had already undergone similar transformations. Private Car Ownership. The 1950s and 1960s saw an increase in the ownership and usage of private cars, at a time when the Danish government was planning the public transport systems of Copenhagen and the surrounding municipalities. The accessibility and use of private cars during this time caused some traditional companies and public institutions to move away from the growing city centre into the suburbs between the 1970s and 1990s. Open greenfield space, ease of access by private car and cheaper sites drew companies out of the main city, further pushing the private car agenda during this period. 
To counteract the growing popularity of private cars, the Road Agency (now the Danish Road Traffic Authority) was established to investigate the design and construction of a more holistic and comprehensive national highway system. This was necessary as the capital city, Copenhagen, is situated on the easternmost island, Zealand, with ferries across the Great Belt connecting it to the middle island, Funen, and to Jutland in the north. The 1990s finally saw the construction of the Great Belt Link, a bridge opened in 1998 connecting the main islands. This provided a better and more efficient connection into the capital city, further supporting the economic growth and development of the nation. The Five Finger Plan. The Five Finger Plan, initially developed in 1948 and further implemented in 1974, was the main regional planning principle in Greater Copenhagen, driving the development of the existing regional rail network (S-trains) prior to the Metro. Railway lines were built stretching out in five directions from the main city to control urban sprawl, while leaving space for recreational and green areas between the five 'fingers', seen in "Figure 3." The 1989 Regional Plan for Greater Copenhagen refined the initial Five Finger Plan, with development of planned urban centre hubs outside the main city. There was also an emphasis on new offices being situated within walking distance of a train station along the 'fingers.' Development of The Metro. Copenhagen's metro system was developed alongside the national highway system, and after the implementation of the Five Finger Plan, to further reduce the use of private cars in and around the city by promoting fast, efficient public transport. It built on and connected to the existing regional rail and bus network. 
Construction of Phase 1 of the metro began in 1994, and it opened successfully in 2002, with Phases 2, 3 and 4 opening in subsequent years. The Metro met its intended purpose of promoting public transport use and, as a result, reducing the use of private vehicles, particularly within the city centre. This was accomplished through a combination of a positive economic upturn and governmental decisions that were followed through from initiation to completion. The Copenhagen Metro. The Copenhagen Metro is the city's most used form of public transport, carrying up to two million passengers a week on its driverless train network. Four lines are currently in use, seen in Figure 3, with an extension of the M4 currently under construction. Three lines run from one terminal to another, while the M3 forms the most recently constructed City Ring. The network has a twin-tunnel track length of 43 km between a total of over forty stations, eight of which link directly to the S-train network. Services run based on frequency rather than specific times, all through the day and evening. During rush hour, trains arrive at intervals of two to four minutes, and frequencies vary with the time of day. Each train consists of three cars, fully accessible from one end to the other, with a capacity of 300 people, 100 of whom can be seated. The shift away from private cars, buses and trains to the metro took a while to pick up traction, as the metro lines were phased into use over a period of a decade, with Phase 1 opening in 2002, Phase 2 in 2004 and Phase 3 in 2007. One framework that can be used to investigate the implementation and survival of a new niche technology (The Metro) in an already established environment dissects three key dynamics: shielding, nurturing and empowerment. 
This framework was coined by Rob Raven, Professor of Sustainability Transitions and Research Director at Monash University, Melbourne, and Adrian Smith, Professor at the Department of Management at the University of Sussex, Brighton. Shielding. Shielding is the protection of new initiatives against unfavourable markets and environments. The implementation of restrictive and expensive parking policies, coupled with high taxes on car ownership of up to 150% of the cost of the car, has helped shield The Metro against the upkeep and potential increase of private automobile usage. Taxes on cars, initially introduced in 1910 under the 'luxury goods' category, increased steadily. In 1934, there were approximately 34 cars for every 1000 people in Denmark, on par with neighbouring European countries such as Germany. By 2016, this number was 200 cars per 1000 people, compared with 900 per 1000 in the United States. The heavy taxes have discouraged Danes from purchasing cars and encouraged them to use the public transport system in combination with cycling. It is interesting to note that the lack of a national automotive industry, and thus of major industry backlash, allowed for the high taxation of cars; in contrast, the presence of such industries in countries like the United States and China is one of the key reasons taxes placed on cars there are relatively lower. Thus, increased car taxes along with restrictive parking policies have helped shield The Metro against unfavourable markets and keep car ownership and usage at a minimum, promoting public transport and cycling. Nurturing. Nurturing comprises the adjacent activities supporting a new initiative to allow its future survival with minimal support. Urban squares were developed around the Metro stations to create clear, safe and approachable access into each station. 
The squares around the station entrances are supported by bike racks and clear wayfinding to the station. The numerous racks allow users to reach the station quickly and efficiently by bike, leaving it at the station or taking it onto the metro via the lifts. Half of all car trips in Europe cover a distance of under 5 km, with 30% under 3 km. These are trips that can take between fifteen and thirty minutes by bike. With infrastructure supporting bike access to and from the metro, these trip times can be cut even shorter and made more sustainably, rather than driving a short distance in a private automobile. The provision of bicycle parking and fast, efficient lifts into the station from above ground further nurtures the use and survival of the metro. Furthermore, the comfort and size of the S-train and bus fleets were improved to meet the demand from more people riding the train into the city to transfer onto the metro and reach their final destination. Empowerment. Empowerment refers to intervening in the environment to help create favourable conditions for a new niche technology. Empowerment exists in two forms, the first of which is fitting and conforming the technology to the existing market and environment. One way in which the metro was fitted to the existing market was the design of four 'flex zones' within each train, where seats can be flipped up to provide more space. These zones accommodate not only prams but also bicycles, should there be a need to take one from one station to another. This is significant given the prolific ownership and use of bicycles throughout Copenhagen and beyond. Empowerment can also take the form of fitting and conforming the existing environment, by supporting the newly developed technology and discouraging existing forms of technology. 
This was done by changing bus supply and schedules to reduce competition with the metro, fostering uptake of the metro system. Movia, the primary transit agency for buses in Copenhagen, predicted that the opening of the 2019 City Ring would drop the share of bus transit users from 47% to 34%. Furthermore, the A-line bus network was established as an access and egress method to and from the metro stations, transferring people to their final destinations more efficiently. Like the metro timetable, these buses operate based on frequency rather than specific times. Thus, a combination of these interventions in the existing environment helps empower favourable conditions for The Metro into the future. Data and Life Cycle Analysis. Quantitative Analysis of Annual Passengers. To analyse the predicted life cycle of the Copenhagen Metro, an S-curve was produced using data on annual rider numbers from Statistics Denmark. Although the metro has been operational since 2002, public data from Statistics Denmark is only available from 2006. The S-curve was found through the equation below, where S(t) is the status measure, S_Max is the saturation status level, t is time in years, t_0 is the inflection time (when half of the maximum level is achieved) and b is a coefficient to be estimated. formula_1 A linear regression model was run in Excel, producing the values seen in Table 1. "Figure 4" illustrates the observed annual passenger count from 2006-2023 and the predicted annual passenger count in subsequent years, up to 2050, covering the Birthing, Growth and Maturity of the system. Table 2 details the observed annual passenger count, in thousands, against the predicted annual passenger count. Analysis of Actual vs Predicted Passenger Count. 
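As a sketch of the three-parameter model just described, the function below evaluates S(t) = S_Max / (1 + exp(-b(t - t_0))). The parameter values used here are placeholders, not the fitted Table 1 estimates (which are not reproduced in this text); they only illustrate the curve's shape and the defining property that S(t_0) is exactly half of S_Max.

```python
import math

def s_curve(t, s_max, b, t0):
    """Three-parameter logistic: S(t) = S_Max / (1 + exp(-b(t - t0)))."""
    return s_max / (1.0 + math.exp(-b * (t - t0)))

# Placeholder parameters (NOT the Table 1 values): a hypothetical
# saturation of 130 million annual passengers, inflection in 2040.
S_MAX, B, T0 = 130_000_000, 0.08, 2040

for year in (2006, 2023, 2040, 2070):
    print(year, f"{s_curve(year, S_MAX, B, T0):,.0f}")
```

At t = t_0 the exponential term equals 1, so the curve returns S_Max / 2; that is why t_0 is read off as the year when half the saturation level is reached.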
Steady growth in metro passengers can be seen in both the actual and predicted values from 2006 to 2018. The actual value jumps up in 2019 before falling rapidly in 2020 due to the Covid-19 pandemic, which reduced ridership through partial lockdowns and other measures to suppress the spread of the virus, despite the City Ring having just opened in 2019. The numbers quickly recovered in the following years and have been on a steady increase, with current values almost on par with the predicted model. "Figure 5" provides an estimated model up to 2081, showing a clearer delineation of the Birthing, Growth and Maturity phases. The relative youth of the Copenhagen Metro offers less data to investigate than more established systems such as London and Paris, so the life cycle prediction is based on only twenty years' worth of data. This limits the analysis, as a more accurate prediction cannot be drawn from the available data. Nevertheless, a rough life cycle can still be extrapolated. The Birthing period ran from 2002 to about 2010; the Growth period runs from 2010 to about 2070, during which passengers increase at a steadier rate before the curve's rate of increase slows, with Maturity from 2070 onwards. Adoption and Growth of The Metro. "Figure 6" illustrates the growth in employment in metro-served areas of the Greater Copenhagen region, with the most significant growth along the M1 line from Vanløse to Ørestad. Ørestad was a developing area that in the late 1900s had become a geographical focal point which the government planned to grow into another strong hub. The first office buildings were erected in Ørestad in 2001, and by 2016 there were 17,000 jobs in the area. 
This illustrates how the completion of the line in 2002 majorly boosted employment growth in this initially underdeveloped part of the outer city. The opening of the M2 through Christianshavn shows a similar positive trend of increasing employment growth after the construction of the Metro line. The public sector (the Danish Government and local municipalities) pushed the growth of the metro more so than the private sector. This is clear from the initial strategic planning of the City of Copenhagen and Greater Copenhagen back in 1948, during the drafting of the Finger Plan. The introduction of the metro led more people to use the public transport system, which was designed to connect smoothly with the existing S-train infrastructure. Research conducted in 2005 on the transport impacts of the Copenhagen Metro by Goran Vuk of the Danish Transport Research Institute revealed a number of findings covering the period from 2002 to 2003, shortly after Phase 1 opened. The population of the City of Copenhagen has risen by 200,000 residents from 1990 to 2022, with a further 100,000 residents projected by 2035. As such, the further development of the metro system is crucial to meeting rising demand. The extension of the M4 line by a further five stations towards Sydhavn, set to open in 2024, illustrates the continual growth of the system. Additional extensions of the metro lines, combined with further government policy making, will help create a better connected, more well-rounded infrastructure system in Copenhagen City and the Greater Region, allowing positive, sustainable growth into the future.
Transportation Deployment Casebook/2024/Electric Vehicles. Technology. Electric vehicles (EVs) are vehicles that use electrical energy to drive their wheels. Their main parts include the battery pack, electric motor, power electronic controller, onboard charger and DC/DC converter, among others. An EV works by converting electrical energy stored in batteries into mechanical energy at the wheels via a motor. The power electronic controller regulates electrical energy for optimal motor performance. The battery, typically lithium-ion, is one of the bulkiest parts of an EV. The onboard charger converts the alternating current supplied by the grid at homes or charging stations into direct current that can be stored in the batteries. The DC/DC converter steps the high battery voltage down to the low voltage needed by parts such as lights and dashboard displays. Electric vehicles are cleaner and quieter and have low running costs. One notable feature is regenerative braking: when brakes are applied in an internal combustion engine vehicle, the motion energy is dissipated as heat, while in an EV part of this motion energy is converted back into electrical energy and stored in the batteries. EVs are a booming technology in its growth phase: some countries have formulated policies for wide-scale promotion and adoption of zero-emission vehicles, while others are at an incipient stage. Context. The mode of travel has changed over time from walking, horse-drawn carts, steamboats, railroads, trams and buses to high-speed rail and metros. Around one quarter of global emissions come from the transport sector, and road transport accounts for 75% of these. Internal combustion engine (ICE) vehicles emit greenhouse gases (GHG) that have contributed to global warming and climate change around the world. 
Understanding the implications of these emissions, measures were taken to counteract climate change, such as the Paris Agreement of 2015. It is an international treaty to hold the increase in global temperature to well below 2°C above pre-industrial levels, and preferably limit it to 1.5°C. 194 states and the European Union have either ratified or acceded to the agreement, together accounting for 98% of greenhouse gas emissions. Fossil fuel reserves are concentrated in certain nations and are scarce, and other countries spend large amounts of their budgets on imports. All these factors have opened the path to alternative energy sources, and EVs are seen as one possible solution. Today, many countries, including China, Canada, the United States, the United Kingdom, Australia, Denmark, France, Korea, India, Thailand and Indonesia, have formulated policies, strategies and programs for EV uptake, such as grants, reduced import duties, reduced registration taxes, market-share targets, charging infrastructure construction and battery manufacturing. These worldwide initiatives strongly suggest that EVs are on track to become the next generation of transport. Market Development: History and Current Scenario. The Hungarian Benedictine monk Ányos Jedlik built the first simple DC electrical machine in 1827, which he later used to move a small-scale car model. The first full-scale electricity-driven carriage is credited to Robert Anderson; it used non-rechargeable batteries and could move at a speed of 12 km/h. Electric vehicles were invented as early as the mid-1830s in the USA, UK and the Netherlands, and one third of vehicles in the United States in the early 1900s were EVs. Steam vehicles required a large container of water that had to be heated to produce steam for motion; the heating process sometimes took as long as 45 minutes in winter, and the water needed refilling, making such vehicles unsuitable for long journeys. 
Early gasoline vehicles used a hand crank to start, needed manual gear changes that were difficult to operate, and produced noise and smell. These demerits of steam and gasoline cars were absent in electric cars, making them popular for short-distance travel within cities, especially among women. Wider electricity availability by 1910 made it easier to charge EVs, and their popularity grew. Ferdinand Porsche developed an electric car called the P1 in 1898 and created the world's first hybrid car, which could run on electricity as well as gasoline. Thomas Edison was also impressed by electric vehicle technology and worked to make more efficient batteries; in 1914 he collaborated with his friend Henry Ford to make affordable electric cars. However, the introduction in 1908 of the Model T, a gasoline car by Henry Ford, was a major setback for electric vehicles. Mass production made the Model T easily available and cheap; an electric vehicle was around $1000 costlier than a gasoline one in 1912. The development of the electric starter by Charles Kettering eliminated the hand crank in gasoline vehicles. The discovery of crude oil in the US reduced fuel costs, gasoline stations were within easy reach, and electricity was out of reach for rural Americans. All these factors led to the disappearance of EVs by 1935. High gasoline prices and scarcity in the early 1970s compelled nations to search for other sources or technologies in transportation. Electric delivery jeeps produced in 1975 in the United States were used in a postal service test program, and NASA's electric Lunar Rover, which drove on the Moon's surface in 1971, raised the profile of electric vehicle technology. However, electric vehicles' lower performance, with a top speed of 45 miles/h and a single-charge range limited to 40 miles, could not compete with the available gasoline vehicles. 
Climate change and fuel scarcity have now attracted the public and private sectors to improve and adopt electric vehicles. However, high upfront costs, lack of charging infrastructure, range anxiety, few models, and limited availability of parts and service stations, along with consumer awareness and psychology, are factors that play a major role in mass acceptance. Increasing battery efficiency, making batteries compact enough for long ranges, and initiatives by governments around the world, such as reducing upfront costs and building charging infrastructure, will certainly increase EVs' acceptance as a transport mode. EVs in Australia. In 1899, Henry Sutton built Australia's first electric vehicle, a three-wheeler with a range of 40 km and a top speed of 16 km/h. In the early twentieth century, electric taxi services were introduced in Melbourne and Sydney, and passengers favoured them for their smooth, silent ride. Similarly, electric delivery vans for perishable goods, electric buses and trams were used in urban centres. However, deficient infrastructure, low range and the cheap availability of gasoline led to their decline. Australia targets reducing its greenhouse gas emissions to 43% below the 2005 level and achieving net zero emissions by 2050. One of the solutions is shifting to electric vehicles, and to this end Australia has produced its National Electric Vehicle Strategy 2023. It states that 19% of Australia's emissions come from the transport sector, which is expected to be the largest contributor to emissions by 2030. The country has large renewable energy potential that can be used to power EVs, and its large reserves of lithium and other minerals such as copper, nickel and magnesium open the possibility of battery manufacturing. 
The same strategy highlights that the government will strive to offer more affordable varieties through measures such as incentives, registration rebates and second-hand EVs from the public fleet. To assist long-distance travel, the government will set up EV chargers on major highways at roughly 150 km intervals. State governments have also formulated strategies, such as the NSW Electric Vehicle Strategy (2021) and the State Electric Vehicle Strategy for Western Australia (2020), stating plans and programs for EV growth. The driving range of EVs has increased from about 139 km in 2011 to about 350 km in 2021, with some vehicles reaching 550 km on a single charge. On average, an Australian drives 38 km/day, which the EV models available in the market can comfortably cover. As per the State of Electric Vehicles report (July 2023), 8.4% of car sales up to the time of publication in 2023 were electric vehicles, a more than 120% increase in sales compared with 2022. The highest-selling models are the Tesla Model Y, Tesla Model 3 and BYD Atto 3, together accounting for 68% of sales. There are currently 91 electric vehicle models available in the Australian market, of which 74 are cars, 7 are utes and 10 are vans. The report also highlights 967 high-power public chargers, comprising 438 fast-charging locations and more than 120 ultra-fast charging locations, distributed across 558 sites around the country. It puts the yearly fuel cost for a gasoline car at $2400, against $400 of electricity for an EV, and notes that electric vehicles are 77% efficient compared with 25% for ICE vehicles. New technologies, long-run cost savings, upfront-cost grants and more charging infrastructure will help shift consumer preference towards EVs in Australia. Policy Initiatives and Growth. Policy initiatives such as purchase subsidies and registration rebates are measures to close the cost gap between EVs and internal combustion engine vehicles. 
Such measures have been in practice in Norway since the early 1990s, in the US since 2008 and in China since 2014. Similarly, tailpipe carbon-dioxide emission standards adopted by the European Union and California's Zero Emission Vehicle mandate have facilitated the use of EVs. To discourage the use of petrol and diesel, Norway plans to stop sales of vehicles using such fuels by 2025, the UK by 2030 and Japan by the mid-2030s. The availability of charging stations is one of the major problems discouraging people from purchasing EVs. Germany aims to have 1 million charging points by the end of this decade. Hyundai will not sell internal combustion engine vehicles after 2040, and Ford will stop selling them in Europe by 2030. Countries like China, India, Indonesia and Morocco have formulated policies for EV battery or vehicle manufacturing. The National Electric Vehicle Strategy outlines that all states in Australia have formulated either an Electric Vehicle Strategy, a Zero Emission Vehicle Strategy or a Climate Action Plan to address climate change. Queensland will provide a fast-charging network at 54 locations along its electric superhighway. The NSW government will contribute $633 million through its electric vehicle strategy, supporting the production of EVs and providing incentives for local bodies to buy them. The Western Australian government has come up with a plan to invest $22.9 million to provide 100 charging stations. Life Cycle Analysis. A technology generally goes through phases of birth, growth, saturation and decline. Birth is the phase when the technology enters the market in a premature stage, showing low sales or adoption. As people become acquainted with the technology, there is a sharp rise in sales or adoption within a short period of time, represented by the growth phase. Factors like technological advancement, high performance and a conducive policy environment play a major role in quick adoption. Then comes the saturation phase, in which the maximum sales capacity is reached. 
Some technologies show a decline phase when a new, more advanced technology enters the market and replaces them. This birth, growth and saturation over time traces an S-curve when plotted. The life cycle of a technology can be well represented by the following formula: S(t) = Smax / [1 + exp(-b(t - ti))], where in this case: S(t) = electric vehicle sales at time t; t = time (years); Smax = maximum number of sales, or saturation level; b = coefficient to be estimated; ti = inflection time (the time at which 50% of saturation sales will be achieved). The value of b is estimated and used to find predicted sales with the above formula. The data on battery electric vehicle sales in Australia is taken from State of Electric Vehicles 2023. In 2011, battery electric vehicle sales were just 49 units, which slowly grew to 5,292 in 2019. In 2021 sales were 17,293, and they almost doubled by 2023 to 33,416. The graph shows that in the case of Australia the period from 2011 to 2020 can be taken as the birth phase of electric vehicles. Recorded sales in 2020 were slightly lower than in 2019; COVID-19 might be one of the causes of this low value. Sales after 2020 show a steep increase, growing almost six and a half times by 2022. EV sales at the end of 2023 were not available in State of Electric Vehicles 2023, but from the shape of the graph we can determine that EV sales are in their growth phase. Saturation of EV sales is only a matter of time. There were 21,168,462 motor vehicles registered in Australia at the end of January 2023. If we assume that all motor vehicles registered in Australia will be replaced by battery electric vehicles, then the inflection time ti will be 2035.05; that is, by that time half of the registered vehicles will be battery electric vehicles. The statistical parameters are: Smax = 21,168,462; b = 0.52887; ti = 2035.05.
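The S-curve model described above can be sketched in a few lines of Python. The parameter values are those reported in this section (Smax = 21,168,462; b = 0.52887; ti = 2035.05); the function name is my own.

```python
import math

# Fitted parameters reported in the text (assumed saturation level:
# all motor vehicles registered in Australia as of January 2023).
S_MAX = 21_168_462
B = 0.52887
T_I = 2035.05  # inflection year: half of saturation reached

def predicted_ev_sales(t):
    """Logistic S-curve: predicted battery EV fleet size in year t."""
    return S_MAX / (1 + math.exp(-B * (t - T_I)))

# Sanity check: at the inflection year the model gives exactly Smax / 2.
assert abs(predicted_ev_sales(T_I) - S_MAX / 2) < 1e-6
```

Evaluating the function for successive years reproduces the slow birth phase before 2020 and the steep growth phase that follows, approaching Smax asymptotically.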
Transportation Deployment Casebook/2024/Micromobility in Brisbane. Shared Micromobility in Brisbane. Introduction to Shared Micromobility. As a relatively new mode of transport, micromobility has various definitions. The US Federal Highway Administration defines micromobility as: "Small, low-speed, human or electric-powered transportation devices, including bicycles, scooters, electric-assist bicycles, electric scooters (e-scooters), and other small, lightweight, wheeled conveyances". While it has not provided a definition for micromobility, Brisbane City Council defines E-mobility (a significant portion of the micromobility market) as: "E-mobility... refers to a range of small lightweight devices operating at powered speeds of no more than 25km/h...". The key attributes of most definitions are that the devices are relatively small and lightweight, designed to carry a single person, and operate at relatively low speeds. Technological Characteristics. Most definitions of micromobility include both vehicles owned privately by individuals and shared vehicles. In this analysis we look at the specific case of shared micromobility, which is characterised as being publicly accessible (though not necessarily publicly owned). The initial deployment of shared micromobility in the Brisbane context, and more broadly, was in the form of shared bikes with a fixed docking point. More recent developments have introduced electric-powered devices, such as e-scooters and e-bikes, which do not require a docking station. In the Brisbane context this has seen the deployment of shared e-scooters and e-bikes by private companies (Beam and Neuron) under a permitting system (competitive tender) with the Brisbane City Council. 
While shared micromobility in Brisbane has taken the form of conventional bikes, e-bikes and e-scooters, other technologies such as segways, electric skateboards and other devices fitting the above definition of micromobility may contribute to the growth of shared micromobility in the future. Main Advantages. The main advantage of micromobility over other forms of transport, from a user perspective, relates to its ability to fulfil relatively short journeys within urban areas. Public transport generally operates at larger scales, providing high-speed transport between distant locations, while automobiles can be constrained in urban environments by congestion and the need to find parking. In contrast, micromobility allows users to travel at moderate speeds utilising active transport infrastructure. The introduction of dockless systems greatly improved the flexibility of the system by allowing users to start and finish trips at a wider array of locations. Shared micromobility has the following important advantages over privately owned micromobility: Main Markets. The main market for shared micromobility is in relatively dense areas with safe active transport infrastructure. Generally, these conditions are most likely to be found in central business districts, tourism-focussed precincts and the immediate areas around public transport. The Brisbane CityPlan 2014 sets out the strategic direction for the future growth of Brisbane and identifies a goal to accommodate future population growth through infill development. As this occurs, a greater proportion of Brisbane residents will be within the catchment area of existing commercial and tourism areas, and in closer proximity to public transport stations. This suggests that as Brisbane develops, the market for micromobility services will grow. Shared Micromobility in Brisbane. Brisbane City Council rolled out its docked shared bicycle service (the CityCycle program) in October 2010. 
The initial system had a series of bicycle docking stations within the Brisbane CBD, primarily along active transport corridors adjacent to the Brisbane River. Users paid through a membership model (either a one-day membership or daily/weekly/monthly memberships), which allowed the bikes to be accessed. Usage was then charged based on the time the bike was away from a dock, and the bike could be returned to any dock. In December 2018, electric scooters (e-scooters) were first trialled in Brisbane, with a system operated by the private company Lime. In 2019 Brisbane City Council undertook a competitive tender for e-scooter operators, ultimately awarding contracts to two private operators (Lime and Neuron). The e-scooters operate as a dockless system in which users subscribe to a mobile phone application, through which they can locate and pay to activate the scooters; when they finish their journey, they sign off in the app. The scheme was limited to a total of 1,000 e-scooters. In 2021 Brisbane City Council discontinued the CityCycle scheme, replacing it with a shared electric bike (e-bike) service operated by the private companies Beam and Neuron under a contract with the local government. Invention and Innovation. Prior to Shared Micromobility. The conventional bicycle is generally considered to have been developed in the 1850s and 1860s in France. The velocipede, and subsequent improvements on the mode, provided users with a new way to navigate relatively short distances within towns and cities. Prior to the advent of the personal automobile, bicycles represented a significant mode share within towns and cities. As automobile use grew and infrastructure and land-use patterns became increasingly car-oriented and less dense, the mode share of bicycles declined. 
In the 21st century, public and active transport has seen increasing investment as governments try to address the negative externalities of car-oriented development. A barrier to increasing the utilisation of public transport systems and inducing a modal shift away from private automobiles relates to the first and last kilometres of a trip. People are generally willing to walk up to around 400m to a destination. As a result, people living more than 400m from a public transport station, or travelling to a destination more than 400m from one, are less likely to walk. Without an alternative such as micromobility, these people are likely to either drive to a public transport stop or skip public transport altogether. In low-density cities such as Brisbane, this results in a relatively small number of people who are likely to utilise the public transport system. While privately owned micromobility (such as the conventional bicycle) has been around since the 19th century, utilising it as part of a multi-modal trip requires consideration of where to store the vehicle, which can be a barrier to use. Invention of Shared Micromobility Services. The first deployment of a shared micromobility service in the global context is generally considered to be the "White Bike Program" in Amsterdam. The project provided free bicycles to the public, without any associated security features. The next generation of shared bikes, first deployed in Denmark in 1991, incorporated docking stations which locked the bikes until money was deposited to allow usage. The model developed further in France in 2007, incorporating communications technology which allowed for greater monitoring of the system and a more financially sustainable service. First Wave of Shared Micromobility in Brisbane (2010 to 2019). 
In Brisbane, the first deployment of a shared micromobility service was Brisbane City Council's docked shared bike scheme (CityCycle). The following technological building blocks contributed to the design of the CityCycle service: A significant issue encountered with the initial roll-out of the system was that no helmets were available at the docking stations, while Queensland law requires that users of micromobility devices wear a helmet. Later the CityCycle system provided helmets at the docking locations, but in practice they were not always present. This meant users either rode without a helmet or needed to bring their own (a major barrier to usage). Second Wave of Shared Micromobility in Brisbane (2019 to present). The second wave of micromobility in Brisbane incorporated small electric motors and dockless deployment systems. In 2019 e-scooters began to be deployed, closely followed by e-bikes, both operated by the private companies Beam and Neuron. The following technological building blocks contributed to the design of the e-scooter and e-bike services: The Role of Policy. The initial implementation of shared micromobility in Brisbane utilised the conventional bicycle. As a result, it was largely able to function within the existing policy framework for bicycles: the road rules had been developed to include requirements for the use of bicycles, and the shared bike system utilised existing infrastructure such as shared paths, separated cycle paths and on-street cycling lanes. An early impediment to the adoption of shared bikes was the legal requirement to wear a helmet when riding a bike. Initially the docking stations did not include helmets, so users were required to bring their own. This reduced the desirability of the service for unplanned trips and resulted in people utilising the service without helmets. 
Brisbane City Council played a major role in the deployment of shared micromobility in Brisbane by owning and operating the CityCycle service, and by investing in active transport infrastructure throughout the deployment area. The introduction of e-scooters and e-bikes in Brisbane presented significant challenges to the existing transport policy framework. E-scooters are distinct in design from bikes and so were less able to fit within the existing framework. E-scooters can operate at speeds significantly greater than pedestrians, creating a sense of danger if they are allowed on footpaths; however, like bicycles, their unenclosed and small design creates significant safety concerns if they are required to operate on roads. To manage the deployment of e-scooters, Brisbane City Council has constrained the roll-out of vehicles by imposing caps on the service operators. Brisbane City Council has also developed the Brisbane E-mobility Strategy to manage the growth of shared micromobility. The strategy identifies that the accelerating growth of e-mobility within Brisbane has to date been largely driven by the commercial considerations of the operators, which has potential implications for equitable transport access that will need to be considered in the future. In response to the continuing growth of shared micromobility services, the Queensland Government in 2022 introduced new laws imposing requirements on the use of 'personal mobility devices' (PMDs). This included setting a speed limit of 12km/h for PMDs on all footpaths and shared paths. The legal changes included an option for councils to apply to increase speed limits up to 25km/h on shared pathways where certain criteria are met. Growth of Shared Micromobility in Brisbane. The first phase of shared micromobility in Brisbane saw relatively slow growth of the CityCycle docked share bike scheme, commencing in 2010 and reaching a peak in 2018 of around 198,650 trips per quarter (approximately 2,200 trips per day). 
The introduction of e-scooters in 2019 saw a decline in the usage of CityCycle, further exacerbated by lockdowns during the COVID-19 pandemic; Brisbane City Council announced the closure of CityCycle in November 2020. The introduction of shared e-scooters in 2019 and shared e-bikes in 2021 saw a rapid increase in the growth of shared micromobility in Brisbane. Opportunities for Further Growth. Strategic planning undertaken by Brisbane City Council, including the CityPlan 2014 and the Transport Plan for Brisbane, suggests that future land-use planning and infrastructure investment will focus on increasing population density and encouraging a mode shift from private automobiles towards public and active transport. The result would be improved viability of micromobility for transport. The growth of Mobility as a Service (MaaS) is also likely to benefit the adoption of shared micromobility. Mobility as a Service provides a single interaction point for consumers, allowing them to purchase trips between destinations utilising multiple modes of transport. Micromobility could play a significant role in this approach by fulfilling the first and last kilometres of a multimodal trip initiated through a MaaS provider. In addition, more sophisticated pricing systems have the potential to improve utilisation of existing systems, for example by adjusting pricing based on demand to incentivise off-peak usage. The vehicles themselves are also likely to contribute to the future growth of micromobility: improvements in safety features or the development of more diverse forms of micromobility are likely to improve adoption of the mode. Quantitative Analysis of the Lifecycle of Shared Micromobility in Brisbane. Logistic Curve Regression Analysis. Quantitative analysis has been undertaken using regression analysis, assuming that the lifecycle of shared micromobility follows a logistic curve. 
As the deployment of shared micromobility is still in the early stages of its lifecycle, it is difficult to establish a maximum market size for the technology with a high level of confidence. In order to analyse the development of the technology, the following assumptions have been made: Data for the model has been obtained from the following sources: Accuracy of the Model. The R2 value of the regression analysis is 0.962, suggesting a relatively strong fit between the model and shared micromobility utilisation to date; however, given that the technology is in such an early phase, it is difficult to estimate a maximum market size. Predicted Lifecycle of Shared Micromobility in Brisbane. Birthing Phase (2010 to present). The birthing phase of shared micromobility in Brisbane commenced with the introduction of the CityCycle docked shared bike scheme in 2010. The rate of growth of the mode was relatively slow during this period, prior to the introduction of the alternatives of e-scooters and e-bikes, after which utilisation of shared micromobility began to grow rapidly. The end of the birthing period of the lifecycle is generally defined as 15% of the ultimate maximum. Given that micromobility is early in its development, it is difficult to estimate an ultimate maximum utilisation, and therefore defining the end of the birthing phase is also difficult. Based on the maximum utilisation assumed for this analysis, the technology is still in the birthing phase; a more conservative assumption, however, would conclude that it is in the early growth phase. Growth Phase. Based on the maximum utilisation assumption, the model suggests that half of the maximum utilisation would occur in the year 2038 (the time of inflection). Using 15% and 85% to define the limits of the growth phase suggests that the growth period will occur between 142,500 trips per day and 807,500 trips per day. 
(Note: current maximum usage is approximately 10,303 trips per day.) Mature Phase. The analysis suggests that shared micromobility will reach maturity above 807,500 trips per day, based on an estimated peak usage of 1,280,000 trips per day. Regression Analysis. The following values were utilised in the regression analysis:
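As a sketch of how such a logistic regression can be carried out, the curve can be linearised and fitted with ordinary least squares. The assumed maximum of 1,280,000 trips per day is the one used in this analysis; the daily-trip figures below are purely illustrative placeholders, not the actual Brisbane data, and the function name is my own.

```python
import math

S_MAX = 1_280_000  # assumed peak usage (trips per day) from this analysis

def fit_logistic(years, trips, s_max):
    """Fit S(t) = s_max / (1 + exp(-b (t - ti))) by linearising:
    ln(s_max / S - 1) = -b*t + b*ti, then ordinary least squares."""
    ys = [math.log(s_max / s - 1) for s in trips]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
        sum((x - mx) ** 2 for x in years)
    intercept = my - slope * mx
    b = -slope            # growth-rate coefficient
    ti = intercept / b    # inflection year
    return b, ti

# Hypothetical daily-trip figures for illustration only.
years = [2010, 2013, 2016, 2019, 2022]
trips = [500, 1200, 2200, 6000, 10300]
b, ti = fit_logistic(years, trips, S_MAX)
```

Because current usage is far below the assumed maximum, the fitted inflection year lands well in the future, consistent with the conclusion that the technology is still in its birthing or early growth phase.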
Radiation Oncology/System Down. System Down or Network Down due to Natural Disasters or Cyberattacks
Kitchen Remodel/Outcome. What is still left to do. On completion of this book, my kitchen project is not quite finished yet. One of the important things I have found confirmed during the lengthy process of this remodel is that slow progress may be a nuisance, but it also bears great opportunities to think things over and to become creative to a degree that you otherwise wouldn't. In my case, this includes a number of action items that we have not completed yet:
Transportation Deployment Casebook/2024/Monorails. Introduction. Monorails are railway systems in which the track used to move the carriage is a single, continuous line. They work similarly to traditional heavy rail or tram systems, transporting a carriage along a predetermined track, but they can be elevated above existing infrastructure and take up comparatively less space, making them better suited for transport across populated areas such as cities. As such, they have a few desirable qualities as a public transport system: as an elevated network they are separate from the road network, which would naturally be taken up by cars and buses, and they provide an elevated view of the local area, which is attractive for tourists visiting the city. The history of the monorail is hard to classify precisely, as the term has been used interchangeably for various different transportation methods. As the name suggests, the term was often used to describe railway systems where only a single main beam was used; historic designs resembled bicycles balancing on a railway line, whereas modern designs involve a carriage that straddles a double-flanged metal beam for support. The two most common modern designs differ in where the carriage sits in relation to the rail that guides the wheels, either riding above it or suspended below it. Modern designs are powered by electricity obtained from the rail they ride on, although older designs used steam, coal or even horses for power. Monorails are adaptable in their application, ranging from smaller operations such as moving passengers in a theme park or airport to larger networks which sometimes span smaller cities. Before the Monorail. Prior to the invention of the monorail, traditional dual-rail trains were used in their place. 
Their initial purpose was as a cheaper way to transport cargo across land in newly settled cities, moving resources from ports deeper inland or moving products from mines back to cities or ports for trade. As the monorail required only one beam, it was often considered a cheaper alternative, although potentially less consistent. Another advantage was that, compared to traditional, bulkier train carriages, monorails were slightly easier to build on mountainous terrain, as they were thinner and required less space, making them more suitable for shorter-distance transportation of goods across mountainous, uncharted terrain as young cities looked to expand further inland. Early Designs and the History of the Monorail. The Cheshunt Railway is commonly cited as the first instance of a passenger-carrying monorail. The design of the carriage was based on the 1821 patent by Henry Robinson Palmer, who theorized a carriage that balanced itself on a single elevated rail line. The carriages would hang off either side of the center rail such that the center of gravity was below the rail. For many years monorail designs were powered by horses; monorails did not use steam engines until 1876, with General Le-Roy Stone's design at the Philadelphia Centennial. During their early history, monorails were not often geared towards moving passengers, although some were still used for passenger transport. A famous example is the Listowel and Ballybunion Railway, which ran from the Irish port town of Ballybunion to the inland market town of Listowel. The carriage design was similar to Palmer's, in that two carriages had to be equally balanced on either side for the system to move. Despite this flaw, it remained successful for many years and was the only passenger monorail service in the British Isles for most of its service from 1888 to 1924, until it was decommissioned due to increasing service costs and competition from road networks. 
The oldest monorail still in commission was built in 1901 in Germany by Eugen Langen: the Schwebebahn suspension monorail in the city of Wuppertal. As a suspension railway, it bears some resemblance to the Enos Electric Railway demonstrated over a decade prior, though no direct connection has been confirmed. Technological Advancements for the Monorail. Up until the early 1900s, the generic design of the monorail did not stray far from Palmer's original design, which was effectively a balanced bicycle on a single rail. Past this point, however, some started to innovate on the idea, creating the straddle-type monorail, which modern monorails still use today. The straddle-type differs from Palmer's original design: rather than a single wheel riding on a rail, there are multiple wheels that all move along an I-shaped beam, effectively "straddling" the center beam, greatly improving safety and reliability compared to the original designs. A third concept for a monorail was created and later patented by Louis Brennan in 1903: a monorail that stayed upright through the use of a gyroscope. While the design was proven successful (even if all the passengers sat to one side, the carriage was able to remain upright), it never got past the testing and prototyping stage due to concerns over the requirement of a functioning gyroscope: if it were ever to fail, the entire carriage and all its passengers would fall over, and so Brennan's design was never used for actual transportation purposes. Limitations of the Monorail. Despite their advantages in cost reduction, monorails were not widely adopted in their early history, and even now they are relatively unpopular compared to heavy rail or trams. 
Due to their relative unreliability, operators often reverted to classic dual-rail systems for longer distances; in fact, modern monorail systems do not exceed 100 kilometers in length, as beyond that point conventional rail systems are simply preferable, being generally faster due to their stability. In the modern era, three main issues affect the widespread adoption of the monorail as a primary mode of transport. The main one is cost: because the monorail requires an elevated network to function, it is expensive to build long connections above the road network, massively inflating the cost compared to maintaining an existing road or constructing a tramline on the ground. Because of how the monorail carriage is connected to its beam, it often requires specialized trackwork separate from existing systems, which increases the cost substantially. The second issue is that of track crossing: the straddle-type monorail used in modern cities does not allow for easy ways to change lines, as the carriages must always remain in contact with the rail. Dedicated areas for monorails to change lines are possible, but switching is a slow process and can be expensive to construct and maintain. Finally, the general urban plan of modern Western cities stifles the effectiveness of the monorail. Monorails are often intended for short-distance journeys, riding above the traffic of a road network; however, Western cities often plan to expand outwards into suburbs, where a road network and cars are far more efficient. Currently, the main areas where monorails are still employed in cross-city transport are Asian megacities such as Chongqing, China; Mumbai, India; and Tokyo, Japan. These cities use monorails because they are much denser than Western cities, and to accommodate a larger population in a much smaller area they often build vertically, which fits well with the monorail's need for elevated structures. 
Therefore, dense megacities are one of the few places around the world where monorails still see regular use, with the larger cities reaching hundreds of thousands of daily riders. Visualization of Monorail Ridership. Due to the lack of widespread adoption of the monorail as a viable solution, data about its historical and predicted ridership is scarce. The following data uses the yearly ridership statistics of the Kuala Lumpur Monorail from 2004 to 2023 to visualize the growth of the service. Methodology. The approximate life cycle of a technology can be roughly represented by the following formula: S(t) = Smax / [1 + exp(-b(t - t0))], where in this scenario: S(t) = expected annual riders at a given time t; t = time (years); Smax = an estimated maximum number of riders; b = estimated coefficient; t0 = inflection time, the approximate time at which half of Smax is achieved. The above graph was produced using the following estimates: Smax = 30,000,000; b = 0.194031; t0 = 2005.179. As seen in the graph and table, for the first few years of the tracked data the predicted trend was relatively close to the actual ridership; however, around 2017 ridership sharply fell, then cratered further in 2019 with the global COVID-19 pandemic. The cause of the 2017 drop is not confirmed, though local reporters attribute it to a spike in public transport fare prices at the time, with many citizens switching to cars and private transport instead. As the pandemic cleared up in late 2022 and 2023, ridership rose back towards normal levels and is approaching the estimated trend. Due to the sharp decreases in annual ridership, the curve has not yet reached a maturity point and eventual decline, although if no major loss of ridership occurs in the future, this will happen.
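The ridership model just described can be evaluated with a short Python sketch using the estimates quoted in this section (Smax = 30,000,000; b = 0.194031; t0 = 2005.179); the function name is my own.

```python
import math

# Parameter estimates quoted in the text for the Kuala Lumpur Monorail.
S_MAX = 30_000_000
B = 0.194031
T0 = 2005.179  # inflection time: half of S_MAX reached

def predicted_riders(year):
    """Expected annual riders under the fitted logistic curve."""
    return S_MAX / (1 + math.exp(-B * (year - T0)))

# The curve rises monotonically towards S_MAX over the tracked period.
for year in (2004, 2010, 2017, 2023):
    print(year, round(predicted_riders(year)))
```

Comparing these predictions against the actual yearly figures makes the 2017 fare-related drop and the pandemic-era crater stand out as departures from the fitted trend.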
A Guide to Sylheti Wikipedia/Getting Started. Navigating the Sylheti Wikipedia. Once you have your Wikimedia Incubator account, it's essential to become familiar with the Sylheti Wikipedia project's layout and navigation: 1. Main Page: The main page is where you'll find featured articles, community announcements, and links to various sections. 2. Navigation Menu: The navigation menu on the left-hand side provides quick access to important sections like "Recent Changes" (for tracking edits), "Current Events," and "Community Portal." 3. Search Bar: Use the search bar at the top of the page to find specific articles or topics you're interested in. 4. Tabs: Explore the tabs at the top of the page, including "Recent Changes" and "Random Article," to discover content and contributors. 5. Prefixes: The Wikipedia prefix (Wp) and the language code (syl) are used as prefixes to distinguish the project from other language editions. Example: Wp/syl/page name. Helpful resources.
A Guide to Sylheti Wikipedia/Contributing. References. Adding references and citations to the Sylheti Wikipedia, or any small wiki, is essential for maintaining the credibility and reliability of the content. While larger wikis might have dedicated gadgets and templates for citations, smaller wikis may not always have these tools readily available. Here's a simple way to add references using the <ref></ref> tags: Or, alternatively, you may use this template. By following these steps, you'll add references and citations to the Sylheti Wikipedia using the <ref></ref> tags, even if dedicated gadgets and templates are not readily available. This ensures that your content is verifiable and trustworthy. List of vocabulary. A list of vocabulary required for referencing in the Sylheti Wikipedia: You can use these terms and their Sylheti translations to create accurate references and citations in the Sylheti Wikipedia.
A Guide to Sylheti Wikipedia/Community and Collaboration. The Culture of Sylheti Wikipedia. The culture of the Sylheti Wikipedia project within the Wikimedia Incubator is a testament to the power of language preservation and the dedication of its contributors. In this digital space, Sylheti-speaking volunteers and language enthusiasts converge with a shared passion for nurturing their linguistic heritage. This culture emphasizes collaboration, as individuals from diverse backgrounds work together to create and expand the Sylheti Wikipedia content. It values the authenticity of Sylheti culture, history, and traditions, reflected in the articles and resources produced. The project's culture also embodies resilience, as it strives to overcome challenges such as a limited number of contributors and the need for more reliable sources in the Sylheti language. Ultimately, the Sylheti Wikipedia project in the Incubator is a vibrant hub where linguistic diversity, cultural preservation, and the spirit of collaboration converge to shape the future of Sylheti knowledge and identity within the Wikimedia ecosystem. The awarding of the Barnstar exemplifies the generosity and appreciation within our community. Recognizing and rewarding outstanding contributors not only motivates them but also sets a positive example for others. The Barnstar serves as a symbol of appreciation and gratitude for the remarkable work that takes place within our Wikimedia community. Content and Contributors. Sylheti Wikipedia in the Wikimedia Incubator encompasses a wide range of subjects, including history, culture, geography, language, and more. Contributors to this project consist of native Sylheti speakers, language enthusiasts, and individuals passionate about promoting Sylheti language and culture. The project adheres to the core principles of Wikipedia, emphasizing neutrality, verifiability, and reliance on reliable sources.
405
A Guide to Sylheti Wikipedia/Promotion and Outreach. Awareness. One of the core challenges facing Wikipedia's linguistic diversity is the content gap. This gap arises when articles on geographical and cultural topics exist in larger language editions but are absent in minoritized language editions. Marc Miquel presents potential languages that can easily become Wikipedias in the Celtic Knot Wikimedia Language Conference 2019 including Sylheti.
102
Electricity and magnetism/Magnetism. Magnets. Like electric forces, magnetic forces obey the principle of the attraction of opposites. Magnets always have two poles, a North Pole which is naturally attracted to the Earth's magnetic South Pole, and a South Pole, which is naturally attracted to the Earth's magnetic North Pole. In a compass, the North Pole of the magnetic needle indicates the geographic North Pole of the Earth. The geographic North and South are therefore reversed in relation to the magnetic North and South. When magnets love each other, they do this in a somewhat special way. The North Pole of one sticks to the South Pole of the other, as if magnetism had chosen the eroticism of 69. Unlike electric forces, isolated magnetic charges have never been found. Apparently Nature did not welcome magnetic monopoles. Magnets are always dipoles. If we cut a magnet in the middle between its two poles, we do not obtain two monopoles, but two new dipoles, two new magnets which each have two poles: It is explained by assuming that a magnetic material is composed of microscopic magnets all aligned in the same direction: Magnetostatics is the study of magnetic forces between stationary magnets. The calculation of magnetic forces is similar to that of electric forces, except that we reason on dipoles, and never on monopoles: Like the electric field formula_1 for electric charges, the magnetic field formula_2 is like a mathematical intermediary that is used to calculate the forces between magnetized materials. It is also used to calculate the forces exerted by magnets on moving charges, the forces exerted by moving charges on magnets, and the forces exerted between moving charges. But it is also much more than a mathematical intermediary, because it has an autonomous existence. Magnetic force is produced by electric currents. A magnet naturally orients itself perpendicular to an electric current. Its direction depends on the direction of the current. 
An electric current is therefore a source of magnetic force. This is a discovery made by Hans Christian Ørsted in 1820: If we cut the current, this magnetic force disappears: Ampère concluded that an electric current can behave like a magnet. This conclusion led him to the discovery of Ampère's law (1825): Two parallel electrical wires carrying a current attract each other if the currents go in the same direction and repel each other if the currents go in the opposite direction. The force between two electrical wires carrying a current is the magnetic force. It cannot be an electric force since the two wires are electrically neutral. The unit of measurement of electric current, the ampere, was defined from the magnetic force between two wires carrying a current: "The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2×10−7 newtons per metre of length." (International Bureau of Weights and Measures, 1948) Since 1 A = 1 C/s, the definition of the ampere also defines the Coulomb. When Ampère discovered his law, electrons were not known, so the ampere could not be defined as a current of electrons. Now that the electrons are known and their charge -q = -1.602 176 487 ×10−19 C has been precisely measured, we can define the ampere by the number of electrons it takes to make 1 C: 1 ampere = 6.241509074×1018 electrons per second, roughly six billion billion electrons every second. Current loops behave like magnets. Two parallel loops attract each other if their currents go in the same direction and repel each other if they go in the opposite direction. 
A current loop is therefore a magnetic dipole: The magnetic field produced by a current loop is similar to the electric field produced by an electric dipole: Ampère assumed that the magnetic force of magnets is produced by microscopic current loops. We now know that it comes mainly from the spin of electrons. Electrons are magnetic dipoles because they behave like spinning tops. The rotation of electrons is not an electric current, but the effect is similar to that of a microscopic current loop. As with the electric field, the magnetic field produced by several sources is the sum of the fields produced by each of them separately. For two parallel wires carrying currents in opposite directions, we obtain the total field by making a vector sum: The Earth's magnetic field is produced at the center of the Earth by its constantly moving liquid iron core: Coulomb's law in electrodynamics. To calculate the forces between moving charges we need generalized Coulomb's law: If a charge is eternally stationary in an inertial frame of reference, then the force it exerts on a moving charge is the same as the electrostatic force it would exert on that charge if it were stationary at the same position. If a charge is eternally immobile in an inertial frame of reference, the electric field of which it is the source has had time to propagate throughout space and establish itself there. If a charge is accelerated, there is no inertial frame of reference in which it is eternally stationary. The Coulomb field of which it is the source in its rest frame at a given moment takes time to propagate. At a later time, it is the source of another Coulomb field, in another rest frame. This is why accelerated charges are the source of electromagnetic waves that propagate throughout space. Light is produced by accelerated electrical charges: An immobile electric charge is not a source of light because no waves propagate in the electric field it establishes around it. 
Generalized Coulomb's law makes it possible to calculate the electric force field of any system of charges in uniform rectilinear motion relative to each other. To calculate the force exerted by all the charges, we calculate the sum of the forces exerted separately by each charge. To calculate the force exerted by a charge, we consider the frame of reference where it is at rest and calculate the Coulomb force. The Lorentz transformation then makes it possible to calculate this force in any frame of reference. The magnetic force of the electric current is a consequence of the Fitzgerald contraction and the Coulomb force. Fitzgerald contraction is the contraction of all solid bodies in the direction of their motion. It is appreciable only for bodies whose speed formula_3 approaches the speed formula_4 of light. The contraction factor is formula_5. When an electrically charged body is contracted, its charge density increases. This effect is at the origin of the magnetic force of the electric current. The relative movement of positive and negative charges can give rise to differences in charge density and therefore electrostatic forces. These electrostatic forces in one frame of reference are magnetic forces in another frame of reference. We can make a model of a conducting wire carrying an electric current with two insulating wires which carry opposite charges and which slide over each other. Let formula_6 be the charge density on the positive wire assumed to be stationary and formula_7 the charge density on the negative wire which goes at speed formula_3. The current formula_9. The negative wire represents the conduction electrons, the positive wire represents the rest of the metal wire. In the frame of reference R where the metal wire is at rest, it is electrically neutral. So in this frame formula_10. 
Let R' be a frame of reference which goes at speed formula_3 with respect to R, formula_12 and formula_13 the densities of negative and positive charge, measured in R'. The negative wire is therefore at rest in R'. formula_14, because from the point of view of R', the positive wire is contracted in the direction of its movement. formula_15 is the inverse of the length contraction coefficient. formula_16, because from the point of view of R, the negative wire is contracted in the direction of its movement. So formula_17. From the point of view of R', the superposition of the two wires, positive and negative, is not electrically neutral, it is therefore the source of an electrostatic field. Consider a charge formula_18 of speed formula_3 in R at distance formula_20 from the wire. The force F+ exerted by the positive wire on the charge formula_18 is perpendicular to the wire: Measured in R, formula_22 This result is proven from Gauss' theorem in the chapter on Maxwell's equations. The charge formula_18 is at rest in R'. The electrostatic force formula_24 exerted by the negative wire on the charge formula_18 is perpendicular to the wire. formula_26 Measured in R the force formula_27 of the negative wire on the charge formula_18 is equal to formula_29. The charge formula_18 therefore experiences a force formula_31 If we set formula_32 we obtain formula_33 formula_34 is the magnitude of the magnetic field formula_2 created by the current formula_36 in a wire of infinite length. The Lorentz force formula_37 is the magnetic force exerted on a charge formula_18 which moves at speed formula_3 in a magnetic field formula_40 if its speed is perpendicular to the field. We thus find that the magnetic field only acts on moving electric charges. In the frame of reference R there is no electrostatic force, because the wire carrying a current is electrically neutral. 
But in the frame of reference R' the negative charge density is lower than the positive charge density, because of the Fitzgerald contraction. The charge density is not zero and is the source of an electrostatic force field. This electrostatic force in R' is the magnetic force in R. The Fitzgerald contraction and the Coulomb force are therefore sufficient to explain the existence of the magnetic force produced by an electric current. A wire carrying a current repels negative charges that go in the conventional direction of the current (a current of positive charges) and attracts negative charges that go in the opposite direction. We thus find Ampère's law: two parallel wires carrying a current attract each other if the currents go in the same direction and repel each other if the currents go in the opposite direction. "Reference": Feynman's physics course, Electromagnetism, chapter 13-6. The vector product. To calculate the magnetic forces produced by moving charges and the magnetic forces exerted on moving charges, one must know the cross product of two vectors in three-dimensional space: The vector product w = u×v of two vectors u and v is the vector The triplet (thumb, index, middle finger) of the right hand is positively oriented. The one of the left hand is negatively oriented. (right, front, above) is positively oriented. In general, the x, y and z axes of a coordinate system are chosen positively oriented. The cross product of two vectors with the same direction is the zero vector. The length of the cross product of two perpendicular vectors is the product of their lengths. The Biot-Savart law. 
If the regime is stationary (the electric currents do not change) the magnetic field formula_2 produced at the point formula_44 by a current formula_45 in a segment formula_46 of an electric wire is formula_47 where formula_44 is the vector which goes from the segment formula_46 to the point considered, formula_50 is its length, formula_51 is the unit length vector in the direction of formula_44 and formula_53 is a constant which depends on the choice of units of measurement. With the Biot-Savart law, we can calculate the magnetic field produced by an electric current. If the wire is straight and of infinite length, the lines of the magnetic field are circles centered on the wire: The Biot-Savart law has a mathematical form similar to that of Coulomb's law. Force is inversely proportional to the square of the distance. It is not a coincidence. We can deduce the Biot-Savart law from Coulomb's law, because the magnetic force of electric current has an electrostatic origin. The Lorentz force. An electric field formula_1 and a magnetic field formula_2 exert on a particle of charge formula_18 a force formula_57). formula_58 is the Lorentz force: With the Lorentz force, we explain the magnetic force on an electric current: The principle of an electric motor: If a current loop is parallel to a uniform magnetic field, the Lorentz force on the moving charges produces a torque that turns the loop. The Lorentz force equation and Maxwell's equations are the fundamental laws of electromagnetism. Maxwell's equations tell how charges produce electric and magnetic fields throughout space and how these fields change over time. The Lorentz equation then tells how these fields act on charges. The Biot-Savart law allows us to calculate the magnetic field produced by an electric current. The Lorentz force then makes it possible to calculate the force between two wires carrying a current. 
The magnetic field is like a mathematical intermediary for calculating the forces between moving electric charges. But it is more than a simple mathematical intermediary, because it has an autonomous existence. We can calculate the magnetic force produced by a current from the Biot-Savart law and from Coulomb's law. The equality of the two results shows that formula_59 formula_60 and formula_61 are physical constants that have been measured independently of the speed formula_4 of light. When Maxwell discovered the fundamental laws of the electromagnetic field, he discovered that formula_63 is the speed of electromagnetic waves. Since formula_4 is also the speed of light, he concluded that light is an electromagnetic wave. The electromotive force of magnetism. The magnetic force on a charged particle is always perpendicular to its velocity. The work of force is therefore always zero. The kinetic energy of the particle is not changed, only its direction. So how can magnetic forces produce an electric current? How can they set in motion charges initially at rest? If a particle moves in the field created by a stationary magnet, it experiences a magnetic force. But in a frame of reference where the particle is at rest, it does not experience any magnetic force, since its speed is zero. It can only be subjected to an electrical force. So a moving magnet is the source of electrical forces and can thus produce an electric current. If we place a conducting loop in a constant magnetic field and rotate it around a diameter perpendicular to the magnetic field, the Lorentz force sets the electrons in motion in the direction of the conducting wire, as soon as the rotation of the loop imposes on them a movement which is not parallel to the magnetic field. This movement of electrons in the direction of the wire manifests the presence of an electrical voltage. 
We can therefore make an electric generator by rotating a conductive loop in a constant magnetic field: The blue area is proportional to the flux of the magnetic field through the loop. Faraday's law, presented in the chapter on Maxwell's equations, says that the voltage across the two terminals of the loop is the opposite of the rate of change of the flux of the magnetic field through the loop. Like the magnetic force of electric current, the electromotive force of magnetism has an electrostatic origin. To understand it, we just need to think about a square circuit traversed by a current. Let formula_6 be the linear density of the conduction electrons. The intensity of the current is formula_66 where formula_3 is the average speed of the electrons in the frame R where the circuit is at rest. Let formula_18 be an electric charge placed in the center of the circuit with speed formula_3 relative to R, in the direction of one of the sides of the square traveled by a current. Let R' be a frame of reference such that the charge formula_18 y at rest. From the point of view of R', the charge formula_18 cannot experience a magnetic force, because its speed is zero, but it experiences an electrostatic force. From the point of view of R', the two sides of the square parallel to its movement are contracted in the direction of the movement, but not the two sides perpendicular to the movement. The linear density of the conduction electrons, on the side where their average speed is zero, is equal to formula_72 where formula_73 is the inverse of the length contraction coefficient. The linear density of conduction electrons on the other side is formula_74. It is different, because the conduction electrons have a non-zero average speed compared to R'. From the point of view of R', the charge formula_18 experiences an electrostatic force, perpendicular to the movement of the circuit and proportional to formula_76. 
From the point of view of R, this force is formula_77 times smaller, therefore proportional to formula_78. It is the magnetic force experienced by the moving charge in R. In R', the electric circuit is a variable magnetic field source that exerts an electric force on the charge formula_18. The variation of the magnetic field therefore has an electromotive force.
3,935
Transportation Deployment Casebook/2024/Australian Post. Background. Technology. Australia Post is a corporation that offers postal delivery service to Australia operating for over 200 years . It works to deliver goods including letters and parcels across people, businesses and communities in Australia . The technological characteristics of Australia Post have evolved overtime as Australia Post shifts its strategies and service focuses to match with its loyal clients - Australia. In the modern age with increasing digitisation of information and communication, the key technological characteristic of Australia Post is its physicality - the mail and packages delivered are physical objects to be held and kept. The main advantage and market of this would lie in parcels - physical object deliveries which cannot be made through an email. During the initialisation of Australian Post however, the essential technological characteristic had been the efficient transmission of information as a means of communication. At its creation the main advantage of Australian Post was through delivery of letters and communications across Australia - in particular between the early settlers, officials and convicts and their hometowns in Britain. Naturally the main market of the postal service was hence the people in Australia wishing to deliver letters across space and oceans . The Australian Post was created as a means of organising the masses of letters being delivered to allow the correct recipient to receive or collect their mail. It has since grown into a network of delivery across modes of pack-horse, carriage, trains, motor vehicles and planes. Early 1800s Context. Prior to the establishment of the first Australian post office in 1809, connection between Australia and Britain could only be made through letters on ships arriving in Sydney. This system was limited as it meant ships could be mobbed and crimes of fraud and theft ran rampant . 
Centralised postal service had already been organised in Britain in 1516 but early settlers needed time to organise this system around the development of the Australian colony . Progression was made from mobbed ships with letters into a mail delivery service in 1803 with boatmen delivering letters across Sydney and Parramatta . This is described in the "Sydney Gazette” (first newspaper printed in Australia ) issue of the 10th July 1803 . This system was still to evolve further as this delivery system involved no prepayment. Recipients of letters would be expected to pay which led to an unreliable delivery system . The chaos of these earlier years created a need for a system of organisation for mail among early settlers in Australia. This was reflected in growing complaints to the Lieutenant-Governor until change was made . 1809: First Australian Post Office established. The beginning of the Australian Post office is often attributed to the opening of the first Australian Post office with Post Master Isaac Nichols appointed on 25th April 1809 in Sydney . This appointment was notified through the "Sydney Gazette" by the Lieutenant-Governor to address the concerns of stolen mail enabling fraudulent crimes. The purpose of this post office was for all parcels and letters coming from ships to be deposited and then organised to be distributed to correct recipients . The establishment of the post office in Australia did not implement much technological innovation other than incorporating the early British centralised postal service into Australia as the settler population grew . Isaac Nichols would use his home – which served as the post office – to sort mail . Rules for the method of handling mail and costs for the service were all outlined in this order forming the initial policy to shape the Australian Post in its birthing phase . 
The role of Postmaster gave Nichols the right to board ships and obtain mail addressed to Australians on their behalf - a list of names of people with mail to be collected would then be printed in the "Sydney Gazette". A fixed collection fee was charged and this business remained private until 1825. At its birth, the post office served the market of early Australians who were struggling to correctly receive their mail. The system required the recipient to travel to the post office and receive their mail as at this stage and the number of letters were still limited enough for this system to be manageable. With the growing population and land occupied by the settler population in Australia however, the natural progression of the technology and system would be for a delivery system to be established to allow service to those who lived further away. 1825-1828: Growth - First Postal Act and Delivery services. In 1825 "An Act to Regulate the Postage of Letters in New South Wales" was passed by the New South Wales Legislative Council. This transferred the Post office from a private business to the public sector. The Governor was authorised to establish more post offices in Sydney and the New South Wales Colony. This allowed the postal office system to grow with the population of settlers across New South Wales. The Governor was also given the authority to fix the postmaster's wage and mail collection fees . As with the establishment of the first post office, this legislation seemed to lag behind the growing population of New South Wales that already passed 30,000 by 1821 - it was put in place after growing market demand. In 1828, postal delivery services were established in Sydney . A postman would be responsible for delivering and collecting letters using a pack-horse . This technological development allowed postal offices to better organise mail so as to not be over-flooded by those who were collecting mail. 
Mail routes were established and Postmasters were appointed in other parts of Australia. Coach services were established between Melbourne and Sydney in 1838 to allow overland delivery . During this period roads and horses were the primary technological characteristic of postal delivery. It was the more efficient alternative to walking - delivering 50,000 letters and newspapers with a staff of three people: Postmaster, clerk and postman, but would lack the later reliability of steam trains. 1828-1901: Emerging technologies. Emerging technologies appeared for delivering mail in the 1800s including the Cobb and Co. Coaches that were brought to Australia with the gold rush. These coaches replaced the pack-horse as the official transport mode of mail from 1862 . This was an upgrade from the pack-horse to allow larger volumes of mail to be held during a delivery run. As railways developed throughout New South Wales they were also increasingly adopted for larger distance mail delivery - particularly between Sydney and Parramatta from 1855 onwards . During the 1800s other technologies also developed that challenged the function of letters in communication across distances. Telegraph services were opened in Australia in 1854 and allowed communication across distances that was far more rapid than mail delivery. This new technology, initially private, was incorporated by the government into the postal office however and spread with the postal offices across the nation . Likewise in 1878, the first telephone call was made in Australia. These technologies allowed for rapid communication across distances greatly lowering the market need for letters and mail. The adoption of telegraph systems within post offices however kept post offices serving the existing market of those looking to communicate across time and space. Timelines of key developments and growth. 
Key technological developments and their effects within the post office are listed in the table below: Maturity and decline. With peak Australian letter volumes being achieved in 2008, this has been prescribed as the mature phase of the mode. From the rise of technological advancements such as the e-mail and internet replacing postal letters the mode has become less economic and rapidly declined . Australian Post has attempted to shift focus towards parcels but faces competition with other delivery services that have risen with the rise of online shopping . This attempt to shift is reflected in the 2021 opening of a new $33 million processing station in Adelaide - capable of processing 130,000 parcels in a day. The future prospects of Australian Post after consultation with the Australian Public is a business model which aims to decrease normal delivery speed in an attempt to minimise expenses while maintaining revenue through express delivery . Australian Post: Life-cycle model. Model. The life cycle of Australian Post has be modelled using the S-curve with periods of birthing, growth and maturity. Data analysing the productive output of Australian Post was fitted to the logistic function formula_1. where: The observed data used was of Australia Post's Total Output Value which aggregates revenue from reserved letters, other addressed and unaddressed mail and money orders. This data was approximated using the above logistic function and modelled by the following equation: formula_2 The method of calculating this model was found using Ordinary Least Squares Regression. The Logistic equation was rearranged to obtain a linear equation of the form formula_3 where the following values were taken. formula_4 Method and Evaluation. 
Although there is a slight decline in Value output for 2001-2002, with hindsight knowledge that peak achieved in 2008 it is assumed that these were years of fluctuation and formula_5 had not yet been fully achieved, hence they were also included in the S-curve approximation. The unknown in this case was formula_5, hence different regressions were found for variable values of formula_5. The R-squared value was used to evaluate which regression was the best fit for the data. The R-squared value for the final regression chosen was formula_8. Given the R-squared value being very close to 1, it is presumed that the model is a good fit. Issues with the model are the larger variations occurring during the earlier and later years. It may be possible that given more data from earlier and later years, a better fitting equation for modelling the life-cycle of the Australian Post office would have greater growth during the 2000s and a later earlier inflection point. Life-cycle years. From the observed data the periods of birthing, growth and maturity are identified as the following: Birthing: Pre 1976-1980 Growth: 1980-1999 Maturity: 1999-2002
2,263
Electricity and magnetism/Maxwell's equations. The divergence of a vector field. To understand the fundamental equations of electromagnetism given by Maxwell, we must understand the divergence and curl of a vector field. To understand its divergence, one must understand the flux of a vector field across a surface. If we think of a vector field as the velocity field of a fluid, its flux through a surface is the flow rate of the fluid through that surface. The flux obviously depends on the orientation of the surface: Let formula_1 be a sufficiently small surface element so that we can assume that the vector field formula_2 is almost constant. Let formula_3 be a vector of unit length perpendicular to formula_1. The flux formula_5 from formula_2 through formula_1 in the direction of formula_3 is then formula_9 formula_1 To find the flux through a surface, we divide the surface into small surface elements and sum all the fluxes. When the magnitude of the surface elements tends to zero, the limit of this sum is an integral. It is the flux formula_11 through the surface. formula_12 formula_1 To calculate the flux formula_11 we must have chosen a direction of crossing the surface. The flux is positive if the vector field goes in the same direction, negative otherwise. The divergence formula_15 of a vector field formula_2 is obtained at each point by considering closed surfaces smaller and smaller that surround this point, small spheres centered on the point for example, or small cubes. It is the limit of the flux of the vector field through these small closed surfaces, divided by the volume which they delimit, when this volume tends towards zero. The crossing direction is always chosen from the inside to the outside. If its divergence at a point is positive, the vector field is divergent at this point, it is like a source of fluid. If its divergence is negative, it is convergent at this point, it is like a sink for a fluid. 
As the volume of an incompressible fluid is constant, the divergence of its velocity field is always zero, because there is neither source nor sink. To calculate the divergence, we use the following formula: formula_17 for a field formula_2 whose three components are formula_19, formula_20 and formula_21. Proof: we reason about a small cube of side formula_22 and whose faces are parallel to the xy, xz and yz planes. On faces parallel to yz, therefore perpendicular to the x axis, the flux is formula_23 and formula_24. The difference of the two is formula_25. The same goes for faces parallel to xz and xy. So the total flux leaving the cube is formula_26. This flux is also equal to formula_27. The divergence of a vector field is a real number, positive or negative, defined at each point in space. It is therefore a scalar field derived from a vector field. Gauss's theorem. "The flux of a vector field through a closed surface, from the inside to the outside, is always the integral of its divergence over the entire volume inside the surface." Proof: if two cubes have a face in common, the flow through the rectangular tile they form together is the sum of the two fluxes through the two cubes, because the two fluxes through the interior face of the tile exactly compensate each other. Everything that comes out of a cube through this face goes into the other. A volume delimited by a closed surface can always be divided into small adjacent volumes, so formula_28 formula_29 where formula_30 is the volume delimited by formula_31. With Gauss' theorem and Coulomb's law, we find the first of Maxwell's equations: formula_32 where formula_33 is the electric charge density. For a uniformly charged volume formula_30 of charge formula_35, formula_36 is the charge density inside formula_30. 
Proof of Maxwell's first equation: we reason about the flux of the electric field produced by a spherical charge through a sphere centered on this charge: The flux of the electric field created by a charge formula_35 which passes through a sphere of radius formula_39 centered on this charge, is formula_40 where formula_41. So formula_42. Or formula_43 formula_29 where formula_33 is the charge density and formula_30 is the volume of the sphere centered on formula_35. So formula_48 formula_49. Since this equation is true for any volume that surrounds a charge density formula_33: formula_51 Maxwell's second equation is: formula_52 It says that the magnetic charge density is always zero, so magnetic monopoles do not exist. The flux of the magnetic field through a surface bounded by a loop depends only on the loop. Proof: let formula_53 and formula_54 be two surfaces delimited by the same loop. These two surfaces delimit a volume. The flux of the magnetic field leaving this volume is the difference of the fluxes through formula_53 and formula_54. But since the divergence of the magnetic field is always zero, this difference is zero, according to Gauss's theorem. So the two fluxes are equal. The magnetic field can always be identified with the velocity field of an incompressible fluid. The flux entering a volume is always equal to the flux leaving it. The same is true for the flux of an electric field in a volume that contains no charges. Gauss' theorem allows us to calculate the electric field created by an infinite electrically charged plane or by an infinite electrically charged line. The infinite charged plane Let formula_57 be the surface charge density of a plane. If the plane has a finite surface, the electric field of which it is the source depends on the distance from its edges. But for a very large surface, this edge effect is negligible, provided that we are far from the edges. 
The electric field created on an axis perpendicular to the middle of a large electrically charged disk is therefore equal to the field created by an infinite plane which has the same charge per unit area. The charged disk is rotationally symmetrical. Curie's law, which says that effects have the same symmetry as their causes, therefore requires that the electric field it produces is also symmetrical by rotation. On the axis of the disk, it is therefore necessarily in the direction of this axis. The electric field produced by an infinite charged plane is therefore everywhere perpendicular to this plane. Consider a cylinder whose two faces, each of area formula_31, are parallel to the charged plane, such that the plane passes midway between the two faces. The flux of the electric field through the side wall of the cylinder is zero, since the field is always parallel to the wall. The flux is therefore the sum of the fluxes through the two faces: formula_59 where formula_60 is the magnitude of the electric field on each face. The electric charge formula_35 contained in the cylinder is formula_62 Gauss's theorem allows us to conclude: formula_63 formula_60 does not depend on the distance to the charged plane. The magnitude formula_60 of the electric field produced by an infinite electrically charged plane is the same everywhere in space. The electric field is perpendicular to the plane and directed towards it, if its charge is negative, and in the opposite direction, if its charge is positive. The surface charge of a conductive surface. On one side of the surface the electric field is zero. On the other side it is perpendicular to the surface. Gauss's theorem applied to a small cylinder which crosses the surface, and whose faces parallel to it have an area formula_1, gives formula_67 so formula_68 The infinite charged line. If a charged wire is of finite length, the electric field it produces depends on the distance from its ends.
But for a very long wire, this edge effect is negligible, provided that we are far from the ends. The electric field created in a plane perpendicular to the middle of a long electrically charged finite wire is therefore equal to the field created by a wire of infinite length which has the same charge per unit length. The electric field created in a plane perpendicular to a uniformly electrically charged wire of finite length, which passes through its middle, is necessarily perpendicular to the wire, by symmetry. If this electric field deviated from this median plane, Curie's law would be violated. This law also shows that the electric field created by a uniformly charged wire necessarily lies in a plane containing its axis. So it is necessarily directed towards the wire, or in the opposite direction. As the wire is symmetrical by rotation around its axis, Curie's law also shows that the magnitude of the field can only depend on the distance from the wire. Consider a cylinder of radius formula_39 and length formula_70, centered on a uniformly charged infinite wire. The charge inside the cylinder is equal to formula_71 where formula_72 is the linear charge density of the wire. Across both ends of the cylinder, the electric field flux is zero, since the electric field is parallel to them. The flux formula_11 of the electric field formula_74 is therefore equal to the lateral surface of the cylinder multiplied by the field formula_60, which is always perpendicular to this surface: formula_76 where formula_39 is the radius of the cylinder, formula_70 its length and formula_60 the magnitude of the electric field formula_80 created by the uniformly charged wire at distance formula_39 from the wire.
So formula_82 formula_83 The curl of a vector field. To understand the curl, we must first understand the circulation of a vector field along a loop. For a uniform vector field formula_2, the circulation of the field along a straight line formula_85 is equal to formula_86. If the vector field is not uniform or if the path formula_87 is not straight, we consider a broken line which follows the same path and whose segments can be as small as we want. We calculate the circulation of the field by assuming that it is uniform on each segment, and we take the limit of the sum of the circulations of the field on each segment when the length of the segments tends to zero. This limit is an integral and it is the circulation formula_88 of the field formula_2 on the path formula_87 considered: formula_91. The work of the electric force on a unit charge along a path is the circulation of the electric field along that path. If a force field derives from a potential, its circulation on a loop is always zero, because the potential at the starting point is equal to the potential at the ending point, which is the same as the starting point. To measure circulation on a loop, one must choose a direction of circulation on the loop. Circulation in one direction is the opposite of circulation in the opposite direction. The curl of a vector field is obtained from its circulation in the same way that its divergence is obtained from its flux, but the curl of a vector field in three-dimensional space is a vector field, whereas its divergence is a scalar field. To understand the curl of a vector field, it is better to start by understanding it in a two-dimensional space, a surface, because then it is a scalar field, and a scalar field is simpler than a vector field.
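The statement above, that a field deriving from a potential has zero circulation around any loop, can be checked numerically. The sketch below uses a hypothetical potential φ(x, y) = x²y (chosen for illustration, not from the text) and integrates its gradient around the unit circle with the midpoint rule.

```python
import math

# The hypothetical potential phi(x, y) = x**2 * y has gradient (2*x*y, x**2).
def grad_phi(x, y):
    return (2 * x * y, x * x)

def circulation(n=100_000):
    """Line integral of grad(phi) around the unit circle, by the midpoint rule."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        x, y = math.cos(t), math.sin(t)
        dx = -math.sin(t) * 2 * math.pi / n   # tangent vector times the step length
        dy = math.cos(t) * 2 * math.pi / n
        gx, gy = grad_phi(x, y)
        total += gx * dx + gy * dy
    return total

print(abs(circulation()) < 1e-6)  # True: the circulation of a gradient vanishes
```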
The curl of a two-dimensional vector field formula_2 at the point formula_93 is the limit of formula_94 when formula_1 tends to zero, where formula_88 is the circulation of formula_2 on a loop whose surface is formula_1 and which surrounds the point formula_93. To calculate the curl we use the formula: formula_100 for a field formula_2 whose two components are formula_19 and formula_20. Proof: we reason about a small square with side formula_22 whose sides are parallel to the x and y axes. On the sides parallel to the y axis, therefore perpendicular to the x axis, the circulation is formula_105 and formula_106. The sum of the two is formula_107. On the sides perpendicular to the y axis, the circulation is formula_108 and formula_109. The sum of the two is formula_110. So the circulation over the entire square is formula_111. This circulation is also equal to formula_112. When the vector field is three-dimensional, we consider its projections on three perpendicular planes. These are three two-dimensional vector fields, each of which has a curl. We can therefore associate three numbers with each point. These are the components of a vector field: formula_113 The curl of a three-dimensional vector field is also a three-dimensional vector field. The divergence of the curl of a three-dimensional vector field is always zero. Proof: formula_114 The curl of a vector field can therefore always be identified with the velocity field of an incompressible fluid. The flux of the curl of a vector field through a surface bounded by a loop depends only on the loop. Proof: let formula_53 and formula_54 be two surfaces delimited by the same loop. These two surfaces delimit a volume. The flux out of this volume is the difference of the fluxes through formula_53 and formula_54. But since the divergence of a curl is always zero, this difference is zero, according to Gauss's theorem. So the two fluxes are equal. The curl of the gradient of a potential is always zero.
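The circulation-over-area definition of the two-dimensional curl can be checked in the same way as the divergence. The sketch below is a hypothetical illustration, not from the text: the planar field v = (−y, x) has curl 2 everywhere, and the circulation around a small square divided by its area recovers that value.

```python
# Numerical check of "curl = circulation / area" on a small square (a sketch;
# the planar field v = (-y, x) is a hypothetical example whose curl is 2).
def v(x, y):
    return (-y, x)

def circulation_over_area(cx, cy, a):
    """Circulation of v around a square of side a centred at (cx, cy),
    counterclockwise, divided by the square's area a**2."""
    h = a / 2.0
    # Bottom side traversed in +x, top side in -x; right side in +y, left side in -y.
    c = (v(cx, cy - h)[0] - v(cx, cy + h)[0]) * a \
      + (v(cx + h, cy)[1] - v(cx - h, cy)[1]) * a
    return c / (a * a)

print(circulation_over_area(0.3, -0.7, 1e-4))  # ≈ 2, the curl of v
```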
Proof: let A and B be two points on a loop. The circulation of the gradient of a potential formula_30 along any path from A to B is equal to formula_120. The circulation of the gradient on the loop is the circulation on a path from A to B plus the circulation on a path from B to A. It is therefore always equal to zero. As the curl is defined from the circulation on a small loop, it too is always zero for the gradient of a potential. Stokes' theorem. "The circulation of a vector field on a closed loop is the flux of its curl through a surface delimited by this loop." Proof: Stokes' theorem is to the curl what Gauss's theorem is to the divergence. They are both special cases of a very general theorem, also called Stokes' theorem, which can be proved for any finite-dimensional space with the means of differential geometry. Faraday's law. "The work of the electric force on a unit charge along a closed loop is equal to the opposite of the rate of change of the flux of the magnetic field through an area bounded by the loop." The work of the electric force on a unit charge along a closed loop is an electromotive force. A closed loop can always be thought of as a circuit that connects a generator to a resistor. The resistance is that of the loop. The generator is assumed to have zero resistance and to provide an electromotive force equal to the rate of change of the magnetic field flux. This is why we can always define a potential in an electric circuit, even though it contains loops which can be traversed by variable magnetic fields. Each time an electromotive force appears, we reason as if a generator instantly imposed a potential difference equal to this electromotive force. We can thus define a fictitious potential on the entire electrical circuit. Faraday's law leads to Maxwell's third equation: formula_131 Proof: let a loop be placed in the electromagnetic field formula_132. By Stokes' theorem, the circulation of formula_74 over this loop is the flux of formula_134 through it.
According to Faraday's law, it is also the opposite of the rate of change of the flux of formula_135. The rate of change of the flux of formula_135 is the flux of formula_137. The equality of the fluxes of formula_134 and of formula_139 for any small loop proves that formula_134 and formula_139 are necessarily equal. Maxwell's fourth equation. Maxwell's fourth equation determines formula_142 as the sum of two terms. One can be obtained with the Biot-Savart law. The magnetic field lines around a wire of infinite length carrying a current formula_143 are circular and centered on the wire, in a plane perpendicular to the wire. With the Biot-Savart law, we can calculate the magnitude formula_144 of the field formula_135: formula_146 where formula_39 is the distance to the wire. The circulation of formula_135 along a field line is therefore formula_149 By Stokes' theorem, the flux of formula_142 through a loop is always the circulation of formula_135 over this loop. If formula_152 is the current density, formula_143 is the flux of formula_152 through a surface crossed by the current. By setting formula_155, we therefore find the Biot-Savart law for a wire of infinite length. But formula_155 cannot be true everywhere. Proof: let a circuit be made up of a capacitor which discharges into a resistor. Consider a loop that surrounds the circuit wire. The flux of formula_152 through a surface passing through the wire is equal to the current formula_143. But the flux of formula_152 through a surface that passes between the two plates of the capacitor is zero, since there is no current between the two plates. Now the flux of formula_160 through a surface delimited by the loop only depends on the loop. So formula_161 cannot always be true. On the other hand, if we set formula_162 we correct this error. Proof: between the capacitor plates, formula_163 and is directed perpendicular to the plates. The flux of formula_74 through a surface between the plates is therefore formula_165.
So the flux of formula_166 between the capacitor plates is equal to formula_167, the intensity of the current. Maxwell's equations. formula_51 formula_169 formula_131 formula_162 With the Lorentz equation formula_172, Maxwell's equations are the fundamental laws of the classical theory of electromagnetism. Classical means here that it is not a quantum theory. Maxwell's equations explain how moving charges are the sources of the electromagnetic field and how it changes over time. The Lorentz equation explains how this electromagnetic field exerts forces on moving charges. Light is an electromagnetic wave. We can prove the existence of light from Maxwell's equations. Aside from gravitation and nuclear forces, electromagnetic forces explain all natural phenomena. Light, atoms, molecules, ions, solids, liquids, gases, plasmas, liquid crystals, electric motors, radio waves, x-rays... are all explained from electromagnetic forces. For physicists, the Maxwell and Lorentz equations are therefore like the tablets of the Law. Maxwell's and Lorentz's equations can be deduced from the generalized Coulomb law and the relativistic geometry of space-time. The generalized Coulomb law is therefore the most fundamental law of classical electromagnetism. Space-time geometry assumes the existence of the speed formula_88 of light, but it does not impose the existence of light. It is enough to assume the existence of particles that travel at the speed of light. So the generalized Coulomb law and the geometry of space-time prove the existence of light, without presupposing it. Maxwell's equations in matter. We generally reason about point charges. The charge density formula_33 is infinite at the point where the charge is. We assume that the charge density formula_33 is a Dirac delta such that formula_176 formula_177 for a point charge formula_178, where formula_30 is a volume that contains only the charge.
If the charges are pointlike, space is empty almost everywhere, and Maxwell's equations in matter are the same as the equations above, where the charge and current densities are calculated with Dirac deltas.
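As a closing numerical check of the Biot-Savart result quoted above for an infinite straight wire, the circulation of B = μ₀I/(2πr) along a circular field line is μ₀I, independent of the radius. This is a sketch with an arbitrary test current, not a value from the text.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in H/m

def circulation_of_B(I, r):
    """Circulation of B = mu0*I/(2*pi*r) along a circular field line of radius r."""
    B = MU0 * I / (2 * math.pi * r)  # Biot-Savart magnitude for an infinite wire
    return B * 2 * math.pi * r       # B is tangent to the circle everywhere

I = 2.0  # hypothetical current, in amperes
for r in (0.001, 1.0, 50.0):
    print(circulation_of_B(I, r))  # mu0 * I for every radius
```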
Melbourne Suburban Railway. Technology. Modern day suburban (or metropolitan) railway networks are economic arteries, connecting the central business districts of many cities around the world to their workers in the suburbs. The deployment of suburban railways around the world coincided with the development of cities and has contributed significantly to urban sprawl. Suburban railway networks are advantageous for metropolitan regions as they can operate over long distances and at a higher capacity than other forms of transport such as trams and buses. The Melbourne Suburban Railway network, officially known as Metro Trains Melbourne, is just one example of many suburban railway networks around the world that are designed to connect the Central Business District of its city to the surrounding suburbs. Today, the network operates 226 electric trains on 15 lines along 998 kilometres of railway track (Metro Trains Melbourne, 2024). Origins of the passenger railway. The development of the suburban railway network as a mode of transport in Melbourne can be attributed to several prior events. Whilst the following events did not occur in Melbourne but in the modern-day United Kingdom, each introduced a component of the technologies which formed both the early and modern-day Melbourne suburban railway network. These are as follows: 1. Deployment of Passenger Railway Systems - Oystermouth Railway. The Oystermouth Railway was the first passenger railway service to commence operation and did so in 1807 (Shaw-Taylor & You, 2018). This service connected Swansea and Oystermouth, with rail wagons pulled by horses. 2. Deployment of the Locomotive - Stockton & Darlington Railway. The Stockton and Darlington Railway opened in 1825 and, whilst built to transport freight and not passengers, it was the first railway to use a steam locomotive (Shaw-Taylor & You, 2018).
The opening of this railway line was a catalyst for the development of further locomotive-hauled railways in the immediate future (Darroch, 2014). 3. Deployment of Locomotive Hauled Passenger Railway System - Liverpool to Manchester Railway. The Liverpool to Manchester Railway opened in 1830 and is widely attributed as being the first locomotive-hauled passenger railway service to enter operation (Shaw-Taylor & You, 2018). George Stephenson’s Rocket locomotive was the first to be used on the railway, capable of reaching almost 50 km/h (30 mph) (Shaw-Taylor & You, 2018). 4. Deployment of Underground Metropolitan Railway – London Underground. London’s Metropolitan Railway saw the world’s first underground railway open in 1863, connecting the busy London stations of Paddington & Farringdon (Darroch, 2014). The opening of the Underground quickly saw other cities construct and operate their own underground railways. Melbourne, like many other cities globally, would incorporate underground railway services into its network. As seen above, the deployment of passenger transport on rails, the introduction of the locomotive to railways, the combination of both, and the introduction of sub-surface railways all contributed to the suburban railway system adopted in Melbourne. Context. Before the Railway. The city of Melbourne was established in 1835 on Port Phillip, just 19 years before its first steam railway opened for operation. Melbourne was established by farmers and pastoral workers from Tasmania who sought more land for agriculture (Records and Archives Branch of the City of Melbourne, 1997). In the short period of time between the establishment of the city and the opening of its first railway, transportation around the newly established city and surrounds was primarily undertaken by foot, horse-drawn wagons or other animal-hauled methods, and by ship through Port Phillip.
As the port established at Sandridge (modern day Port Melbourne) experienced increases in imports and exports through the 1840s, transporting both people and goods from the port to the city and surrounds became of increasing importance (Harrigan, 1954). Limitations of foot traffic and animal-drawn vehicles. Although the distance between Melbourne and Sandridge was walkable, it was a very unpopular method of transportation, especially in adverse weather conditions and for people carrying luggage and goods. Furthermore, animal-drawn wagons and carts were ultimately limited in their capacity for carrying passengers or goods between the port at Sandridge and the city. Horse-drawn vehicles were often unpleasant, slow, unreliable, and caused significant road damage over time. It is these limitations and the increase in demand for transport which made a faster and higher-capacity locomotive-hauled railway a popular alternative. Invention. The invention of the steam locomotive and its deployment on the Stockton and Darlington and Liverpool to Manchester Railways in England allowed for the development of the suburban railway network in Melbourne (as mentioned in section 1). Steam locomotives use coal as a source of thermal energy to heat water and convert it to steam; this steam is held in the boiler and causes an increase in pressure. The pressurised steam drives the wheels of the locomotive by pushing pistons which are connected to a crank mechanism, creating the rotating motion required (Bathurst Rail Museum, n.d.). The deployment of the suburban railway in Melbourne would see improvements made to the transport mode in its early development. The permanent adoption of steel wheels and steel rails in 1868, moving away from the iron rails commonly used in England, saw a significant improvement in the durability of both locomotives and track infrastructure (Harrigan, 1954). Early Market Development.
The decision to build Australia’s first steam locomotive railway between the city of Melbourne and Sandridge (modern day Port Melbourne) was in response to the growing industrial activity in the Sandridge area and the increase in ship movements in and out of the port. The success of railways in the United Kingdom prompted many entrepreneurs in Melbourne to investigate building their own railway services in the city. This resulted in the first public meeting for railway entrepreneurs being held on the 7th of December 1851 (Harrigan, 1954). Whilst this meeting attracted interest from potential stakeholders in a railway service, the impact of the gold rush on the city’s population and economy was significant, and no definitive action was taken on a railway service until August 1852, when the Melbourne & Hobson’s Bay Railway Company was formed. W.S. Chauncey was subsequently appointed Chief Engineer on the 17th of August 1852, and official approval to construct the railway line was granted on the 20th of January 1853 with the Act of Incorporation (Lee, 2007). The official opening of the Melbourne to Sandridge railway occurred on the 12th of September 1854. The first year of operation for the Melbourne & Hobson’s Bay Railway Company’s first line was a success, with shareholders receiving an 8% dividend and the capacity for transport between Melbourne & Sandridge increasing significantly (Harrigan, 1954). This success prompted investigation into the construction of more railway lines and the creation of more railway companies. The market appetite for railways following the opening of the Melbourne to Sandridge line is reflected in the opening of the branch line to St. Kilda in May 1857, its subsequent extension to Brighton by the St. Kilda & Brighton Railway Company, and the Melbourne & Suburban Railway Company’s lines from Princes Bridge to Hawthorn and Windsor (Dee, 2006; Lee, 2007). The Role of Policy.
During the birthing phase of the Victorian suburban railway network, several key policies shaped its design and patronage. Gauge Policy. Following the appointment of W.S. Chauncey as Chief Engineer in 1852, prior to the opening of the Melbourne-Sandridge Railway, a decision was to be made on the gauge width for the railway line. Chauncey advised the company to adopt the broad-gauge system using a rail spacing of 1600 mm (5 feet 3 inches), the same rail gauge which was to be adopted by the Sydney Railway Company (Lee, 2007). Chauncey believed that a standardised rail gauge should be adopted across Australia for the future development of the continent’s railways, and thus wanted to align with the gauge already decided upon in NSW. It should be noted that Sydney would change its decision and adopt the standard gauge of 1435 mm with its first railway in 1855 (Wardrop, 2022). The effect of this policy can still be seen today, with Melbourne still operating a broad-gauge system and Sydney operating a standard-gauge system. Building Tickets. To boost ridership on its services, the United Company began to distribute ‘building tickets’ to those who built houses along the railway corridors to Brighton and Elsternwick in October 1865 (Harrigan, 1954). This policy created an incentive for people to build their houses and live along these railway corridors, resulting in increased ridership from these areas to the city. Growth. Government Purchases United Company. Following the opening of the Melbourne to Sandridge Line, the extension to St. Kilda and Brighton, and the lines to Hawthorn and Windsor, the Melbourne & Hobson’s Bay United Railway Company purchased its competitors’ services and by 1865 operated all 26 km of track. By 1870, the United Company would operate over 240 daily services. In 1872 the Victorian Government purchased the United Company’s shares for £1,320,820 (Harrigan, 1954).
The newly amalgamated network of railway lines developed significantly in the 1880s into a radial system connecting the city to the surrounding suburbs in all directions. Introduction of Rail Motor Cars. The growth period of the Melbourne Suburban Railway saw the development of rail technology and subsequent improvement of passenger service. The first railway motor car was introduced in May 1883, combining an engine cabin containing a vertical boiler and motor into a carriage capable of carrying 40 passengers. These motor cars operated on the Fairfield to Oakleigh and Essendon to Broadmeadows lines until their withdrawal from service in the late 1890s (Harrigan, 1954). Construction of the new Flinders Street Station. The rapid expansion of the railway network in the late 19th century resulted in a significant increase in daily train services reaching the city of Melbourne. By 1900 it was evident that the existing station at Flinders Street, originally built for the Melbourne-Sandridge Railway, was unable to provide the capacity needed as expansion continued. In May 1900, the “Green Light” design submitted by James Fawcett and H.P.C. Ashworth, two Victorian railway officials, was chosen for the new station on Flinders Street (Lee, 2007). The large, French renaissance style station was completed in 1909 and officially opened in 1910, serving as the main station for the city of Melbourne (Victorian Railways, 1965). The Electrification of the Suburban Railway. The electrification of the railway was initially considered in 1896 by A.W. Jones from the General Electric Company. C.H. Merz was appointed in 1907 to further investigate electrifying Melbourne’s suburban lines and proposed the electrification of 200 km (124 miles) of track by 1912 at a cost of £2,227,000 (Harrigan, 1954). Little action was taken on Merz’s initial plan, and in 1911 the Victorian Government requested an updated plan in which the cost rose to £3,991,000 (Harrigan, 1954).
The first railway power station was constructed at Newport in 1913 with a production capability of 60,000 kW. The First World War saw significant delays to the electrification programme, with the first generator becoming operational in 1918. It would not be until 1919 that the first electric line commenced operation. By April 1923, the suburban railway network was electrified with the opening of the final stage between Heidelberg and Eltham (Victorian Railways, 1965). The cost of the project had risen to £6,270,000 by its completion (Harrigan, 1954). At this time Melbourne boasted one of the most expansive electrified railway networks in the world. The electrification of the suburban railway proved to be a great success. Passengers were offered a faster, quieter, and smoother railway service at a lower cost, which significantly boosted ridership in the 1920s and 1930s (Harrigan, 1954). The electrification of the network brought the suburban railway into the modern era and can be considered the end of the growth phase of the system. Maturity. By 1949, the Melbourne suburban network had grown to 182 million riders in that year, compared to 123 million in 1919 when electric operations began (Wardrop, 2022). However, two major factors would cause a rapid decline in the usage of the network in the 1940s and 1950s. Decrease in Patronage - The Deployment of the Motor Car. The private motor vehicle significantly changed the way Melburnians travelled in the years following the 1940s (Lee, 2007). Using a car to travel from home to work became commonplace, and there was a significant increase in trips between homes and workplaces that were not in the city of Melbourne. The suburban railway had, and still has, the disadvantage of being a radial network, where its design is primarily to transport passengers from the suburbs to the city.
Thus, the increase in trips to other suburban areas for work and the convenience of the motor vehicle played a major role in the decline in patronage in the mid-to-late 20th century (Wardrop, 2022). Decrease in Patronage - Industrial Action. The mid-20th century saw a significant increase in the influence of union movements in Victoria. Industrial action taken by the union movements on the Melbourne Suburban Railway did much to destroy confidence in the transport system (Lee, 2007). Concurrent to this, the state of the railway network had severely declined, with overcrowded trains and poor infrastructure causing frequent disruption to service and passenger discomfort. Several instances of industrial action taken by the union movement over pay and working conditions caused frequent disruption to the system. Such strikes were among many which took place on the railway network in the 1940s and 1950s and are attributed to the 20% decrease in annual ridership between 1949 and 1951 (Lee, 2007). Industrial disputes continued well into the 1970s. Consistent disruptions to service caused a lack of faith amongst Melburnians in the network and further encouraged the use of motor cars. Saving the Network - New Trains & The City Loop. By 1978, 88% of trips in Melbourne were made by car and annual train patronage had dropped to 94.3 million in that year. By 1983, the network was operating at a loss of $286 million per year. With the intention of stopping the decline in rail patronage, the Victorian Government ordered the production of 300 Comeng (Commonwealth Engineering) railcars, which were to be locally manufactured at Dandenong (Lee, 2007). These trains would have automatic doors, be air-conditioned and use improved materials, with the aim of reducing operating and maintenance costs over their lifecycle.
These cars arrived on the network in 1981 and proved a success, providing a more comfortable and reliable journey for customers. As a result, 270 more railcars were ordered in 1985 (Lee, 2007). Another initiative undertaken by the Victorian Government was the construction of the City Loop tunnels. Until the 1980s, the city of Melbourne was served only by two major stations, at Flinders St. and Spencer St. (now Southern Cross). From these two stations, it was common for passengers to switch to a tram or bus, or to walk to their place of work. The Melbourne Underground Rail Loop Act was passed in January 1971 and saw the creation of the Melbourne Underground Rail Loop Authority (Lee, 2007; Mees, 2008). The authority oversaw the construction of four tunnels running around the city, with new sub-surface stations at Flagstaff, Melbourne Central and Parliament. The loop system opened in January 1981, providing increased capacity across all suburban lines and new station locations in the city, and reducing pressure on Flinders and Spencer St. stations (Mees, 2008). The new trains and stations played a significant role in the recovery of patronage leading into the 21st century. Saving the Network - Additional Policies. Several policies were created between the 1970s and 1990s to attempt to recover patronage levels on the network. Smoking was banned on all services in November 1976, and intermodal ticketing was trialled at the same time to make services more comfortable and ticketing simpler. The Transport Act in July 1983 brought the railway network, tram network and bus network under a new organisation nicknamed “The Met” in an additional effort to improve the reliability of all transport modes and simplify ticketing (Lee, 2007). Between 1983 and 1989, the suburban railway network, trams and buses shared the same tickets and liveries on their vehicles.
Electronic ticketing machines were introduced in 1996 to further simplify the ticketing process and make transactions faster, with the intent of attracting more patronage. Saving the Network – Privatisation of Melbourne’s Suburban Railway. Under Jeff Kennett’s Liberal Government, the Public Transport Reform Program began to privatise much of the railway’s operations. A significant change made by the Kennett government was the removal of guards from trains, which reduced the staff required for network operations in order to increase reliability and cut costs. Following a major industrial strike in 1997, the government privatised the suburban railway, splitting the operations across two companies, Hillside and Bayside Trains (Lee, 2007). The privatisation of the suburban railway was aimed at improving reliability and reducing the impacts of industrial action on the network. Various iterations of private operation of Melbourne’s railway network have existed since 1997. The network is still privatised as of 2024, being run under “Metro Trains Melbourne”, a joint venture between MTR, John Holland and UGL Rail (Metro Trains Melbourne, 2024; Victorian Auditor-General's Office, 2023). Quantitative Analysis. Quantitative analysis of annual patronage on the Melbourne Metro Trains network using a three-parameter logistic function has been undertaken using information from several sources, including data from Alex Wardrop’s “A Tale of Two Systems” and several reports released by Victorian Government agencies (Wardrop, 2022; Public Transport Victoria, 2013; Public Transport Victoria, 2014; Public Transport Victoria, 2012; Public Transport Patronage Trends in Australasian Cities, 2015). Using the three-parameter logistic function allows for the identification of growth, peak and decline periods and passenger projections.
The equation for the three-parameter logistic function is shown below: S(t) = S_max / (1 + e^(-b(t - t_i))), where S(t) is the predicted annual patronage for year t; S_max is the saturation level for patronage (in thousands); t is the chosen year; t_i is the inflection time; and b is an estimated coefficient. An appreciation for the fluctuating nature of passenger patronage on Melbourne’s suburban railway network since 1900 prompted the creation of two models: one covering the phase of the network’s electrification (1900-1930) and a second covering annual patronage from 1900 to 2022. As will be seen, using data over a 30-year period from 1900, during the electrification phase of the railway, produces a logistic curve with a strong fit. However, using data over a 120-year period produces a very weakly fitting curve due to several events that have occurred over the life of the network. Model 1 (1900-1930). The model using data from 1900-1930 produces an r-squared value of 0.945, just under the acceptable threshold of 0.95 for the three-parameter logistic function. This means that 94.5% of the variation in the data is explained by the regression, and for this period the logistic curve is a strong fit. Model 2 (1900-2022). Creating a model using the three-parameter logistic function yields an r-squared value of 0.222, well below the generally accepted threshold of 0.95. This means that only 22.2% of the variation is explained by the regression. As can be seen in the model, annual ridership fluctuates between growth and decline over the 122-year period, with noticeable local peaks in 1949 and 2019. As explained previously, the electrification of the network resulted in substantial patronage growth in the 1920s, and upgrades to the network in the 1970s and 1980s also resulted in substantial patronage growth up until the global peak in 2019, prior to the COVID-19 pandemic. 
These fluctuations make it extremely difficult to fit a three-parameter logistic function, as reflected in the r-squared value, meaning that the curve is not a strong fit. Furthermore, this model is constrained by the following limitations and assumptions: data is only available from 1900 onwards (railway service began in 1854); data collection methods have changed between 1900 and 2022; and data is collected from ticketing and station entries and thus does not count fare evaders. Using model two to estimate the birthing, growth, maturity and decline periods of the Victorian suburban railway thus proves difficult. Using the data, the growth period can be estimated to be between 1900 and 1940, the maturity period between 1940 and 1970, and decline occurring post-1970. The period between 1996 and 2010 can potentially be classified as a second growth period.
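As a worked illustration of the method, the three-parameter logistic fit can be sketched in Python with `scipy.optimize.curve_fit`. This is a minimal sketch using synthetic patronage figures — the numbers below are placeholders, not the casebook's dataset:

```python
# Minimal sketch of fitting S(t) = S_max / (1 + e^(-b(t - t_i)))
# to annual patronage. The figures here are synthetic placeholders,
# not the Melbourne data used in the casebook.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, s_max, b, t_i):
    return s_max / (1.0 + np.exp(-b * (t - t_i)))

years = np.arange(1900, 1931, dtype=float)
rng = np.random.default_rng(0)
observed = logistic(years, 180_000, 0.35, 1915) + rng.normal(0, 2_000, years.size)

# p0 gives curve_fit a sensible starting point for the three parameters
(s_max, b, t_i), _ = curve_fit(logistic, years, observed,
                               p0=[observed.max() * 1.2, 0.1, years.mean()])

# r-squared: share of variance explained by the fitted curve
residuals = observed - logistic(years, s_max, b, t_i)
r_squared = 1 - np.sum(residuals**2) / np.sum((observed - observed.mean())**2)
```

With real patronage data, `years` and `observed` would be replaced by the recorded series, and `r_squared` corresponds to the goodness-of-fit values quoted for Models 1 and 2.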
Celestia/Tutorials/Globulars. This tutorial will describe how to add globular clusters to Celestia. Globular clusters are somewhat tricky objects to add to Celestia, because most of them must be simulated with bits of STC code. To add a globular cluster to Celestia, you need a .stc file. This can be created by taking any plain text file (codice_1) and renaming the file extension to codice_2. This file can be named anything as long as it has the codice_2 suffix. Then, it should be placed into the "extras" directory (i.e. a folder), or any folder within the "extras" directory. Now, after you open the .stc file, you need to define a globular cluster by writing some code in it with your favorite text editor. Information for many globular clusters, including those outside of the Milky Way, can be found in various places. Basic definition. The basic definition looks like this: Galaxy Name Globular "Globular Name" { RA <number> Dec <number> Distance <number> Radius <number> } With additional (optional) parameters, it looks like this: SMC Globular "Globular Name" { RA <number> Dec <number> Distance <number> Radius <number> CoreRadius <number> KingConcentration <number> } We'll go through the parameters one by one. Let's use the globular cluster NGC 1466 as an example. Also, at any point you can add a comment to your .stc code. A comment is started by a codice_4 sign and lasts until the next line break. It's a good idea to add comments explaining whether you calculated parameters yourself, or whether they are guesses. List of parameters. Name. Galaxy Name "Name" Here, codice_5 is just the name (or names) of the globular cluster. If a globular cluster has multiple names, separate them with colons (codice_6), like this: SMC "NGC 1466:PGC 2802621" It has other names too, so it could also look like this: SMC "NGC 1466:PGC 2802621:ESO 54SC16" RA "and" Dec. RA <number> Dec <number> These are the right ascension and declination of the globular cluster, i.e. the coordinates of the globular cluster on the sky. Note that in an STC file, codice_7 is in degrees, unlike a DSC file where it's in hours. 
Usually the RA will be in hours/minutes/seconds format, and the Dec will be in degrees/arcminutes/arcseconds format. To convert to Celestia's decimal format, use a tool like the RA DEC flexible converter. You can also tell SIMBAD to output decimal coordinates by going to the Output options page and selecting "decimal" from the drop-down menu next to "Coordinates". Distance. Distance <number> The distance to the globular cluster in light-years. CoreRadius. CoreRadius <number> The radius of the globular cluster's core. For NGC 1466, a value of 0.2 is used. AppMag "or" AbsMag. AppMag <number> or AbsMag <number> This is the apparent magnitude of the globular cluster (how bright it appears from Earth), "or" the absolute magnitude (how bright it would appear from a distance of 10 parsecs), without extinction (dimming caused by dust that is blocking light). The codice_8 in Celestia corresponds to the V-band magnitude in sources such as SIMBAD. <hr> The above parameters are all that's "required" to define a globular cluster. Here's the basic definition for NGC 1466: SMC "NGC 1466:PGC 2802621" { RA 0.4478528 Dec -71.5225806 Distance 2.12e+05 Radius 175 CoreRadius 0.2 KingConcentration 1.25 Axis [-0.7429 -0.2364 -0.6263] AppMag 11.705 } Now for the optional parameters... <hr> Axis. Axis [ <number> <number> <number> ] The axis around which the globular cluster is rotated. KingConcentration. KingConcentration <number> The King concentration describes how strongly the cluster's stars are concentrated toward its core relative to its outskirts. Radius. Radius <number> The radius of the globular cluster in light-years. Most globular clusters have a half-light radius of less than ten parsecs (pc), although some globular clusters, like NGC 2419, have very large radii. Angle. Angle <number> The angle by which the cluster is rotated about its axis, as seen from our point of view. Example code. For the sake of reference, here is what the final .stc code might look like for NGC 1466: SMC "NGC 1466:PGC 2802621" { RA 0.4478528 Dec -71.5225806 Distance 2.12e+05 Radius 175 CoreRadius 0.2 KingConcentration 1.25 Axis [-0.7429 -0.2364 -0.6263] AppMag 11.705 }
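As a supplement to the converter tools mentioned above, the hours/minutes/seconds to decimal-degrees conversion can be sketched in Python. The helper names are my own, and the NGC 1466 coordinates below are rounded values used only for illustration:

```python
# Convert RA given in hours/minutes/seconds and Dec given in
# degrees/arcminutes/arcseconds into the decimal degrees that
# STC files expect. NGC 1466's coordinates (roughly
# 00h 01m 47s, -71d 31m 21s) are used as a rounded example.
def ra_to_degrees(hours, minutes, seconds):
    # 24 hours of right ascension span 360 degrees, so 1 h = 15 deg
    return (hours + minutes / 60 + seconds / 3600) * 15

def dec_to_degrees(degrees, arcmin, arcsec):
    # carry the sign of the degrees component through the fraction
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + arcmin / 60 + arcsec / 3600)

ra = ra_to_degrees(0, 1, 47.3)    # close to the 0.4478528 used above
dec = dec_to_degrees(-71, 31, 21) # close to the -71.5225806 used above
```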
Transportation Deployment Casebook/2024/Henan Railway Network. 1 Introduction. The world's first railway was laid in the UK in 1825; railway technology later spread to China, where a railway network gradually began to form. A railway network includes interconnected railway trunk lines, branch lines, contact lines, stations, and hubs. It is built to meet the needs of passenger and freight transportation under certain historical conditions within a certain spatial scope. The essential feature of this technology is to improve passenger accessibility to more places through efficient railway systems. Henan Province is one of the vital transportation hubs in China, and its railway network has been developing rapidly. The biggest advantage of the Henan railway network is that it is located in the center of mainland China and has convenient transportation links. Its main market includes both passenger and freight transport, as passengers and goods from all over the country often transit here. This study focuses on the life-cycle of the Henan railway network from 1979 to 2022, covering the phases of birthing, growth, and maturity. 2 History. 2.1 Context. Before 1978, there were many problems with the Chinese railway network. For example, it was difficult for people to buy a railway ticket. Most people never had the chance to travel far from their hometown. At that time, people who had to travel long distances usually preferred coaches, because they were cheaper and more accessible. However, coaches were slow and uncomfortable, and easily broke down during a long-distance trip. With the introduction of the reform and opening-up policy, China started to attach more importance to the development of the railway network, and a series of policies was formulated to promote the improvement of the railway network in Henan Province. 2.2 Invention. Invention is always a long-term path. 
Generally, to complete a railway system, people had to work out the source of power, the routes, and how to manage different railway lines. In terms of power, rail wagons were pulled by horses at first. Later, steam engines were used to improve the efficiency of railways. Various shapes of track were also essential in the invention of the railway. As for the second shift, from the initial technology to the predominant one, electric engines were invented and displaced steam engines after 1871. Then in 1899, a high-speed railway was born in Prussia, which further promoted the development of railways. 3 Birthing Phase. 3.1 Early Market Development. In 1978, the main market of the Henan railway was freight. Because Henan is located in the plains of central China, it is not only suitable for railway construction but also convenient for transporting goods to various parts of China. For example, the Longhai Railway is one of the most famous freight lines in Henan, running through the eastern, central, and western parts of China. By 1997, Henan witnessed the birth of its first high-speed railway, which greatly improved the transportation function of Henan's railway network. Passenger transport gradually became another mainstream market, bringing new vitality to the development of the Henan railway network. Meanwhile, the freight market remained booming. The opening of the Zhengzhou-Europe International Railway has increased Henan's global reputation and opened up a broader market for its railway freight. Today, Henan's railway network development ranks among the best in China. 3.2 The Role of Policy. After the reform and opening up, China Railway organized a series of construction campaigns, and the scale and quality of the rail network continued to improve. 
In the 1980s, railway construction was accelerated by implementing the three ‘campaigns’ of ‘battle against Daqin in the north, attack Hengguang in the south, and capture East China in the middle’; in the 1990s, China implemented the ‘forced attack on Beijing-Kowloon and Lanxin, rapid attack on Baozhong and Houyue’. The government then focused on East China and Southwest China, completed the supporting facilities of Daqin, and concentrated its efforts on building several large-capacity trunk lines. Most of these plans passed through Henan Province. In the meantime, the government of Henan also carried out policies to renovate existing lines and increase speeds on major railways. Most of the policies of this period were innovative and greatly promoted the development of the Henan railway network. However, in the early years of these policies, too much energy was invested in railways and other industries at once; after 1989 this became hard to sustain, and it became difficult to buy railway tickets. Passenger volume therefore declined in the period that followed. 4 Growth Phase. From 1979 to 1989, driven by the above policies, the passenger volume of the Henan railway network increased significantly. In 1979 there were only 4,513,000 passengers who travelled to Henan or transferred there, but by 1988 the number of passengers had reached 6,141,000. Then for a few years the curve remained flat. In 1997, the first high-speed railway landed in Henan Province and its railway network began a period of rapid development. Entering the 21st century, China developed rapidly in all aspects: the social economy and people's living standards continued to grow. Therefore, more and more people had the time and money to travel, which further improved the utilization of the railway network. 5 Maturity Phase. From 2012 to 2019, the Henan railway network grew quickly, moving towards the maturity phase of its life-cycle. 
Unfortunately, when the COVID-19 epidemic broke out in 2020, the government introduced a strict lockdown policy. For the next three years, the development of the network was negative. This had a huge impact on a transportation mode that was about to enter its mature stage. However, as the lockdown policy unwound, the Henan railway network was expected to return to growth soon. On 20 June 2022, the Puyang-Zhengzhou section of the Jinan-Zhengzhou High-speed Railway, the Zhengzhou-Chongqing High-speed Railway, and the Zhengzhou Airport Station jointly opened for operation, marking the completion of the "Mi"-shaped high-speed rail network (which leads in eight directions) in Henan Province with Zhengzhou as the core. So far, the operating mileage of railways in Henan Province has reached 6,716 kilometers, of which the mileage of high-speed railways with a speed of 350 kilometers per hour has reached 1,924 kilometers, ranking first in the country. 6 Quantitative Analysis. This study collects the number of passengers on the Henan railway network from 1979 to 2022. All statistics are from the National Bureau of Statistics of China. The data are used to estimate a three-parameter logistic function as follows: S(t) = S_max / (1 + exp[-b(t - t_i)]), where S(t) is the status measure (passengers); t is time (in years); t_i is the inflection time (the year in which S_max/2 is achieved); S_max is the saturation status level; and b is a coefficient to be estimated. Then, a single-variable linear regression is used to estimate the coefficients b and c in a model of the form Y = bX + c, where Y = ln[Passengers / (S_max - Passengers)] and X = Year. Table 1 shows all data and Chart 1 shows the s-curve of this model. Both appear at the end of this case study. 
After many calculations, the results are as follows: S_max = 35000, b = 0.03489, r^2 = 0.61835, t_i = 2041.69. The life-cycle of the Henan railway network has passed through the birthing phase and into the growth phase; as the high-speed railway is still under development, the network will stay in its growth phase for years. According to the chart, the forecast is broadly accurate. Because Henan functions as a Chinese transportation hub, the number of passengers on its railway network has grown faster than forecast since 2003. However, from 2020 the figures dropped significantly on account of the COVID-19 pandemic. This year, thanks to the end of the lockdown policies, the number of passengers is expected to rise.
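The linearised estimation described in the quantitative analysis above can be sketched in Python. This is a minimal sketch with synthetic passenger figures generated from the logistic form itself (so the regression recovers the parameters exactly); in practice the Henan data series would be substituted for the synthetic one:

```python
# Sketch of the linearised estimation: Y = ln[P / (S_max - P)]
# is regressed on the year X to recover slope b and intercept c,
# from which the inflection year is t_i = -c / b.
# The passenger figures are synthetic placeholders, not the Henan data.
import math

s_max = 35000
years = list(range(1979, 2023))
b_true, t_i_true = 0.035, 2041.7
passengers = [s_max / (1 + math.exp(-b_true * (t - t_i_true))) for t in years]

# ordinary least squares by hand on Y = bX + c
ys = [math.log(p / (s_max - p)) for p in passengers]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, ys)) / \
    sum((x - mean_x) ** 2 for x in years)
c = mean_y - b * mean_x
t_i = -c / b  # the year at which half of S_max is reached
```

Because Y is an exact linear function of the year for a perfect logistic series, the regression returns the generating parameters; with real (noisy) data the same code yields the fitted b and t_i reported in the study.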
Infrastructure Past, Present, and Future Casebook/Metro Flood Diversion (MN/ND). This page is a case study on the Metro Flood Diversion Project in Fargo, North Dakota and Moorhead, Minnesota, created by Abigail Dodson, Ameera Ali, and Shareef Ibrahim. It is part of the GOVT 490-003 (Synthesis Seminar for Policy & Government) / CEIE 499-002 (Special Topics in Civil Engineering) class offered at George Mason University taught by Jonathan Gifford. Summary. The 1997 Red River Flood in the Fargo-Moorhead metropolitan area caused extensive damage, displacing thousands and damaging schools, businesses, government offices, and homes. In response, the Metro Flood Diversion Project was initiated as a pivotal infrastructure initiative to mitigate flooding risks in the region, spanning from Fargo, North Dakota, to Moorhead, Minnesota. Its primary goal was to address the historical flooding issues caused by the overflow of the Red River during snow melt or heavy rainfall. The Metro Flood Diversion Project comprises essential components, including a diversion channel, control structures, levees, embankments, environmental considerations, and community engagement and funding. A central feature of the project is the diversion channel, designed to redirect excess water from the Red River during flood events, thus bypassing the metropolitan area. This project represents a collaborative effort between two states to protect the Fargo-Moorhead region from the devastating effects of flooding and enhance the resilience of local communities against future weather catastrophes. This project marks the first Public-Private Partnership (P3) venture undertaken by the U.S. Army Corps of Engineers, representing a nationwide initiative for the split-delivery model. It is North America's leading P3 water management endeavor, setting a precedent for collaborative infrastructure development. 
The project also represents a groundbreaking green finance initiative for climate change adaptation in the United States, exemplifying innovative approaches to environmental resilience. Timeline of Events. 1997: Major flooding occurred after Blizzard Hannah. The Red River rose to a high of 44.83 feet and caused over $6.4 billion of damage (after inflation) to Grand Forks, North Dakota and East Grand Forks, Minnesota. 1998 - 2007: The Fargo and Moorhead communities recover from the effects of the Red River Flood. The local governments use buy-out programs to manage properties in high flood areas. Businesses like the Alerus Center and the Ralph Engelstad Arena reopen. A commercial office complex center is built. 2008: The U.S. Army Corps of Engineers conducts a feasibility study for a flood diversion project. 2009: The Red River floods. 2010: The U.S. Army Corps of Engineers publishes the Draft Feasibility Report and Environmental Impact Statement. Two diversion projects were proposed from this study: one in North Dakota and one in Minnesota. 2011: The Red River floods. 2014: The Metro Flood Diversion Project was authorized for construction by the Water Resources Reform and Development Act. 2016: The Metro Flood Diversion Authority is formed and a federal Project Partnership Agreement is signed. 2022: Construction of the Metro Flood Diversion Project begins. 2027: Expected completion of the Project. Key Actors, Institutions, and Agreements. U.S. Army Corps of Engineers. The U.S. Army Corps of Engineers spearheads a unique Public-Private Partnership (P3) model, collaborating with the private sector to efficiently address the Midwest's flood risks. This partnership brings expertise and funding and ensures the construction of levees, flood walls, and essential infrastructure. Public Private Partnership. A Public-Private Partnership (PPP) is a collaborative effort between government entities and private sector organizations to develop, finance, and manage public infrastructure or services. 
In the metro flood diversion project context, PPPs played a significant role in leveraging private sector expertise, resources, and innovation to address the region's flood risks effectively. By partnering with private entities, the project gained access to additional funding sources, accelerated project timelines, and enhanced technical capabilities. PPPs also facilitated sharing of risks and responsibilities between the public and private sectors, ensuring greater accountability and efficiency in project delivery. Moreover, PPPs enabled the metro flood diversion project to harness innovative financing mechanisms and management strategies, ultimately enhancing the project's resilience and sustainability in mitigating flood hazards for the Fargo-Moorhead community. Metro Flood Diversion Authority (MFDA). The Metro Flood Diversion Authority (MFDA), formed in 2016, selected the Red River Valley Alliance to design, build, finance, operate, and maintain a 30-mile-long diversion channel crucial for managing floodwaters around the two cities. The MFDA is tasked with ensuring safe and timely construction, land acquisition, and environmental permit compliance, while the Corps oversees the Southern Embankment and Associated Infrastructure construction. Red River Valley Alliance. The Red River Valley Alliance includes members with responsibilities associated with the Diversion Project. Guarantors. Responsible for the project's design, construction, financing, operation, and maintenance for 30 years post-construction: Joint Powers Agreement. The Joint Powers Agreement (JPA) in the metro flood diversion project allows North Dakota and Minnesota to work together in the Fargo-Moorhead area to reach a common goal: reduce flooding and protect communities. This agreement helps both states work together smoothly by setting clear rules for decision-making and resource sharing. It also simplifies handling legal and regulatory matters across state lines, saving time and effort. 
By working together, North Dakota and Minnesota can address shared challenges more effectively, ensuring better community protection against flooding. The Joint Powers Agreement further solidifies stakeholder collaboration, aiming to deliver permanent, reliable flood protection by 2027. Metro Flood Diversion Project Details. The Metro Flood Diversion Project has two main parts: a 30-mile-long channel to redirect surplus storm water flow around the metropolitan area and temporarily store it on vacant land (acquired by flowage easements) and the in-town levee project to modify 13 levees and 27 storm water lift stations. In addition, a 20-mile-long embankment, 19 highway bridges, three railroad bridges, three gated structures, and two aqueduct structures are being constructed. Stormwater Diversion Channel. The Red River Valley Alliance is responsible for delivering the stormwater diversion channel for this project. The channel is 30 miles long and includes diversion outlets and aqueducts along the Sheyenne and Maple Rivers. In addition, there will be fourteen drainage inlets, three railroad crossings, two pairs of interstate crossings, and twelve county road crossings. Southern Embankment. The U.S. Army Corps of Engineers is responsible for delivering the Southern Embankment portion of this project. The Southern Embankment is 22 miles long and has three gated control structures: the Diversion Inlet Structure, the Wild Rice River Structure, and the Red River Structure. Each structure will have a radial arm that allows gates to raise and lower for water control during the project. This portion of the Metro Flood Diversion Project also includes the I-29 Bridge crossing, 4-mile grade raise, and county and township road crossings. Mitigation Projects. The Metro Flood Diversion Authority and the U.S. Army Corps of Engineers will collaborate with local governments to accomplish various mitigation projects, with the USACE not directly involved in their delivery. 
The upstream mitigation efforts will focus on safeguarding or relocating property structures and cemeteries, which will be acquired through flowage easements facilitated by the MFDA. Additionally, levees will be constructed in Oxbow-Hickson-Bakke and Christine, North Dakota, and Wolverton, Minnesota. Drain 27 in Oxbow, North Dakota, will be enhanced to protect a wetland area, while stream restoration will be carried out on the Lower Otter Trail River. Cost, Financing Sources, and Funding. The Metro Flood Diversion Project incurred a total expenditure of $3.2 billion, with allocations as follows: $989 million for the construction of the channel under the Public-Private Partnership, $703 million allocated to the southern embankment project overseen by the U.S. Army Corps of Engineers, $502 million designated for Lands and Impacted Property Mitigation, $266 million directed towards in-town levees, $250 million attributed to non-construction costs, and $44 million set for other mitigation construction endeavors. Various avenues of financing were utilized to support this extensive project. Notably, $273 million in Private Activity Bonds, including Tax-exempt Green Bonds, were secured through institutions such as Morgan Stanley, Citigroup, Mikko Securities Indonesia, and Sumitomo Mitsui Banking Corporation. MetLife Investment Management contributed $197 million through Private Placement funds, while the Water Infrastructure Finance Innovation Act loans facilitated $643 million in Revolving Credit. The Infrastructure Investment and Jobs Act provided an additional $437 million. Furthermore, communities demonstrated their commitment by voting to augment long-term sales taxes, ensuring sustained funding through a multi-generational Public-Private Partnership (P3) payment structure. Funding for the project came from federal, state, and local P3 channels. 
A substantial portion, amounting to $1 billion, was procured through voter-approved sales taxes extended until 2084, adhering to the P3 model. The State of Minnesota contributed $86 million, while the State of North Dakota allocated $870 million. At the federal level, a Project Partnership Agreement (PPA) signed in 2016 secured $750 million towards the endeavor. Technical Issues. Land Acquisition. Land acquisition is critical to securing the space required to implement flood mitigation measures such as levees, channels, and infrastructure. The process consists of acquiring pieces of property from owners within the project area through negotiation, purchase agreements, and, when necessary, eminent domain. Land acquisition is undertaken to ensure adequate space for construction of flood protection and water flow management structures, which reduce flood risks and safeguard communities in the Fargo-Moorhead region. The MFDA uses the following steps to acquire land for the project: Flowage Easements. Landowners receive payment from the MFDA for the right to periodically store floodwater on the landowner's property. The entities associated with the Metro Flood Diversion Authority (MFDA) will be responsible for acquiring flowage easements. The Cass County Joint Water Resource District (CCJWRD) will oversee the acquisition process in North Dakota. In Minnesota, the acquisition of easements will be managed by either the City of Moorhead, Clay County, or the Moorhead-Clay County Joint Powers Authority (MCCJPA). Property Acquisitions. Project engineers identify land parcels in the Fargo-Moorhead area that the project will impact, and appraisals are scheduled for each parcel based on construction timelines. Appraisals are reported to the Cass County Joint Water Resource District (CCJWRD) in North Dakota or the Moorhead-Clay County Joint Powers Authority (MCCJPA) in Minnesota. 
Based on the appraisal, an offer to purchase is made and serves as a basis for negotiations between land agents and landowners. After an agreement, a closing date is set where updated property rights are established, and the landowner is paid. Crop Damage Programs. The MFDA offers comprehensive crop coverage insurance, ensuring that landowners will be protected in the event of flooding caused by the MFDA's activities. Under this coverage, landowners can claim compensation equivalent to the producer's yield multiplied by the crop insurance price. It is important to note that if a landowner receives payment from a federal crop insurance policy for a crop loss claim, it will be accounted for before the MFDA's supplemental crop loss program provides additional compensation. Even if federal crop insurance ceases, the MFDA remains bound by its obligations outlined in the Settlement Agreement, guaranteeing continued support to affected landowners. Environmental Considerations. The Metro Flood Diversion Project is supported by environmental policies that minimize its ecological footprint and ensure sustainable development. These policies contain a variety of measures designed to protect and enhance biodiversity, water quality, and natural habitats within the project area. Environmental impact assessments evaluate potential effects on wildlife, wetlands, and other sensitive ecosystems, guiding the project's design and implementation to mitigate adverse impacts. Additionally, the project incorporates green infrastructure elements such as vegetative buffers, wetland and stream restoration, and erosion control measures to protect the floodplains and improve overall environmental resilience. The project adheres to strict regulatory requirements and mitigation protocols, ensuring compliance with federal, state, and local environmental laws and standards. 
By prioritizing environmental stewardship and conservation, the Metro Flood Diversion Project aims to balance flood risk reduction and ecological sustainability, safeguarding human communities and the natural environment for future generations. Narrative of the Case. Following the 1997 Red River Flood in the Fargo-Moorhead region, it became evident that decisive action was necessary to mitigate the threat of future flood events. As data analysts and scientists linked the floods to global climate change, it underscored the potential recurrence of similar catastrophic events. While addressing atmospheric impacts to curb climate change remains a priority, the States of North Dakota and Minnesota face limitations in altering natural processes such as snowfall, melting, and subsequent flooding during the spring thaw. The topography of the region also poses challenges, limiting extensive modifications. In response to these constraints, the Metro Flood Diversion Project emerges as a proactive solution to effectively manage flood risks and protect the Fargo-Moorhead area from recurring threats. The necessity of the Metro Flood Diversion Project had widespread community support, notably as residents of the Fargo-Moorhead area voted to increase taxes until 2084 to contribute to its funding. This endorsement reflected the community's recognition of the project's critical importance in mitigating the devastating impacts of events like the 1997 flood, which imposed significant financial losses on the region and increased insurance rates. This grassroots initiative propelled the issue to higher levels of government, fostering collaboration between local, state, and federal entities. The U.S. Army Corps of Engineers played a vital role at the state and federal levels, leveraging its expertise and resources to support the project's development and implementation. 
Through this multi-tiered collaboration, the Metro Flood Diversion Project emerged as a unified response to the pressing need for comprehensive flood protection, underscoring the effectiveness of intergovernmental cooperation in addressing complex regional challenges. Lessons Learned / Key Takeaways. The 1997 Red River flood emphasized the significance of infrastructure resilience and the collaboration between state and federal government entities, local governments, and private engineering firms. Rainfall patterns in the Midwest continued to increase after 1997; flooding in the Midwest was inevitable due to the flat terrain and the northward flow to Canada. In addition, proper satellite data and rain gauges needed to be in place to predict and monitor rainfall intensity and distribution accurately. This combination of factors led to community stakeholders and the government utilizing various flood mitigation techniques to mitigate the effects of future flood events as best as possible. Project progress was initially slow but gained momentum after the establishment of the Metro Flood Diversion Authority (MFDA). The MFDA brought together representatives from Fargo, Moorhead, and surrounding municipalities to coordinate efforts for flood mitigation and resilience. This unified approach ensured that diverse stakeholders worked together towards common goals, streamlining decision-making processes and enhancing overall project effectiveness. By establishing the MFDA, stakeholders from different political and administrative entities could align their efforts, resources, and priorities to address shared flood risks comprehensively. Discussion Questions. 1.) How does the metro flood diversion project reflect broader trends in urban planning and infrastructure development, particularly in the context of climate change adaptation and mitigation strategies? 2.) 
What is the role of government agencies and stakeholders in planning and executing the metro flood diversion project? 3.) What are the potential controversies surrounding the metro flood diversion project? Discuss concerns about property rights, environmental justice, and alternative flood mitigation approaches. 4.) What are the potential ethical concerns related to the metro flood diversion project, and why are they of concern? Are there competing societal interests in resource allocation and decision-making processes? 5.) North Dakota and Minnesota have been able to work together on this project to reach a common goal. Do you think Maryland and Virginia could ever do the same pertaining to bridges connecting the two states over the Potomac River?
AQA A-Level Physics/Particles and Anti-particles/Conserving and illustrating interactions. Fundamental rules of particle conservation. Four things must always be conserved in a particle interaction: charge, baryon number, lepton number and mass/energy. The first three are the ones which will most commonly come up in exam questions or be expected when asked to determine if an interaction could occur. There is also one quantity which is almost always conserved: strangeness. Conservation of Charge, Baryon Number & Lepton Number. Conservation of these quantities is checked using the values given in the formula book. For an interaction to occur, charge must be completely equal on both sides of the 'equation'. It is important to note that the data given in the formula booklet regards quarks, so you must have knowledge of how quarks come together to form hadrons to answer these questions. Conservation of Mass/Energy. These conservations are mainly studied in the mechanics section; however, it is important to know that they will also come up in the 'Radiation' subtopic in particles. In the context of Particles and Radiation, the mass of a particle is equal to its energy at rest. This, and its applications, will come up in the Radiation subtopic. Conservation of strangeness. Strangeness must be conserved in all interactions, except for weak interactions. A weak interaction can be noticed in a few ways: weak interactions can change the flavour of quarks and leptons, while other interactions cannot. In weak interactions, strangeness can vary by 1 either positively or negatively. So, for example, if there is a strangeness of 1 on one side, in a weak interaction this can become either 0 or 2. It can also stay the same. Illustrating interactions. Feynman Diagrams. to be finished In the Exam. To be finished
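The bookkeeping described above can be automated. The following is a minimal sketch (not part of the AQA specification or formula booklet) that totals charge, baryon number and lepton number on each side of an interaction; the particle table and function names are hypothetical helpers, with the property values being standard ones.

```python
# Hypothetical helper for checking conservation laws; the particle
# table lists charge (Q), baryon number (B) and lepton number (L).
PARTICLES = {
    "n":         {"Q": 0,  "B": 1, "L": 0},   # neutron
    "p":         {"Q": 1,  "B": 1, "L": 0},   # proton
    "e-":        {"Q": -1, "B": 0, "L": 1},   # electron
    "anti-nu_e": {"Q": 0,  "B": 0, "L": -1},  # electron antineutrino
}

def is_conserved(lhs, rhs):
    """Return True if Q, B and L each total the same on both sides."""
    for prop in ("Q", "B", "L"):
        if sum(PARTICLES[p][prop] for p in lhs) != sum(PARTICLES[p][prop] for p in rhs):
            return False
    return True

# Beta-minus decay: n -> p + e- + anti-nu_e (allowed)
print(is_conserved(["n"], ["p", "e-", "anti-nu_e"]))  # True
# n -> p + e- fails: lepton number is 0 on the left but 1 on the right
print(is_conserved(["n"], ["p", "e-"]))  # False
```

The same table-driven check extends naturally to strangeness, with the caveat discussed above that weak interactions may change it by one.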
Transportation Deployment Casebook/2024/Sydney On-Demand Bus Network. Introduction. What is an on-demand bus? On-demand bus services are a form of public transport which do not run on dedicated routes; rather, they pick the passenger up from a selected location (either home or another convenient location) and take them to their destination. The objective of these on-demand services is to help facilitate local trips (<5 kilometres) and connect to larger transportation hubs where passengers can extend their journeys. As such, on-demand services can be thought of as a form of 'micro'-transit providing a similar utility to that of a taxi. Building Blocks. There are many aspects of different technology which have influenced and led to the implementation of on-demand public transport. The most significant influential technologies which have contributed to on-demand buses include: History. Sydney Bus Network. Sydney's bus network has steadily developed throughout Sydney's history. Through this historic period, what constitutes a 'bus' has changed and developed over time. The following phases have been observed for the technology: On-demand (2017 - current) . On-demand bus services are relatively new to Sydney with pilot programs beginning as early as November 2017. To date, there have been a total of 12 on-demand bus programs, with five still active as of 2023. Most of these services have been concentrated in 'suburban' locations, localities such as The Ponds, Edmondson Park, Carlingford, and the Northern Beaches. As on-demand services' primary role is to provide micro-mobility, giving the general public a public transport option for local trips, the technology has not seen much success in its implementation in more metropolitan areas. The Inner West is an exception to this, as the pilot program has been running since 2018 with continued patronage.
This is partly due to the Inner West being a large Local Government Area with much of it not serviced by either heavy or light rail and a higher portion of residents not owning cars, thus providing the potential for an on-demand service to capture a portion of the market. On-Demand Bus Service Lifecycle. Transport systems, like all other technological inventions, can be categorised within a lifecycle stage. The three key stages within a transport technology's life cycle are: It is to be noted that while the above is seen as the general rule across technologies, it does not apply to all technologies. Certain technologies do not progress past the birthing phase as they are never adopted, while some technologies fluctuate between the growth and maturing phases, going through multiple iterations of the lifecycle. In the context of transportation technologies, the lifecycle stage can be classified via the assessment of various factors including number of users, extent of the transport network and accessibility via that transportation technology. Data Collection. The assessment of Sydney's on-demand bus network has utilised transit patronage data spanning a period between January 2018 and the end of 2023. The data has been provided by Transport for New South Wales (TfNSW), which details the number of passengers who boarded/alighted the bus service, excluding cancellations and no-shows. Patronage data covers all reportable services to date. Quantitative Assessment. Recorded patronage data was used to make a quantitative assessment of the life cycle of the technology via the application of a logistic function: formula_1 The purpose of this function is to create an S-curve to represent the three key phases within a transport technology's lifecycle, where: Estimating S-max.
While the technology of buses is well established with networks operating throughout Sydney's modern history, the introduction of on-demand buses is relatively new, with the first officially recorded services appearing around 2017. As a result of the technology being in its infancy, in order to assess its lifecycle, a number of educated assumptions need to be made in order to capture an approximate maximum market share (S-max) for the technology. To calculate S-max, the following assumptions have been made: A base trip rate has been calculated for the different urban environments. The trip rate has been calculated by using patronage data and the existing population (Trip Rate = service patronage / population of service area) The trip rate assumes an uptick in demand as people move from regular bus services and taxi/car hailing services: The 2036 population has been taken as the future population scenario as it is expected that by 2036 every LGA will have an on-demand service running. 2036 populations are based on future projections data taken from the Department of Planning and Environment's (DPE's) Travel Zone Projections 2022. On-demand patronage for each urban environment category has been calculated as follows: On-demand Patronage = Trip rate * 2036 population Results. Based on the quantitative methodology, the following forecasted values were produced: Assessment Critique. As the R-squared value equalled 0.283, the model is not considered a statistically accurate representation. The low R-squared value is a result of: Furthermore, another issue regarding the model was the estimation of S-max. While the methodology was built on educated assumptions and projections, the market size for on-demand buses is difficult to estimate.
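The logistic function used in the quantitative assessment above (formula_1) can be sketched in a few lines. The parameters below are made-up illustrative values, not the TfNSW-derived estimates from this case study.

```python
# A minimal sketch of the lifecycle S-curve:
#   S(t) = S_max / (1 + exp(-b * (t - t_i)))
# where S_max is the saturation level, b the growth coefficient,
# and t_i the inflection year (at which S = S_max / 2).
import math

def logistic(t, s_max, b, t_i):
    """Predicted patronage in year t under the logistic model."""
    return s_max / (1 + math.exp(-b * (t - t_i)))

# Hypothetical parameters: saturation of 1 million annual trips,
# growth coefficient 0.5, inflection in 2025.
for year in (2018, 2025, 2032):
    print(year, round(logistic(year, 1_000_000, 0.5, 2025)))
```

At the inflection year the curve passes through exactly half of S-max, which is what makes t_i a convenient marker for the transition from the growth phase to the maturing phase.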
In the future the following could be done for a better estimation of S-max: It should be noted that as the technology is in its infancy (early life stage of a technology), it's expected that there will be a lot of unknowns surrounding the technology and its development. References. "On-Demand Patronage - Dataset". "TfNSW". Retrieved 2024-03-06. "Population Projections". "TfNSW". Retrieved 2024-03-06. "Long-term trends in urban public transport". "BITRE". Retrieved 2024-03-07. "World Population Prospects". "United Nations". Retrieved 2024-03-07. Ashton, Paul, (December 2008). "Suburban Sydney". Sydney Journal. Retrieved 2024-03-07.
Sewing/Hand sewing. For big projects such as sheets or clothing, sewing the fabric is done with a sewing machine, but for smaller or more precise projects, and parts that go on bigger projects, hand sewing is more practical, and finer details can be done. Examples of situations where hand sewing is a better option are mending and closing seams, doing decorative stitches, and working with very thin or thick fabric that a sewing machine needle can't go through. Common utensils for hand sewing: Additional utensils A new hand sewing project begins with the maker deciding what they want to make, using the materials they have. Common items sewn by hand are small bags, coasters, and decorative bows. Patterns are commonly used in hand sewing. A pattern is like a 2D net that, when sewn together at the right places, turns 3D and can be used. Patterns have markings to show where to cut, where to sew, where to fold, and where to pin the fabric together. Patterns can be found online; some are free and others are paid. Be careful when buying the paid ones online because they could be scams. Patterns can be downloaded and printed onto paper, cut out and placed onto fabric, and traced. Now you have the pattern on fabric.
Isometric Pixel Art/Other Shapes. Now that you've shaded a cube, you can make other shapes (cones, rectangles, spheres) and shade them too. Cones Start the cone by drawing a flat-ish oval shape on the ground. Give the oval a center of one pixel, and use the center to make the point of the cone. Connect the outermost sides of the oval to the point, and there's your cone. To shade it, use your four colors and the sun's position, adding spots where the light is brightest. Rectangles Similar to cubes, just lengthen one side and shade it according to the sun's position. In the example, I added two colors on the same face of the rectangle that is facing the side, to give some depth as to where the sun is. Spheres For this, I drew a circle, then the sun, and shaded it by using lighter colors for the area closer to the sun. The example is shaded in a circular shape to give off that 3D look, and show it's a sphere.
Isometric Pixel Art/Car. Now that you've gotten some more experience, you can make a car! In this case, it's a Land Rover, as it's the only car I could find that was boxy and was facing the right way. We'll start by drawing the cubes for the base of the car. Instead of being by 2s, I'll be using 3s, as it fits the slope of the car better.
Transportation Deployment Casebook/2024/High Speed Rail China. Introduction of High-Speed Rail China High Speed Rail (HSR) is a train network consisting of trains that travel at speeds of up to 350 km/h. The main difference between high-speed rail and conventional rail, besides the model of train, is that high-speed railway networks cannot have tight turns or significant changes in elevation [1]. In 2007 China introduced high-speed rail services and in 2008 China unveiled its first high-speed rail track, connecting Beijing to Tianjin. Initially, in 2008 there was only 1000 km worth of operational high-speed rail lines in China. Since 2008 China has rapidly expanded its high-speed rail network; it is reported that the network reached 45,000 km at the end of 2023, with more new lines to be added. Currently there are three types of high-speed rail trains in China: the C Class for intercity travel, the D Class, the second fastest type of train, and the G Class, the fastest train, used for long-distance journeys. Innovation of High-Speed Rail High-speed rail was first introduced in Japan in 1964 and was known as the Shinkansen, also commonly referred to as the bullet train. High-speed rail was different to conventional rail as high-speed trains could travel almost twice as fast as the conventional trains at the time. The main technological difference that separated high-speed rail from traditional rail was that high-speed rail had to run on specific tracks; the design of the tracks had fewer curves than typical tracks to reduce the chance of derailment. Another key difference was that the Shinkansen used specific gauges that were different from standard trains [2]. High-speed rail would bring together and integrate technology from rail and modern-day electrical systems to create an efficient and fast-running train that meets the demands of modern consumers.
China would take these initial blueprints from the original high-speed rail line in Japan and expand on them during the development of HSR in China. Specifically, for HSR in China there were four technological improvements that helped to further enhance the operating speed of the railway system. The four improvements relate to the railway line, control system, traction power supply and the EMU (electric multiple unit). As technology advanced and safety concerns arose, construction of track became a focus, specifically strength, stiffness, durability, settlement after construction and geometric dimensions [3]. The EMU is an important component of high-speed trains; high-speed trains in China refined it by altering components such as the aluminium body, high-speed bogie, traction converter and braking system. These alterations have allowed its models of trains to achieve operating speeds of 350 km/h. The control system regulates the speed of the train and its stops at stations; this process runs through an advanced control network. The traction power supply system is responsible for the safety and reliability of electrical processes. Effects of the introduction of High-Speed Rail in China The introduction of High-Speed Rail benefited China as it allowed for more efficient and faster travel across the country, with the main target market being Chinese people and tourists. In addition, the introduction of more lines has allowed greater accessibility to multiple cities in China, while also transforming the way people travel across China. Since the introduction of high-speed rail in China alongside its rapid expansion, it was found that there was a decrease of 28.7% in the number of passenger flights and a reduction of 31.8% in aviation passengers.
The environmental benefit of these reductions in air travel is a reduction in air traffic emissions; it was found that between 2012 and 2015 there was a net saving of 1.76 million tons of carbon dioxide [3]. In addition, HSR has also increased regional economic development and economic growth; this is because HSR has increased population mobility in China, meaning quicker and more frequent travel to different regions. Mode of Transport Before the Introduction of High Speed Rail in China Before the introduction of high-speed rail and high-speed rail lines in 2008, people would use traditional trains such as steam trains and electric trains to travel across China. In addition, cars and buses were also used and favoured for shorter trips, while commercial flights were favoured for longer trips. Although the railway network was extensive due to the early development of railways in the late 19th century, the main disadvantages of these trains were that they were uncomfortable, overcrowded, and slow, with travelling speeds of 49 km/h to 120 km/h in 1994. As more modern trains were introduced, faster services of 200 km/h were added to the network [4]. These additions to the network did not do much to address the existing disadvantages of the old network. Another form of travel around China was domestic commercial flights, also known as civil aviation. This mode was mainly used for longer journeys. The main issues with this mode of transport are that flights are usually slowed by security procedures and are often considered less reliable. Furthermore, another disadvantage of flights is that they have a lower frequency than High Speed Rail and ticket prices are higher than train tickets. In addition, cars and buses are another mode of transport. The main disadvantages of this mode of travel are that it is slower than high-speed rail due to speed limits and unpredictable traffic.
Specifically, buses are often considered uncomfortable and are more prone to reliability issues compared to HSR. Switching between multiple buses on longer routes is another major disadvantage. Since the introduction of high-speed rail in China, studies have shown that there has been a reduction in occupancy rates in buses where HSR lines exist and a reduction in domestic air travel of 27.9% [5]. This can be attributed to the evolution of the HSR network: as more routes have been added to the system, the Chinese consumer has been given more reason to choose HSR over the prior existing forms of transport available. Early Market Development The primary market during the development of High-Speed Rail in China was identified to be Chinese passengers, specifically road and airport users, due to increasing passenger demand, but also with a goal to impact passenger transportation patterns. Functional enhancements and functional discoveries of HSR with the introduction of additional new lines changed the transportation market. At the early stages of HSR being introduced in China, ridership was low, as people had not fully adopted HSR; existing modes such as cars, buses and planes were still favoured. In 2011, China made policy changes to the operation of the High-Speed Rail. These changes were made after the Wenzhou collision in July 2011, where two trains collided on a viaduct and derailed, killing 40 passengers and injuring 200. This incident impacted public confidence in high-speed rail, and in August the Chinese government acted by introducing a railway deceleration policy which lowered the operational speed of High-Speed Rail trains from 350 km/h to 300 km/h [6]. Furthermore, with public confidence at a low and lower-than-expected ridership due to high ticket prices, in August 2011 ticket prices were lowered by about 5% to help generate greater ridership. In 2012 the Chinese government reinvested in high-speed rail.
This investment increased the budget of spending, which led to the expansion of high-speed rail and reaffirmed the Chinese government's commitment to high-speed rail. The introduction of these new lines allowed for greater accessibility between cities in China; as new lines were introduced, public perception of HSR changed as it was cheaper than flying, with quick, reliable travel times and a high frequency of departures. As HSR ridership increased, a shift in the transportation market occurred as more people switched to using HSR. Research found that as more people switched to using HSR, other modes of transport were impacted, influencing the transport market. An example of this can be seen on the Ningbo-Hangzhou corridor, where occupancy rates of buses decreased by 25.89%, and between Beijing and Tianjin, where they decreased by 48.2% after the opening of the line. As HSR ridership numbers increased, in 2017 fares for certain HSR lines were raised by 10-50% [7]. Policies in the birthing phase Various policies were deployed and introduced by the Chinese government to achieve the goals set out for the High-Speed Rail network. The main policies implemented can be broken down into four main themes, which are technology development, financial support, HSR engineering projects and safety systems, with technology development and engineering projects as the high-priority focus for the Chinese government. During the early stages of development, since HSR is a technology-intensive industry, early policies focused on the design and introduction of technology. Policies such as the Provisional Regulations for Design of Beijing-Shanghai High-Speed Railway were issued in 2004; as technology advanced, the China High Speed Train Independent Innovation Joint Action Plan was launched in 2008 [8]. These policies would later help develop other standards and specifications such as the Code for Design of High-Speed Railway 2014.
Furthermore, early policies focused on reforming the railway market, such as the ownership separation of railway operations and railway infrastructure, proposed in 1998, with railway operation reforms imposed in 2003 [8]. At this stage the Chinese government had several financial funding policies, such as tax incentives and subsidies, to help stimulate the development of HSR. These policies would later help set policies relating to the improvement of investment in and financing of HSR. Regarding HSR engineering, many policies were implemented by the Chinese government revolving around the construction and operation of HSR projects. The early policies focused on railway infrastructure with a strong emphasis on network planning, bridges, and platform design [8]. Later, as the technology began to grow, the focus shifted towards HSR equipment. As more HSR projects were being undertaken, policies would later shift to project management. These policies would help to establish the importance of improving transport services within the High-Speed Rail network. The Chinese government closely monitors the safety system of High-Speed Rail. Throughout the development of HSR, safety was a primary focus for policies. At the start of development, HSR safety policies focused mainly on transportation safety and would later shift to emergency systems. These early policies would help to develop regulations such as the Regulation on the Administration of Railway Safety. These four key areas of policy are the building blocks that would help develop further policies and regulations to improve the high-speed rail network. By formulating policies related to investment, financing, taxation, and price, the government encourages participation in market activities. Current HSR policy-making is mainly market-oriented with a primary focus on the development of HSR. Most of China's success in HSR can be attributed to the policies supported by the Chinese government.
Mature Phase HSR in China began to enter its mature phase in 2019, as ridership data can be seen to have begun to decrease, following the same trend in subsequent years. This trend can be attributed to many factors, such as more competitive prices for commercial air travel competing with high-speed rail, but the main driving force was the COVID-19 pandemic, which saw a reduction in public transport usage due to growing health concerns and eventual lockdowns and travel restrictions, which further impacted ridership numbers. To counteract this and restimulate the HSR network, China has decided to further expand its network, widening the market; to compete with other modes, HSR operators have improved operating timetables and increased the capacity of trains. The HSR network is also developing new trains that can achieve speeds of 620 km/h [6] and further developing high-speed rail technology. Reinventing High Speed Rail in China HSR in China is efficient and reliable with a high frequency of service. Reinventing High Speed Rail, apart from the addition of more lines, would involve lowering ticket prices, as further research found that people from lower-income households do not use HSR as much as middle- to high-income households; improving accessibility for all socioeconomic classes would go far in positive change that meets the needs of consumers. In addition, the integration of cleaner and renewable sources of energy to power the network will greatly benefit China, given its current environmental problems. Quantitative Analysis of High-Speed Rail in China A quantitative analysis of High-Speed Rail in China was performed using ridership data from 2008 to 2021. The life cycle of the mode can be visualised and modelled using an S-curve fitted to annual ridership data, which can be used to determine the birthing, growth, and maturity phases.
The data used to generate the curve was from Statista and an estimate was made. S(t) = Smax/[1+exp(-b(t-ti))] Where: • S(t) is the status measure (annual ridership) • t is time (years) • ti is the inflection time (year in which 1/2 Smax is reached) • Smax is the saturation status level to be estimated • b is a coefficient to be estimated. Single-variable linear regression was used to find the coefficients for the estimation of passengers, specifically coefficients c and b from the equation below: Y = bX + c Different values of K (maximum saturation) were used to determine the maximum value of R^2 closest to 1.0. To achieve this, single-variable linear regression was run for each year with each different K value. Results and Discussion Table 1: Annual actual data compared to predicted data Figure 1: S-Curve of HSR China From the results shown in Figure 1, the maximum saturation is 2.5 billion passengers and the half saturation can be observed in 2016. The predicted ridership line in comparison to the actual ridership is fairly accurate in the initial stages, but as the mode continues to grow the actual and predicted passengers greatly differ, as the actual data is significantly greater than the predicted data. Hence the model is not completely accurate as it doesn't correctly predict the lifecycle. The reason for this could be that the expansion of additional lines was finished quicker or ahead of schedule, which expanded the market for HSR; this could be a reason why the ridership data differs from the predicted. From the graph it can be assumed that the birthing phase is from 2008-2010, as there are only very slight increases in ridership. The graph shown in Figure 1 also indicates significant increases in ridership in the period 2011-2018, which signals the growth phase.
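The linearisation step described above (fitting Y = bX + c for a trial saturation K) can be sketched as follows. The ridership figures below are illustrative placeholders, not the Statista data used in the chapter; here Y = ln(S/(K−S)), so the slope of the regression gives b and the inflection year is −c/b.

```python
# A minimal sketch of fitting the logistic model
#   S(t) = K / (1 + exp(-b(t - ti)))
# via its linearised form Y = ln(S / (K - S)) = b*t + c.
import math

def fit_logistic(years, riders, k):
    """Least-squares fit of Y = b*t + c for a given trial saturation K.
    Returns the growth coefficient b and the inflection year ti = -c/b."""
    ys = [math.log(s / (k - s)) for s in riders]
    n = len(years)
    t_mean = sum(years) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(years, ys)) \
        / sum((t - t_mean) ** 2 for t in years)
    c = y_mean - b * t_mean
    return b, -c / b

# Hypothetical annual ridership (millions) against a trial K = 2500:
years = [2010, 2011, 2012, 2013, 2014]
riders = [290, 440, 640, 870, 1100]
b, t_i = fit_logistic(years, riders, 2500)
print(round(b, 3), round(t_i, 1))
```

In the chapter's methodology this fit would be repeated for several trial values of K, keeping the one whose regression gives the R-squared closest to 1.0.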
From 2019 onwards can be considered the maturity phase, as ridership takes a drastic decline; however, this data does not completely accurately illustrate the life cycle of HSR in China, as external factors such as the COVID-19 pandemic have influenced ridership numbers. References 1. High Speed Rail Construction: How is it different? (2023) R&S Track, Inc. Available at: https://rstrackinc.com/high-speed-rail-track-construction/ (Accessed: 01 March 2024). 2. Elite (2023) High Speed Rail Construction: How is it different?, R&S Track, Inc. Available at: https://rstrackinc.com/high-speed-rail-track-construction/ (Accessed: 07 March 2024). 3. Lu, C. (2019) ‘A discussion on technologies for improving the operational speed of high-speed railway networks’, Transportation Safety and Environment, 1(1), pp. 22–36. doi:10.1093/tse/tdz003. 4. LHuillier, R.C. (2019) The evolution of China’s High Speed Rail Network, Welcome To China. Available at: https://welcometochina.com.au/the-evolution-of-high-speed-rail-in-china-7726.html (Accessed: 07 March 2024). 5. Chen, Z. (2017) ‘Impacts of high-speed rail on domestic air transportation in China’, Journal of Transport Geography, 62, pp. 184–196. doi:10.1016/j.jtrangeo.2017.04.002. 6. Jones, B. (2022) The evolution of China’s incredible high-speed rail network, CNN. Available at: https://edition.cnn.com/travel/article/china-high-speed-rail-cmd/index.html (Accessed: 07 March 2024). 7. (No date a) The development of high-speed rail in China. Available at: https://transition-china.org/wp-content/uploads/2022/06/20220621_HSR-Study-English-Final.pdf (Accessed: 07 March 2024). 8. Li, H. et al. (2021) ‘Policy Analysis for high-speed rail in China: Evolution, evaluation, and expectation’, Transport Policy, 106, pp. 37–53. doi:10.1016/j.tranpol.2021.03.019. 9. Zhang, W. (2024) China: Length of High Speed Rail Operation Network 2023, Statista.
Available at: https://www.statista.com/statistics/1120063/china-length-of-high-speed-rail-operation-network/ (Accessed: 07 March 2024). 10. Yuan, Z., Dong, C. and Ou, X. (2023) ‘The substitution effect of high-speed rail on civil aviation in China’, Energy, 263, p. 125913. doi:10.1016/j.energy.2022.125913. 11. Zhang, W. (2024b) China: Passenger transport volume of highspeed rail 2021, Statista. Available at: https://www.statista.com/statistics/1120071/china-passenger-transport-volume-of-highspeed-rail/ (Accessed: 01 March 2024).
Czech/Nouns/Case/Instrumental. =Instrumental Case in Czech (7th)= Instrumental case is used to mark the instrument, after some prepositions (such as "s" "with") or with some verbs.
IB/Group 4/Computer Science/Computer Organisation/The Information Layer. What is positional notation ? Numbers are written using positional notation. The rightmost digit represents its value multiplied by the base to the zeroth power. The digit to the left of that one represents its value multiplied by the base to the first power. The next digit represents its value multiplied by the base to the second power. The next digit represents its value multiplied by the base to the third power, and so on. One is usually familiar with positional notation even if one is not aware of it. One is instinctively inclined to utilise this method to calculate the number of ones in 943: A more formal way of defining positional notation is to say that the value is represented as a polynomial in the base of a number system. But what is a polynomial? A polynomial is a sum of two or more algebraic terms, each of which consists of a constant multiplied by one or more variables raised to a non-negative integral power. When defining positional notation, the variable is the base of the number system. Thus 943 is represented as a polynomial as follows, with formula_1 acting as the base: formula_2 To express this idea formally: if a number in the base-formula_3 number system has formula_4 digits, it is represented as follows, where formula_5 represents the digit in the formula_6th position in the number: formula_7 Look complicated? Take a concrete example of 63578 in base 10. Here formula_4 is 5 (the number has exactly 5 digits), and formula_3 is 10 (the base). The formula states that the fifth digit (the last digit on the left) is multiplied by the base to the fourth power; the fourth digit is multiplied by the base to the third power, and so on: formula_10 In the previous calculation, one assumed that the number base is 10. This is a logical assumption because the everyday number system "is" base 10. However, there is no reason why the number 943 could not represent a value in base 13.
In order to distinguish between these values, the general notation of formula_11 is utilised in order to represent a number formula_12 in base formula_3. Therefore, formula_14 is a number found in the base 10 system. In order to turn formula_15 into formula_16 one simply uses the previous calculation: Therefore, 943 in base 13 is equal to 1576 in base 10, or formula_17. One must keep in mind that these two numbers have an equivalent value. That is, both represent the same number of "things." If one bag contains formula_15 beans and a second bag contains formula_19 beans, then both bags contain the exact same number of beans. The number systems just allow one to represent the value in various ways. Note that in base 10, the rightmost digit is the "ones" position. In base 13, the rightmost digit is also the "ones" position. In fact, this is true for any base, because anything raised to the power of zero is one. Why would anyone want to represent values in base 13? It is not done very often, granted, but it is sometimes helpful to understand how it works. For example, a computing technique called hashing takes numbers and scrambles them, and one way to scramble numbers is to interpret them in a different base. Other bases, such as base 2 (binary), are particularly important in computer processing. It is also helpful to be familiar with number systems that are powers of 2, such as base 8 (octal), and base 16 (hexadecimal). Binary. What is binary ? The term binary refers to any encoding in which there are two possible values. In computer science these are the electronic values 0 and 1 (see figure below). Boolean values are binary. In computers we use binary instead of the decimal system as it offers a much simpler and more efficient way to perform calculations through electronics than a decimal system would. This can be explored in the following sections regarding binary gates. Interpretation of voltages to binary by the computer.
Given that we are in a system where the maximum voltage is 5 volts, the input signal on an electrical wire is interpreted as a logical 0 by a transistor if its voltage is between 0 volts and 0.8 volts. On the other hand, if the voltage is between 2 volts and 5 volts, then the transistor interprets the input signal as a logical 1 (right diagram). It is important to note that there is a non-usable band of voltage that corresponds to neither a logical 0 nor a logical 1. This non-usable band is actually useful: since voltage always fluctuates slightly, having this grey zone allows the two electrical signals (0 and 1) to be distinguished reliably. Similar voltage conversions from volts to binary digits also exist for output signals from transistors.¹ The base of a number system specifies the number of digits used in the system. The digits always begin with 0 and continue through one less than the base. For example, there are 2 digits in base 2: 0 and 1. There are 8 digits in base 8: 0 through 7. There are 10 digits in base 10: 0 through 9. The base also determines what the positions of digits mean. When one adds 1 to the largest digit in the number system, the result is a carry into the digit position to the left. What is a bit ? A bit is the smallest unit of data that a computer can process and store. It can store either a 1 or a 0 and can represent on/off, true/false, or yes/no. What is a byte ? A byte is a unit of data that consists of 8 bits. One byte can represent 2⁸ = 256 distinct values. How do I convert a decimal number to a binary number ? To convert a decimal number to a binary number, we again use our conversion table of powers of two. For example, let's convert the decimal number 56: Now we mark the powers of two that we used in our calculation as a binary 1 and the other powers as 0. We get 00111000. How do I convert a binary number to a decimal number ? To convert a binary number into decimal, we just need to sum all powers of 2 that have a 1 in the corresponding binary digit. 
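Both conversions just described can be sketched in Python. This is a short illustration of the method, not part of the original text; the function names are chosen here, and the 8-bit width matches the byte examples used in this section.

```python
def decimal_to_binary(n, width=8):
    """Repeatedly take out the largest power of 2, as in the table method above."""
    bits = ""
    for p in range(width - 1, -1, -1):
        if n >= 2 ** p:
            bits += "1"
            n -= 2 ** p
        else:
            bits += "0"
    return bits

def binary_to_decimal(bits):
    """Sum the powers of 2 wherever the binary digit is 1."""
    return sum(2 ** p for p, bit in enumerate(reversed(bits)) if bit == "1")

print(decimal_to_binary(56))          # 00111000
print(binary_to_decimal("00111000"))  # 56
```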
First off, we must count the number of digits in the binary number and correlate them with the powers of 2, starting from formula_20 up to formula_21. For each digit in the binary number we multiply the digit by the corresponding power of 2. For example: Take the binary number 1011. Following the previous rule of positional notation we obtain the following table: Now following the same logic, but just written differently, let us convert the following byte into decimal: 10011100 We only add the columns where the binary digit is equal to 1. Finally, we add each number up and this is the number converted from binary. So 10011100 = 128 + 16 + 8 + 4 = 156 How many possible numbers can I store in a byte ? A byte is 8 bits, which means that the number of values that can be stored in a byte of data is 2⁸ = 256. How many possible numbers can I store in an n-bit number ? Since there are n bits that can each take two possible values (1 or 0), we can store formula_22 possible numbers in an n-bit number. In a byte this means we can store 2⁸ = 256 different numbers. What is the biggest number I can get in an n-bit number ? Since there are n bits that can each take two possible values (1 or 0), the biggest number that we can get is 2ⁿ − 1. Indeed, the smallest number that we can get is 0, hence the −1 in the formula. In a byte this means the biggest number we can get is 2⁸ − 1 = 255, being 11111111, and the smallest number is 0, being 00000000. Exercises - Binary representation of numbers. 1. What is the decimal representation of 1001 ? 2³ + 2⁰ = 8 + 1 = 9 2. What is the decimal representation of 01000101 ? 2⁶ + 2² + 2⁰ = 64 + 4 + 1 = 69 3. What is the binary representation of 42 ? 42 - 32 = 10 with 32 = 2⁵ 10 - 8 = 2 with 8 = 2³ 2 - 2 = 0 with 2 = 2¹ Hence the binary representation: 00101010 4. What is the binary representation of 129 ? 129 - 128 = 1 with 128 = 2⁷ 1 - 1 = 0 with 1 = 2⁰ Hence the binary representation: 10000001 5. 
How many different integers can I represent in 2 bytes ? 2 bytes are 16 bits (2 × 8 bits), hence 2¹⁶ = 65536 different numbers can be represented in 2 bytes. 6. What is the biggest number I can represent in 2 bytes ? 2 bytes are 16 bits (2 × 8 bits), hence 2¹⁶ − 1 = 65535 is the biggest number that can be represented in 2 bytes. Hexadecimal. What is Hexadecimal ? How are digits represented in bases higher than 10? In bases higher than 10, the digits above 9 are represented as symbols. All digits are written in the following conversion table. Hexadecimal is a numbering system that uses 16 possible digits. Its primary attraction is its ability to represent very large numbers with fewer digits than in binary or in the decimal system. How do I convert a decimal number to a hexadecimal number ? The method is as follows: For example, if we want to convert the decimal number 2545 to hexadecimal: How do I convert a hexadecimal number to a decimal number ? This conversion is a bit simpler as we can use positional notation to convert our number. Indeed, we just need to multiply each hexadecimal digit by its corresponding power of 16. For example, if we want to convert the hexadecimal number ABC to a decimal number we multiply formula_23 by formula_24 as there are 12 ones in the number, formula_25 by formula_26 as there are 11 sixteens, and formula_27 by formula_28 as there are 10 two-hundred-fifty-sixes. How do I convert a binary number to a hexadecimal number? To convert a binary number to a hexadecimal number, start by making sure the length of your binary number is a multiple of 4. If not, add 0s to the front of the number until it is a multiple of 4. Next, split the binary number into groups of 4 digits. Then, for each group of 4 digits, turn the group into a single number by converting each digit in the 4-digit value to the corresponding power of 2: For instance, the binary value 1111 becomes 8421, while 0101 becomes 0401. Then, add up the digits. 
8421 becomes 8+4+2+1 = 15, while 0401 becomes 4+1 = 5. Finally, convert that value to hexadecimal. If the value is from 0 to 9, it stays that value in hexadecimal. If it is 10 or higher, it changes to the corresponding alphabet letter: 10 is A, 11 is B, et cetera. For example, if we want to convert the binary number 1011111011, we pad it to 001011111011, split it into the groups 0010 1111 1011, and convert each group: 2, 15 and 11, giving the hexadecimal number 2FB. Octal. What is Octal ? Not needed for the exams. Octal is a numbering system in base 8, meaning only digits from 0 to 7 exist in this notation. For example, one can calculate the decimal equivalent of 754 in octal (base 8) - simply put, finding formula_29. As before, it is a case of expanding the number in its polynomial form and adding up the numbers: Representing Text. What is ASCII? ASCII stands for American Standard Code for Information Interchange. It was a 7-bit code, later extended to 8 bits, that assigned a number to each letter and various other symbols, which allowed for the interchange of information between computers in a single country or using the same language. The total number of letters and symbols stored in ASCII was 128. Countries would substitute their own letters and symbols in place of the English ones. However, there was one major drawback: because ASCII only held symbols based on English letters, computers from around the world could not successfully interchange information and data with each other. This drawback led to the creation of a larger database of letters and symbols, Unicode, which allows all languages to store their characters. What is Unicode ? Though ASCII was the industry standard for a very long time, it simply did not have enough characters and symbols to cover the fast internationalisation of the world. The initial solution was to have a different organisation system per country; however, if a person in Japan were to send an email with Japanese characters to someone in the UK, it would be an incomprehensible mess. The solution was Unicode. 
Unicode builds on a 16- to 24-bit system instead of ASCII's 8-bit system and is able to cover 149,813 different characters, which is enough to cover every single character in every single language that exists. As of 2024, a total of 161 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. What is UTF-8 ? UTF-8 is a way of translating between Unicode characters and binary text. This is done by storing each Unicode character in the form of up to four one-byte binary strings. Each binary string is created by converting the code point of the Unicode character (letters and numbers that form an index) into a set of binary strings, where each string represents one character in the code point. For example, the character "A" is represented as "U+0041" (where "41" is the code point), which is encoded as "01000001" (41 in hexadecimal is 65 in decimal). How can I represent characters from different languages in a text? The best way to represent characters from different languages is to use Unicode. ASCII is inconvenient for different languages as it was designed specifically for English, and does not contain any non-English characters. Meanwhile, Unicode has Chinese, Japanese, and Korean characters, the Cyrillic and Arabic alphabets, as well as symbols from a wide variety of other languages. Representing Floating point numbers. How are floats represented/stored ? Section to go more in-depth on the content. Floats are typically represented in 4 bytes. Out of these 32 bits: 1 bit is used for the sign of the number, 8 bits are used for the exponent, and 23 bits are used for the fractional part of the number. 
It is a way of storing numbers in scientific notation. Putting the values into mathematical terms, it is calculated by ±A*formula_30. The sign is represented by the blue area, A is given by the red area and B is given by the green area. It is interesting to note that if we want to represent both negative and positive numbers (whole numbers or decimals) we need to use one bit for storing the sign. If we only store positive numbers we do not need that sign bit. Representing sound. How is sound represented/stored ? After sound has been recorded with a microphone, which converts sound waves into an electrical signal, computers use a technique called sampling. The computer samples the sound by taking measurements of the amplitude of the sound signal at regular intervals, often at 44.1 kHz (44,100 times per second), which are then saved as numbers in binary format. Representing images. How are colors represented/stored ? Colors we see on computer screens today are represented by binary numbers of 24 bits. All colors on a screen are mixed from the 3 primary colors of light: red, green, and blue. Depending on how much of each of the 3 colors we mix, we are able to achieve many different colors. Therefore, the computer only stores the 3 primary colors and how much of each is used in the mix. This system is called an RGB representation. The total of 24 bits is divided into 3 sets of bits for the 3 different primary colors: 8 bits for red, 8 bits for green, and 8 bits for blue. Let's first take the example of the color red. For the color red, the corresponding RGB value is: R - 255 (the most of this color we can use), G - 0 (none of this color), B - 0 (none of this color). Therefore, in 8 bits, the number 255 in base 10 can be transformed into base 2, which in this case is 11111111. Hence the color red in binary of 24 bits would be 111111110000000000000000. 
This case applies for both green (000000001111111100000000) and blue (000000000000000011111111). To summarize, here are the steps: - divide the 24 bits into 3 sets of 8 bits to represent the colors red, green, and blue - find the RGB representation of the color in base 10 - transform each of the RGB values from base 10 to base 2 - combine the 3 sets of 8 bits, and here we have the final binary representation of a color. Generally, 24 bits are sufficient to depict realistic (photo-realistic) colors. However, even more bits can be used, such as 32 bits or 48 bits, to illustrate even more realistic colors.
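The packing steps above can be sketched in Python. This is an illustration only; the function name `rgb_to_24bit` is chosen here, and `{value:08b}` formats a channel as an 8-bit binary string.

```python
def rgb_to_24bit(r, g, b):
    """Pack three 8-bit colour channels into one 24-bit binary string.

    Follows the layout used above: red in the first 8 bits, then green,
    then blue.
    """
    return f"{r:08b}{g:08b}{b:08b}"

print(rgb_to_24bit(255, 0, 0))  # pure red:   111111110000000000000000
print(rgb_to_24bit(0, 255, 0))  # pure green: 000000001111111100000000
print(rgb_to_24bit(0, 0, 255))  # pure blue:  000000000000000011111111
```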
Punjabi/Dictionary/ਤ. ਤੋਤਾ - Parrot ਤਲਿਆਂ - Fried ਤਰਖ਼ਾਣ - Carpenter ਤੱਕੜੀ - Scale ਤੇਰਾਂ - 13 ਤੇਰਾ - Your ਤਿੱਖਾ - Sharp ਤਿੰਨ - 3 ਤੱਤਾ - Hot ਤੱਕਣਾ - To look ਤਲਵਾਰ - Sword ਤਾਰੀਖ਼ - Date ਤੋਲਣਾ - To weigh ਤਾੜੀਆਂ - Clapping ਤਾਰ - Wire
Punjabi/Dictionary/ਦ. ਦਫ਼ਤਰ - Office ਦਸਤਾਰ - Turban ਦਿਮਾਗ - Brain ਦਿੱਲੀ - Delhi ਦੁਕਾਨ - Shop ਦੁਨੀਆ - World ਦੋਸਤੀ - Friendship ਦੱਸਣਾ - Tell ਦਸੰਬਰ - December ਦਾਲਾਂ - Pulses ਦਸ਼ਾ - Condition ਦਹੀਂ - Curd ਦੰਦ - Teeth ਦੀਵਾ - Lamp
Punjabi/Dictionary/ਧ. ਧੰਨਵਾਦ - Thank you ਧਾਗਾ - Thread
IB/Group 4/Computer Science/Computer Organisation/The Hardware Layer. Electrical Components. What are Transistors ? A transistor is an electrical component made up of a semiconductor that is capable of regulating the flow of electricity. Transistors are commonly made of silicon or germanium. A transistor has three connections: the emitter, which functions as an output, the base, which controls the current, and the collector, which functions as an input. Electrical flow from the collector to the emitter is regulated using the base. Transistors are used to create logic gates or switches, done using the ability to control the output from the emitter. Different types of transistors can also be made, using different structures, to represent different logic gates. For example, the transistor layout for an AND gate is different to that of an OR gate, with the OR gate having the emitter in a different location. Transistors were originally created to replace vacuum tubes, which served a similar purpose. Originally, transistors were discrete objects which functioned individually, and computers were assembled from individual transistors. Today, transistors can be assembled in large numbers in integrated circuits ("see below"). What are integrated circuits ? Overview An Integrated Circuit (or IC) is the central component of electronic devices or, as some say, “the heart and brains of most circuits” "(SparkFun Learn)." More commonly called a ‘chip’ or ‘microelectronic circuit’, an IC is a semiconducting wafer - usually made out of silicon (a semiconducting material) - on which tiny elements are placed to ensure the functioning of the device (such as resistors, which regulate the electric current; capacitors, which store energy; diodes, which let current flow in only one direction; and transistors, binary switch gates - see the "What are Transistors?" section). The capacity of such microchips varies from thousands to millions of transistors, depending on the needs of the machine. 
Making-process: photolithography ICs are made through the repetition of a process called photolithography, a technique using light rays to transfer complex patterns onto films of suitable material called a photomask (often metal) placed onto the silicon wafer (a semiconducting material), in order to build transistors. Fitting transistors onto an IC The process is then repeated until all desired patterns are transferred onto the sample, which then becomes a transistor. See the Moore’s Law graph (transistor page) to understand how the number of transistors which could be fitted in an IC evolved throughout the last decades. ICs are electronic elements that are the equivalent of our DNA: they constitute the building bricks of the device, ensuring its basic functioning. Their role is thus key in the functioning of any unit, ensuring the simultaneous running of numerous actions (acting as an amplifier, an oscillator, a timer, a counter etc.) "(Whatis.com, 2021)." Even if the first attempts at combining several electronic components have been traced back to the 1920s, the first chip which could be compared to a modern IC was made by Werner Jacobi, a German engineer who filed a patent in 1949 for his invention: a semiconductor amplifying device with five transistors fitted in a three-stage amplifier arrangement "(Wikipedia, 2022)". However, most of the progress regarding ICs took place in the 60s, 70s and 80s, as engineers tried to overcome the Tyranny of Numbers problem (millions of electronic parts all had to be assembled to form one single chip, hundreds of thousands of which were then fitted onto a single computer thanks to the photolithographic process). In 2022, 73 years after Jacobi’s invention, Apple managed to fit 114 billion transistors in its ARM-based dual-die M1 Ultra system, a chip using TSMC’s 5 nanometre (10⁻⁹ metres) semiconductors "(Wikipedia, 2022)". Boolean gates. Use BOOLR to practice using logic gates. 
2.1.11 Define the Boolean operators; AND, OR, NOT, NAND, NOR, and XOR. 2.1.12 Construct truth tables using the above operators. The gates in a computer are sometimes referred to as logic gates because they each perform just one logical function. That is, each gate accepts one or more input values and produces a single output value. Because one is dealing with binary information, each input and output is either codice_1, corresponding to a low-voltage signal, or codice_2, corresponding to a high-voltage signal. The type of gate and the input values determine the output value. What is boolean logic ? Boolean logic is a mathematical tool commonly used for computer logic. In boolean logic, values are only represented by "False" or "True" (codice_1 or codice_2). In the hardware, "False" and "True" are represented by the current being "off" or "on". There are boolean operators (AND, OR, NOT, NAND, NOR, and XOR) which take one or two inputs and produce one output. These manipulations are fundamental to more complex computer algorithms. How are boolean gates built from transistors ? Transistors are composed of three main sections: the collector, base and emitter (labelled "C", "B" and "E" respectively in the diagram to the right). The collector constantly has a stream of electricity available, but it is blocked if the base has no electricity. Following the same logic, if the base has electricity flowing but the collector does not, electricity is still blocked. If both the collector and the base have electricity flowing, then the emitter outputs the electricity. Two transistors in series can create the boolean gate AND, since if the first transistor outputs nothing then the second transistor's collector will have no electricity. Two parallel transistors can create the boolean gate OR, since if either one's emitter outputs electricity then it will reach the output. What is the NOT boolean gate ? A NOT gate takes in one input signal and produces one output signal. 
The table to the right illustrates a NOT gate in three ways: a Boolean expression, a logical diagram symbol, and a truth table. In each representation, the variable codice_5 represents the input signal, which is either codice_1 or codice_2. The variable codice_8 represents the output signal, whose value (codice_1 or codice_2) is determined by the value of codice_5. By definition, if the input value for a NOT gate is codice_1, the output is codice_2; if the input value is codice_2, the output is codice_1. A NOT gate is sometimes referred to as an "inverter" because it inverts the input value. In Boolean expressions, the NOT operation is represented by the codice_16 mark. Sometimes this operation can also be shown as a horizontal bar over the value being negated (e.g. codice_17). In the respective representation in the table to the right, a value is assigned to codice_8 by applying a NOT operation to the input value codice_5. In such an "assignment statement", the variable on the left of the equal sign takes on the value of the expression on the right-hand side. The logical diagram symbol for a NOT gate is a triangle with a small circle (called an "inversion bubble") on the end. The input and output are shown as lines flowing in and out of the gate. Sometimes these lines are labeled, though not always. The input signal codice_5 is usually placed on the left of the diagram and the output codice_8 on the right, representing the result of the NOT gate. The truth table shows all possible input values for a NOT gate and their respective output values. Because there is only one input signal to a NOT gate, and that signal can only be codice_1 or codice_2, there are only two possibilities for the column labelled input signal A in the truth table. The column labelled codice_8 shows the output of the gate, which is the inverse of the input. 
Note that out of the three representations, only the truth table actually defines the behavior of the gate for all situations or possibilities. These three notations are simply different ways of representing the NOT gate. For example, the result of the Boolean expression codice_25 is always codice_2, and the result of the Boolean expression codice_27 is always codice_1. This behavior is consistent with the values shown in all three notations. What is the AND boolean gate ? Unlike the NOT gate, which accepts one input signal, an AND gate accepts two input signals. The values of both input signals determine what the output signal will be. If the two input values for an AND gate are both codice_2, the output is codice_2; otherwise, the output is codice_1. The AND operation in Boolean algebra is expressed using a single dot (codice_32) or, in some cases, an asterisk (codice_33). Often the operator itself is assumed; for example, codice_34 is often written as codice_35. Because there are two inputs and two possible values for each input, four possible combinations of codice_2 and codice_1 can be provided as input to an AND gate. Therefore, four situations can occur when the AND operator is used in a Boolean expression. Likewise, the truth table showing the behaviour of the AND gate has four rows, showing all four possible input combinations. What is the OR boolean gate ? Like the AND gate, there are two inputs to an OR gate. If the two input values are both 0, the output value is 0; otherwise, the output is 1. The Boolean algebra OR operation is expressed using a plus sign (+). The OR gate has two inputs, each of which can be one of two values, so as with an AND gate there are four input combinations and therefore four rows in the truth table. What are the NAND and NOR boolean gates ? The NAND and the NOR gates each accept two input values. The NAND and NOR gates are essentially the opposite of the AND and OR gates, respectively. 
That is, the output of a NAND gate is the same as if one took the output of an AND gate and put it through an inverter (a NOT gate). There are typically no specific symbols used to express the NAND and NOR operations in Boolean algebra. Instead, one relies on their definitions to express the concepts. That is, the Boolean algebra expression for NAND is the negation of the AND operation. Likewise, the Boolean algebra expression for NOR is the negation of an OR operation. The logic diagram symbols for the NAND and NOR are the same as those for the AND and OR except that the NAND and NOR symbols have the inversion bubble (to indicate the negation). Compare the output columns for the truth tables for the AND and NAND. They are the opposite, row by row. The same is true for the OR and NOR gates. <br> <br> What is the XOR boolean gate? The XOR boolean gate differs in two main ways from the regular OR gate, often referred to as an inclusive OR gate. First off, the XOR gate, or exclusive OR, functions exactly the same as an OR gate with the exception that if both inputs are true (1), the resulting output is false. A XOR gate produces true (1) if exactly one of its inputs is true (1), and false (0) if the two inputs are the same. Note the difference between the XOR gate and the OR gate; they differ only in one input situation. When both input signals are true (1), the OR gate produces true (1) and the XOR produces false (0). The Boolean algebra symbol ⊕ is sometimes used to express the XOR operation. However, the XOR operation can also be expressed using the other operators, such as A̅B + AB̅. The other main difference between an inclusive OR gate and an exclusive OR gate is the symbol used to represent the XOR gate. The XOR gate symbol has the same main shape as the inclusive OR gate, but with an additional curve on its left, following the curved left edge of the inclusive OR gate symbol. 
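The two-input gates described in this section can be modelled in a few lines of Python. This is an illustrative sketch, not part of the original text; the dictionary of gate functions and the name `truth_table` are chosen here. (NOT is unary, so it is left out of the two-input table.)

```python
# Each two-input gate maps a pair of bits to one output bit.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),  # negated AND
    "NOR":  lambda a, b: 1 - (a | b),  # negated OR
    "XOR":  lambda a, b: a ^ b,
}

def truth_table(gate):
    """Return the four outputs for the input rows 00, 01, 10, 11."""
    return [gates[gate](a, b) for a in (0, 1) for b in (0, 1)]

for name in gates:
    print(name, truth_table(name))
```

Comparing the printed rows confirms the text: NAND is AND inverted row by row, NOR is OR inverted, and XOR differs from OR only on the input row 11.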
<br> <br> How to construct a boolean truth table from a boolean expression ? Six specific types of gates have now been covered. It may seem a difficult task to keep them straight and remember how they all work. Well, that probably depends on how one thinks about them. Do not try to memorise truth tables. The processing of these gates can be described briefly in general terms; if one thinks of them that way, one can produce the appropriate truth table any time it is needed. Some of these descriptions are in terms of which input values cause the gate to produce a 1 as an output; in any other case, the gate produces a 0. A truth table is a visual representation of the various inputs of a circuit and the outputs that are associated with each combination of inputs. To create a truth table, draw a table with one column for each input and one for the output. The image to the right represents a truth table for Q = A NOR B. In the first row, write the names of your inputs and output in order, as seen in the image. Then, add every possible combination of inputs, each on a new row, and determine the output for each row depending on the inputs for the selected boolean gate. The number of input combinations should correspond to 2 raised to the number of inputs (2 inputs = 4 combinations, 3 inputs = 8 combinations). For more complex boolean expressions, it is best to add extra columns for intermediate operations. For instance, for the expression Q = A AND (B OR C), in addition to the columns for the inputs A, B and C, you can add a new column for "B OR C". This is not mandatory but helps visualise the operations and prevent mistakes. Exercises - Boolean gates. 1. Construct a truth table for the following expression: (A AND B) XOR C Logic components. A combinational circuit consists of logic gates whose outputs are determined at any time from the present combination of inputs, irrespective of previous inputs. 
It consists of input variables, logic gates and output variables. The logic gates accept signals from the inputs and generate signals at the outputs. Some examples of combinational circuits are the binary adder, decimal adder, decoder, multiplexer, half adder, half subtractor etc. What is a half adder ? A logic circuit for the addition of two one-bit numbers is called a half adder. Two bits are outputted: the sum of the addition and the carry. Since 0 + 0 = 0 and 1 + 0 = 0 + 1 = 1, the sum output is identical to that of a XOR gate, while an AND gate provides the carry, which only becomes necessary to add 1 and 1 together. The carry is therefore only 1 for the addition of 1 + 1, which equals 10 (binary for 2). The addition process is reproduced in this truth table. What is a full adder ? A full adder is a circuit made up of logic gates that is similar to a half adder, but with an extra input to account for the carry variable. The full adder can add up to three one-bit variables, and outputs a sum and carry, the same as a half adder. A full adder is made up of two half adders and an OR gate that is used to handle the carry. What is an 8 bit ripple carry adder ? Now that you are familiar with half adders and full adders, we can build a complete circuit that adds two 8 bit integers. This circuit is called the 8 bit ripple carry adder. The structure is simple. The first circuit is a half adder and those that follow are full adders. The carry from the first circuit is given to the next full adder, which then gives its carry to the following full adder, and so on. Each component receives one bit from each of the two numbers at its respective position, and each component also outputs its result bit. When combined together, the result bits of the individual components give the final result. What is an ALU ? 
The Arithmetic Logic Unit is one of the main components of the CPU (Central Processing Unit), responsible for the execution of arithmetic operations on binary numbers provided from different storage places such as the RAM or the hard disk. The ALU is composed of 2 essential components: the arithmetic unit and the logic unit. The Arithmetic/Logic Unit (ALU) is capable of performing basic arithmetic operations such as adding, subtracting, multiplying, and dividing two numbers. This unit is also capable of performing logical operations such as codice_38, codice_39, and codice_40. The ALU operates on words, a natural unit of data associated with a particular computer design. Historically, the word length of a computer has been the number of bits processed at once by the ALU. However, the current Intel line of processors has blurred this definition by defining the word length to be 16 bits. The processor can work on words (of 16 bits), double words (32 bits), and quadwords (64 bits). In the rest of this wiki, "word" will be used in the historical sense mentioned above. Most modern ALUs have a small number of special storage units called registers. These registers each contain one word and are used to store information that is needed again immediately. For example, in the calculation of 1 × (2 + 3), 2 is first added to 3 and the result is multiplied by 1. Rather than storing the result of adding 2 and 3 in memory and then retrieving it to multiply it by 1, the result is left in a register and the contents of the register are multiplied by 1. Access to registers is much faster than access to memory locations. Metaphorically, the ALU can be seen both as a calculator and as a human brain (where all the logic is handled). What is a carry look ahead adder ? Ripple carry adders work fine, but are slow because each full adder in the "ladder" has to wait for the previous one's carry in order to perform its calculation. 
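The half adder, full adder, and ripple-carry chain described above can be simulated in a few lines of Python. This is a sketch for illustration; the function names are chosen here, and bit lists are written least-significant bit first to make the carry propagation easy to follow.

```python
def half_adder(a, b):
    """Sum is the XOR of the inputs; carry is the AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half adders plus an OR gate handling the carry, as described above."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first).

    Each stage passes ("ripples") its carry to the next, which is exactly
    why ripple carry adders are slow.
    """
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 (011) + 5 (101), written LSB first:
print(ripple_add([1, 1, 0], [1, 0, 1]))  # ([0, 0, 0], 1), i.e. 1000 = 8
```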
A carry look ahead adder can perform much faster calculations by circumventing this issue, calculating the carries ahead of time with a separate module. What is a shifter circuit ? First off, a shifter circuit is a type of logic circuit which shifts its input by a specified amount in a specified direction. Shifters are mostly used to move data from one area to another by shifting it a certain number of bits. A shifter circuit, such as the one shown in the image below, is made up of OR gates and AND gates. A binary shifter circuit takes 2 inputs and produces one output. One of the inputs (A) is the input that will be shifted and the other input (D) is the shift control. The D input shifts the A input by a certain amount. When D = 0, the A input is shifted 1 bit to the left, and the opposite happens when D = 1: the A input is shifted 1 bit to the right. Shifting a number by 1, 2, or 3 bits corresponds to multiplying or dividing it by 2, 4, or 8 respectively. "So what would be needed to shift an A input?" We would need another input, the D input, which shifts the A input a number of bits in a specified direction. What is a comparator circuit ? A comparator circuit is an electronic circuit that compares 2 inputs and produces an output. The output value of this circuit indicates which of the inputs is greater or less, using an op-amp. An op-amp can amplify the voltage difference between 2 inputs. There are 2 types of comparator circuits: inverting and non-inverting. An inverting comparator circuit What is a multiplexer ? Multiplexers are a combinational circuit that can select between many inputs based on one or many selection inputs. In the image to the left we can see that the possible outputs (labelled 'A' and 'B') are changed based on the selector input (labelled 'sel'). The truth table for this multiplexer would be: In the truth table above we can observe that, based on the value of 'sel', the output of the entire circuit changed from A to B. 
If we want to have even more possible inputs, we must also change the number of selector inputs. For N input lines, we need log2(N) selector inputs; following the same logic, 2^n input lines can be handled by n selector lines. The truth table for this multiplexer would be: Memory. Memory is a collection of cells, each with a unique physical address. We use the generic word cell here rather than byte, because the number of bits in each addressable location, called the memory's addressability, varies from one machine to another. Today, most computers are byte addressable. To fully illustrate the concept of memory, take a computer with 4 GB of RAM as an example. The memory consists of 4 × 2^30 bytes, which means each byte of the computer is uniquely addressable, and the addressability of the machine is 8 bits. The cells in memory are numbered consecutively beginning with 0. For example, if the addressability is 8, and there are 256 cells of memory, the cells would be addressed as follows: What are the contents of a given address? The answer is the bit pattern stored at that location. However, what the contents actually represent, be it instructions, values, signs, etc., will be discussed later. What is important, however, is to understand that the bit pattern is information and can be interpreted. When referring to the bits in a byte or word, the bits are numbered from right to left beginning with zero. What is a gated latch? A gated latch is a circuit which has 3 inputs. Like a regular latch circuit, a gated latch uses set and reset inputs; it differs from a regular latch by its use of an enabler switch. The set and reset switches, commonly known as the S-R latch, are used to either Set the output to 1 or Reset the output to 0. The enabler switch turns on the previously mentioned switches, set and reset. What would happen if this enabler switch was turned off? 
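The addressability arithmetic above can be checked directly. This is a small sketch of the numbers from the text, nothing more:

```python
import math

# Sketch of the addressability arithmetic described above: with 256
# cells, 8 address bits suffice to give every cell a unique address.
cells = 256
address_bits = int(math.log2(cells))
print(address_bits)             # 8
print(list(range(cells))[:4])   # cells numbered consecutively from 0

# The 4 GB RAM example: 4 * 2**30 byte-addressable cells.
ram_cells = 4 * 2**30
print(int(math.log2(ram_cells)))  # 32 address bits are needed
```

The same log2 relationship is the one quoted for multiplexer selector lines: n selector (or address) bits distinguish 2^n lines (or cells).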
If the enabler switch is off, then the set and reset switches are also turned off. When the enabler switch is on, the two other switches, the Set switch and Reset switch, are able to take effect if need be. What are gated latches primarily used for? Gated latches are primarily used for digital storage and are essential for every electronic device. When memory stored in a gated latch does not need to be used by the computer, the gated latch can be turned off. What is a Register ? A register is a small block of memory that stores a single number. Larger memory structures, such as RAM, are made up of multiple registers. Registers are also used in CPUs, where they act as small but fast ways of temporarily storing information (these are called processor registers). What is Read Only Memory (ROM) ? Read Only Memory (ROM) is a memory device whose contents cannot be changed after creation; to change them, the device usually has to be hardwired. As the name implies, one can only read data from ROM. Typically, ROM is used for software that rarely needs changing. However, the execution speed of ROM is slower than that of RAM. To the right is an example of what ROM might look like on a motherboard. What is primary memory ? Primary memory is defined as memory that is accessible and modifiable by the CPU directly. This comes with the side effect of being "volatile", meaning its contents are deleted when power is switched off; however, it allows much faster reading and writing of memory by the CPU. Primary memory is made up of a computer's registers, cache, and RAM. Primary memory is used for operations that need fast memory access, as it is faster than writing everything to secondary memory. It is also used for storing data that does not need to be saved (for example, when you copy something it is stored in primary memory), and for holding instructions for the CPU to execute. What is secondary memory ? 
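The gated latch behaviour above can be simulated in a few lines. This is a sketch invented for illustration (a software model, not a circuit description): set and reset only take effect while the enabler input is on.

```python
# Simple simulation of the gated S-R latch described above: the set and
# reset inputs only take effect while the enable input is on.
class GatedLatch:
    def __init__(self):
        self.q = 0  # the stored bit

    def step(self, s, r, enable):
        if enable:          # enable gates the S and R inputs
            if s:
                self.q = 1  # Set drives the stored bit to 1
            elif r:
                self.q = 0  # Reset drives the stored bit to 0
        return self.q       # with enable off, the latch holds its value

latch = GatedLatch()
print(latch.step(s=1, r=0, enable=1))  # 1: set while enabled
print(latch.step(s=0, r=1, enable=0))  # 1: reset ignored, enable is off
print(latch.step(s=0, r=1, enable=1))  # 0: reset while enabled
```

The second call is the "What would happen if this enabler switch was turned off?" case: the stored bit is simply held.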
Secondary memory is non-volatile, meaning that data and programs can be accessed or retrieved even when the power is turned off. Secondary memory consists of all permanent or persistent storage devices, such as read-only memory (ROM). In computing operations, secondary memory is accessed only through primary memory before being transported to the processor, which is why it runs slower than RAM. Generally, anything that is long term goes into secondary memory. This is mainly the boot-up part of a computer system that should stay consistent, like the files needed for Windows to run. It can be thought of as the file storage system of the computer. RAM can easily access data from secondary memory when needed. What are secondary storage devices ? 2.1.5 Identify the need for persistent storage. An input device is the means by which data and programs are entered into the computer and stored into memory. An output device is the means by which results are sent back to the user. Because most of main memory is volatile and limited, it is essential that there be other types of storage devices where programs and data can be stored when they are no longer being processed or when the machine is not turned on. These other types of storage devices (other than main memory) are called "secondary" or "auxiliary" storage devices. Because data must be read from them and written to them, each storage device is also an input and an output device. Examples of secondary storage devices include:
IB/Group 4/Computer Science/Computer Organisation/The Application Layer. What are Web browsers ? Examples of Web browsers: Google Chrome, Firefox, Safari, Opera, Internet Explorer. What is it? A web browser is a general application software that allows access to websites. It is a software application that allows users to access and view web pages and other content on the internet. It retrieves information from web servers and displays it on the user's device in a format that can be easily read and navigated. Web browsers provide users with a graphical user interface (GUI) that allows them to interact with web pages through menus, buttons, and other controls. They also support features such as bookmarks, history, and tabbed browsing, which allow users to easily navigate and manage their online activities. How does it work? The user, also called the 'client', enters the URL of a website in the browser. The browser then uses a worldwide database called the DNS (Domain Name System) to match the website URL to the corresponding IP address. The IP address specifies where the website's data is stored. The server is computer hardware that stores data and provides functionality for other programs called 'clients'. The browser makes a request to the server at that IP address. The server then retrieves the HTML (the source code) of the website, which is either stored on the server or generated by it. This source code is received and read by the browser. Here's a diagram of how it works: Thus the purpose of a web browser is to bring information from a server to the client (through the request and response), allowing the client to view the information. What are Database Management systems ? The Database Management System (DBMS) is responsible for organising data in a structured manner, along with security and access controls of that database (create, protect, read, update and delete data). 
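The URL-to-DNS-to-server flow described above can be sketched as a toy program. Everything here is simulated and invented for illustration: the dictionaries stand in for the real DNS and a real web server.

```python
# A toy sketch of the browsing flow described above. The DNS and server
# contents are made-up stand-ins, not real network lookups.
DNS = {"example.com": "93.184.216.34"}             # URL -> IP address
SERVERS = {"93.184.216.34": "<html>Hello</html>"}  # IP -> page HTML

def browse(url):
    ip = DNS[url]        # 1. the browser asks DNS for the IP behind the URL
    html = SERVERS[ip]   # 2. it requests the page from the server at that IP
    return html          # 3. it receives and renders the returned source code

print(browse("example.com"))  # <html>Hello</html>
```

A real browser performs the same three steps, but over the network, with the DNS lookup and HTTP request handled by the operating system's networking stack.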
While a spreadsheet allows us to store data, a Database Management System enables functions such as "filtering by type", which facilitates the management of large amounts of data for big companies. Within a database, we can have users with different rights. MySQL and MongoDB are two examples of databases. What is email software ? Email software is a program that has features and capability for sending and receiving electronic mail. In most cases, these programs are email editors with various format, layout, and message capability tools. It hosts, optimizes, or secures digital communications for personal or business use. Organizations, small and large, deploy email software because digital communication is essential to conducting modern business. There are three types of email software. The first is marketing, where emails are sent in real time for customized communications such as advertisements, brands, or fundraisers. The second is security: a program built to prevent, detect, and respond to potential threats. The last is optimization, which improves performance and adapts to evolving email marketing solutions. An example of email software is Gmail, where users are allowed to organize, send, and receive emails. It has layouts for different email types and has tools to format emails. What is word processing software? Word processing software refers to a type of application which manages the creation, storage and printing of text files. It allows the user to write and modify text files, with all usable tools displayed in an intuitive GUI. It is one of the most common types of applications, present in virtually every modern computer. Common examples of word processing software include Microsoft Word, Google Docs, and Apple Pages. What is Computer Aided Design (CAD) software? Computer Aided Design (CAD) software comprises programs that can create digital designs on a computer. 
It is often used by architects, product designers, engineers, manufacturers, etc. It allows users to create a design in 2D or 3D, visualize its construction, modify and optimize design processes, and view the final product. CAD software is also used to design things better than would be possible without it, as CAD allows users to create designs with large amounts of detail. A non-comprehensive list of different CAD software: An example of a large-scale use of CAD design is within the car manufacturer Tesla. Tesla uses CAD to design their electric vehicles and their parts. They use these tools in order to create a complete 3D model of the car (interior and exterior). These models are mainly used to ensure that all the parts fit well together and function as predicted. They can also use these models to simulate the car's performance in different situations, testing acceleration, charging time, wind resistance, energy consumption, etc. This is also used to tweak the car's aesthetics and appearance. Note: CAD software is not to be confused with graphics processing software, although they have many similarities and many types of CAD software are also graphics processing software. What is spreadsheet software? Spreadsheet software is general application software that allows data arranged in rows and columns to be used in calculations. Spreadsheets have many uses, ranging from storing data and creating budgets to displaying graphs and charts. Spreadsheet software is widely used because it allows users to manipulate data, organize it, and arrange it in any way they need. It is usually user-friendly and relatively easy to use. Most spreadsheets can be shared with others and allow multiple people to collaborate on the same page. This is also why spreadsheet software is often used by businesses and organizations. Some examples of popular spreadsheet software are: But why is spreadsheet software important? 
They allow users to easily and efficiently organize, analyze, visualize, and manipulate data. What are the main GUI elements of a software ? In order to maintain clear interaction and communication with the user, software generally follows the WIMP (window, icon, menu, pointer) paradigm, using the following GUI elements of the user interface: - Windows: a resizable area of the screen that displays information and can be dragged around the desktop with a mouse cursor. Different applications' windows can appear to overlap, and closing these windows generally also halts its associated program. - Menus: a (usually) collapsible display of different options that allow the user to perform different tasks. - Icons: a small image that represents different objects that have tasks associated with them (a floppy disk for saving, for example). - Controls (or widgets): display a collection of related items or actions related to a certain concept (see image), such as Insert for inserting different types of media into a document. - Pointer: echoes the user's movements through a pointing device, such as a mouse or trackpad, and allows the user to directly interact with the program by clicking and dragging GUI elements.
IB/Group 4/Computer Science/Computer Organisation/The Instruction Cycle Layer. What is a CPU ? The CPU is the element in a computer that is responsible for processing and executing operations and instructions given by applications. This is done through registers, RAM, and ALUs. The CPU functions like the brain of a computer and is present in every computer. CPUs are made from large numbers of transistors placed on chips that collectively perform operations to process instructions. The rate at which a CPU can process information is known as its clock speed, which is measured in Hertz. Modern CPUs run at billions of Hertz, meaning that they can perform billions of operations per second. CPUs can also have multiple "cores", which are the main processing elements of the CPU. The number of cores that a CPU has, along with its clock speed, determines its operating speed. There is a wide range of core counts that CPUs can have, with some CPUs having one core, and some having up to 256. What is the fetch decode execute cycle ? Before looking at "how" a computer does what it does, let us look at "what" it can do. The definition of a computer outlines its capabilities; a computer is an electronic device that can store, retrieve, and process data. Therefore, all of the instructions that we give to the computer relate to storing, retrieving, and processing data. The underlying principle of the von Neumann machine is that data and instructions are stored in memory and treated alike. This means that instructions and data are both addressable. Instructions are stored in contiguous memory locations; data to be manipulated are stored together in a different part of memory. To start the Fetch-Execute Cycle, the program is loaded into main memory by the operating system and the address of the first instruction is placed in the program counter. The process cycle includes four steps: Fetch the Next Instruction. 
The PC holds the address of the next instruction to be executed. The control unit places this address into the memory address register (MAR) and sends it to main memory over the address bus; the instruction is returned to the memory buffer register (MBR) via the data bus. The MBR is a two-way register that temporarily holds data fetched from memory (cache or RAM); a copy of its contents is placed in the IR. Now, the IR contains the instruction to be executed. Before going to the next step in the cycle, the PC must be updated to hold the address of the next instruction to be executed when the current instruction has been completed. Thereafter the instruction register holds the instruction to be handled by the CU. The CU checks the status of the instruction and then allows execution. Because the instructions are stored contiguously in memory, adding the number of bytes in the current instruction to the PC should put the address of the next instruction into the PC. Thus the control unit increments the PC. (It is possible that the PC may be changed later by the instruction being executed.) In the case of an instruction that must get additional data from memory, the ALU sends an address to the memory bus, and the memory responds by returning the value at that location. In some computers, data retrieved from memory may immediately participate in an arithmetic or logical operation. Other computers simply save the data returned by the memory into a register for processing by a subsequent instruction. At the end of execution, any result from the instruction may be saved either in registers or in memory. Decode the Instruction. To execute the instruction in the instruction register, the control unit has to determine what instruction it is. 
It might be an instruction to access data from an input device, to send data to an output device, or to perform some operation on a data value. At this phase, the instruction is decoded into control signals. That is, the logic of the circuitry in the CPU determines which operation is to be executed. This step shows why a computer can execute only instructions that are expressed in its own machine language. The instructions themselves are literally built into the circuits. For example, let's take an imaginary RAM with registers of 8 bits, shown in the table below. Of the 8 bits, the first 4 bits are what we use to determine the instruction. For the instruction 01010010, 0101 is our opcode. In our case, 0101 corresponds to "load", meaning we are loading the value from the address 00000010 into a register. The example is continued in "Execute the instruction". Get Data If Needed. In most programs, the instruction to be executed may potentially require additional memory accesses to complete its task. For example, if the instruction says to add the contents of a memory location to a register, the control unit must get the contents of the memory location. Execute the Instruction. Once an instruction has been decoded and any operands (data) fetched, the control unit is ready to execute the instruction. Execution involves sending signals to the arithmetic/logic unit to carry out the processing. In the case of adding a number to a register, the operand is sent to the ALU and added to the contents of the register. When the execution is complete, the cycle begins again. If the last instruction was to add a value to the contents of a register, the next instruction probably says to store the result in a place in memory. However, the next instruction might be a control instruction—that is, an instruction that asks a question about the result of the last instruction and perhaps changes the contents of the program counter. 
Hardware has changed dramatically in the last half-century, yet the von Neumann model remains the basis of most computers today. As Alan Perlis, a well-known computer scientist, once said; Continuing with our example from above, we left off where we read 0101's instruction from the instruction table. After decoding, the CPU executes the instruction. In our case, we are loading the value at address 00000010 (2 in denary) into a register. So now our register holds the value 2 in denary to use in later operations. When this instruction cycle is over, all the circuits are reset and a new instruction cycle begins when the value in the instruction address register is increased by 1. What is a control unit ? The Control Unit is the organising force in the computer, for it is in charge of the Fetch-Execute Cycle. There are two special registers in the control unit. The Instruction Register (IR) contains the instruction that is being executed, and the Program Counter (PC) contains the address of the next instruction to be executed. Because the ALU and the control unit work so closely together, they are often thought of as one unit called the Central Processing Unit (CPU). What is Random Access Memory (RAM) ? Random Access Memory (RAM) is a storage device that holds data and instructions for the CPU. RAM is used as temporary storage for the CPU, and the data that the RAM holds will only be stored while the program needing it is running, or while the computer is running. This means that RAM is volatile (the data will be erased if it is powered off). The speed of RAM is characterised by its storage amount, clock speed, and latency. The clock speed is how many operations it can handle per second (in Hertz), with modern RAM being able to handle millions of operations per second. The latency is how long it takes for data written to RAM to become available for use. 
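The load example worked through above can be replayed as a toy fetch-decode-execute loop. This is a sketch invented for illustration; only the opcode 0101 = "load" comes from the text's example, the rest of the encoding is assumed.

```python
# A toy sketch of the fetch-decode-execute cycle from the worked example
# above: 8-bit instructions whose first 4 bits are the opcode and whose
# last 4 bits are a memory address. Only 0101 = "load" is from the text.
RAM = {0b0010: 2}            # address 00000010 holds the value 2
program = [0b0101_0010]      # one instruction: load from address 0010
register = None
pc = 0                       # program counter

while pc < len(program):
    instruction = program[pc]   # fetch: get the instruction the PC points at
    pc += 1                     # update the PC for the next cycle
    opcode = instruction >> 4   # decode: the top 4 bits select the operation
    address = instruction & 0b1111
    if opcode == 0b0101:        # execute: 0101 means "load"
        register = RAM[address]

print(register)  # 2
```

After one cycle the register holds 2, exactly the state the text describes before the next instruction is fetched.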
The storage amount is how much data the RAM can hold, with modern RAM sticks being able to hold gigabytes of data (billions of bytes). It also supports both reading and writing, meaning memory can be both read and modified. What are instructions for the CPU ? All CPUs come with an instruction set which associates numbers, called opcodes, with different instructions. Instructions are read by the CPU from the RAM, where numbers are stored which have a certain number of bits reserved for the opcode and the rest dedicated to information on memory location. Depending on how expansive a CPU's instruction set is, instructions can range from operations such as addition and division to more complex logic tasks. For example, the 8-bit instruction 00011100, which has opcode 0001, could mean, in the following case, to add the numbers in the register and at memory location 1100. The number 00110011 (opcode 0011) would then store the result in memory location 0011. What is the MDR ? The MDR, or Memory Data Register, temporarily stores data while it is being processed by the CPU, acting as an intermediary between the RAM and the CPU. It also temporarily holds data that is being placed in memory. It can also be known as the MBR, or Memory Buffer Register. What is the MAR ? First off, MAR stands for Memory Address Register. It is one of the CPU's registers, alongside the program counter, the current instruction register, the accumulator, and the memory data register. It is essentially a storage unit that holds the memory address (where something is in memory) of data that needs to be accessed. It also holds the memory address of data that will be sent to that address via the bus (next chapter). The MAR works hand-in-hand with the MDR, the Memory Data Register, which holds the actual data held at the memory address from the MAR. What is a data bus and an address bus ? 
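The two-instruction add-then-store example above can be sketched directly. The opcode meanings (0001 = add, 0011 = store) are the text's own illustration; the memory contents are invented for the example.

```python
# Sketch of the instruction encoding described above: 4 opcode bits plus
# 4 location bits. 0001 = add and 0011 = store, as in the text's example;
# the memory contents and starting register value are made up.
RAM = {0b1100: 7, 0b0011: 0}   # memory locations 1100 and 0011
register = 5

def run(instruction):
    global register
    opcode = instruction >> 4        # first 4 bits: the opcode
    location = instruction & 0b1111  # last 4 bits: the memory location
    if opcode == 0b0001:             # add memory contents to the register
        register = register + RAM[location]
    elif opcode == 0b0011:           # store the register result in memory
        RAM[location] = register

run(0b0001_1100)    # add RAM[1100] (7) to the register (5)
run(0b0011_0011)    # store the result at location 0011
print(RAM[0b0011])  # 12
```

Splitting each instruction into opcode and location bits is exactly the reserved-bits scheme the paragraph describes.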
In the diagram above all the parts are connected to one another by a collection of wires called a bus, through which data travels in the computer. Each bus carries three kinds of information: address, data, and control.
- An address is used to select the memory location or device to which data will go, or from which it will be taken.
- Data then flows over the bus between CPU, memory, and I/O devices.
- The control information is used to manage the flow of addresses and data. For example, a control signal will typically be used to determine the direction in which the data is flowing, either to or from the CPU.
The bus width is the number of bits that it can transfer simultaneously. The wider the bus, the more address or data bits it can move at once. A widespread example of a data bus, used billions of times every day, is the 'Universal Serial Bus', aka USB. This bus, used to transfer data, charge your phone and power your electric pencil sharpener, has been a worldwide standard since 1996. What is cache memory ? Because memory accesses are very time consuming relative to the speed of the processor, many architectures provide cache memory. Cache memory is a small amount of fast-access memory into which copies of frequently used data are stored. Before a main memory access is made, the CPU checks whether the data is stored in the cache memory. Pipelining is another technique used to speed up the Fetch-Execute Cycle. This technique splits an instruction into smaller steps that can be overlapped. There are three types of cache memory. L1 cache is the smallest but the fastest, and holds both data and instructions. L2 cache is bigger but slower, and holds data only. L3 cache has the biggest capacity but the slowest speed, and also holds data only. In a personal computer, the components in a von Neumann machine physically reside on a printed circuit board called the motherboard. 
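The check-the-cache-first behaviour described above can be sketched as follows. The names and the data are invented; the point is only the order of the lookups.

```python
# Sketch of the cache lookup described above: before going to (slow)
# main memory, the CPU first checks the (small, fast) cache.
cache = {}                                              # small and fast
main_memory = {addr: addr * 10 for addr in range(256)}  # large and slow

def read(addr):
    if addr in cache:           # cache hit: no main-memory access needed
        return cache[addr]
    value = main_memory[addr]   # cache miss: slow main-memory access...
    cache[addr] = value         # ...then keep a copy for next time
    return value

read(42)         # first access: a miss, fetched from main memory
print(read(42))  # 420 -- second access: served from the cache
```

A real cache also has a bounded size and an eviction policy; this sketch omits both to keep the hit/miss logic visible.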
The motherboard also has connections for attaching other devices to the bus, such as a mouse, a keyboard, or additional storage devices. So what does it mean to say that a machine is an "n"-bit processor? The variable "n" usually refers to the number of bits in the CPU's general registers: two "n"-bit numbers can be added with a single instruction. It can also refer to the width of the address bus, which determines the size of the addressable memory—but not always. In addition, "n" can refer to the width of the data bus—but not always. What is the Von Neumann Architecture ? A major defining point in the history of computing was the realisation in 1944–1945 that data and instructions to manipulate data were logically the same and could be stored in the same place. The computer design built upon this principle, which became known as the "von Neumann Architecture", is still the basis for computers today. Although the name honours John von Neumann, a brilliant mathematician who worked on the construction of the atomic bomb, the idea probably originated with J. Presper Eckert and John Mauchly, two other early pioneers who worked on the ENIAC at the Moore School at the University of Pennsylvania during the same time period. Another major characteristic of the von Neumann architecture is that the units that process information are separate from the units that store information. This characteristic leads to the following components of the von Neumann architecture. Memory, Process, and CPU Management. Recall that an executing program resides in main memory and its instructions are processed one after another in the fetch-decode-execute cycle. Multiprogramming is the technique of keeping multiple programs in main memory at the same time; these programs compete for access to the CPU so that they can do their work. All modern operating systems employ multiprogramming to one degree or another. 
An operating system must therefore perform memory management to keep track of which programs are in memory and where in memory they reside. Another key operating system concept is the idea of a process, which can be defined as a program in execution. A program is a static set of instructions. A process is a dynamic entity that represents the program while it is being executed. Through multiprogramming, a computer system might have many active processes at once. The operating system must manage these processes carefully. At any point in time a specific instruction may be the next to be executed. Intermediate values have been calculated. A process might be interrupted during its execution, so the operating system performs process management to carefully track the progress of a process and all of its intermediate states. Related to the ideas of memory management and process management is the need for CPU scheduling, which determines which process in memory is executed by the CPU at any given point. One must remember, however, that the OS itself is just a program that must be executed. Operating system processes must be managed and maintained in the main memory along with other system software and application programs. The OS executes on the same CPU as other programs, and it must take its turn among them. The following sections (Single Contiguous Memory Management, Partition Memory Management, Paged Memory Management) are for interest purposes only. The Computer Science Guide clearly notes under 2.1.6 that "technical details are not needed. For example, memory management should be described but how this is handled in a multitasking environment is not expected." Therefore, these sections serve the purpose of illustrating memory management techniques and can be summarised rather than utilised in an exam. Single Contiguous Memory Management. Has only the operating system and one other program in memory at one time. This is the most basic type of memory management. 
Partition Memory Management. The memory is broken up into different partitions, so the operating system and any number of programs can be resident at the same time in these different partitions. Each partition has a base and a bounds register. Key terms:
- Base register
- Bounds register
- Partition selection: first fit, best fit, worst fit
Paged Memory Management. A technique in which processes are divided into fixed-size pages and stored in memory frames when loaded. Key terms:
- Frame
- Page
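The three partition-selection policies named above can be sketched over a list of free-partition sizes. The sizes are invented for illustration; each policy picks a partition at least as large as the request.

```python
# Sketch of the three partition-selection policies named above, applied
# to a list of free-partition sizes (the sizes are invented examples).
def first_fit(partitions, size):
    """Pick the first partition large enough for the request."""
    return next((p for p in partitions if p >= size), None)

def best_fit(partitions, size):
    """Pick the smallest partition that still fits (tightest fit)."""
    fits = [p for p in partitions if p >= size]
    return min(fits, default=None)

def worst_fit(partitions, size):
    """Pick the largest partition, leaving the biggest leftover hole."""
    fits = [p for p in partitions if p >= size]
    return max(fits, default=None)

free = [300, 600, 350, 200]      # free partition sizes, e.g. in KB
print(first_fit(free, 320))      # 600: first one large enough
print(best_fit(free, 320))       # 350: tightest fit
print(worst_fit(free, 320))      # 600: loosest fit
```

The trade-off is visible in the example: best fit wastes the least space in the chosen partition, while worst fit keeps the leftover hole large enough to be reusable.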
IB/Group 4/Computer Science/Computer Organisation/The Operating System Layer. What is an operating system (OS) and what are its main tasks ? The OS, or operating system, is a program that, after being loaded, manages the computer's memory, processes, software, and hardware. Essentially, the OS allows the user to communicate with the computer without knowing the computer's language. There are different parts of the OS, and the central part is called the kernel. It has control over everything in the OS. It works as a link between the hardware in the computer and the processes that run on it, and manages the correspondence between the two. It also coordinates access to the CPU and the memory so that the computer is able to run multiple operations at once, allocating memory and resources to multiple programs so that they can all run at the same time. For example, when the computer needs to print something, the CPU is not what takes care of that. The OS sends the documents to be printed to the "print queue" and "talks" to the printer for the CPU. The OS has many main tasks, such as: Describe the main functions of an operating system. As early as the end of the first generation of software development, there was a split between those programmers who wrote tools to help other programmers and those who used those tools to solve problems. Modern software can be divided into two categories, system software and application software, reflecting this separation of goals. Application software is written to address specific needs — to solve problems in the real world. Word processing programs, games, inventory control systems, automobile diagnostic programs, and missile guidance programs are all application software. System software manages a computer system at a more fundamental level. It provides the tools and an environment in which application software can be created and run. 
System software often interacts directly with the hardware and provides more functionality than the hardware does itself. The operating system of a computer is the core of its system software. An operating system manages computer resources, such as memory and input/output devices, and provides an interface through which a human can interact with the computer. Other system software supports specific application goals, such as a library of graphics software that renders images on a display. The operating system allows an application program to interact with these other system resources. Memory Management. Memory management is the function of the operating system that handles or manages primary memory. It moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to each process; it decides which process will get memory at what time; and it tracks whenever some memory gets freed or unallocated and correspondingly updates the status. All of the above is done with the goal of achieving efficient utilization of memory. Why is memory management important or necessary? Peripherals/device Management. A peripheral is an external hardware device that interacts with the OS without being a core component of the computer. This means input and output devices, such as keyboards and mice for input, and a computer monitor or printer for output. The OS must read the input and act on it, which usually means executing a set of instructions. It is also responsible for managing output devices, for instance queuing printing tasks for a printer. The OS and peripherals interact through a driver, which "translates" data from the OS to the peripheral and back. 
Each peripheral has its own driver, and it differs depending on the OS. Networking. Networking is the process of computers communicating and connecting with each other. Computers use common communication protocols over digital interconnections to communicate with each other. Networking works using two components, Nodes and Links: - Nodes are physical pieces of technology that allow for the communication between systems; these can be routers or modems. - Links are how these nodes communicate with each other; examples of these are: - Wired, using simple wires such as USB cables or more complicated cables such as Ethernet or fiber optic cables - Wireless, using free space that allows for wireless communication, such as Bluetooth and Wi-Fi Key terms for networking are: - Protocol is a set of rules and standards that govern how data is transmitted over a network; these include TCP/IP, HTTP, and FTP. - Topology refers to the physical and logical arrangement of nodes on a network. Examples of these are bus, star, ring, mesh, and tree. - IP addresses are how different devices identify themselves; each computer on a network has a unique IP address. Security. Security can be provided by the OS using different methods: - By monitoring what permissions certain applications have (what these applications have control over and what data they can access). For example, one wouldn't want their emails to be readily available to all applications they download, as this could lead to fake emails and theft of personal information. - Each component of the OS should have the minimum permissions and privileges needed to perform its function, so that a breach of one component does not compromise the whole OS. - The OS should have multiple layers of security: antivirus, authentication (username, password or passcode), firewall, encryption, etc. Input/Output devices.
All of the computing power in the world would not be useful if one could not input values into the calculations from the outside or report the results of those calculations back to the outside. Input and output units are the channels through which the computer communicates with the outside world. An Input Unit is a device through which data and programs from the outside world are entered into the computer. The first input units interpreted holes punched on paper tape or cards. Modern-day input devices include, but are not limited to, keyboards, mice, cameras (such as webcams), and scanning devices. An Output Unit is a device through which results stored in the computer memory are made available to the outside world. Examples include printers and screen monitors. Peripheral devices are all hardware components that are connected or attached to the computer system but are not part of the core computer architecture. The OS cannot interact with an external hardware device directly (there are always new ones coming out), so there is a software intermediary called a device driver. The device driver tells the OS what the peripheral device is and acts as a translator: the OS sends standard commands to the driver, which then translates them and passes them on to the device. A distinction can be made between input and output peripheral devices/units. Note that in a laptop, for example, many of these peripherals and their drivers are integrated into the computer.
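The translator role of a device driver can be sketched as follows. The class names and "commands" here are invented for illustration and do not correspond to any real driver API:

```python
class PrinterDriver:
    """Hypothetical driver: translates the OS's standard commands
    into the device-specific instructions this printer understands."""
    def translate(self, os_command):
        device_instructions = {
            "PRINT": "INKJET_FEED_AND_SPRAY",
            "STATUS": "INKJET_REPORT_LEVELS",
        }
        return device_instructions[os_command]

class OperatingSystem:
    """The OS issues the same standard commands for every printer;
    the driver hides the device-specific details."""
    def __init__(self, driver):
        self.driver = driver

    def print_document(self):
        # Standard command out, device-specific instruction back.
        return self.driver.translate("PRINT")

system = OperatingSystem(PrinterDriver())
print(system.print_document())  # INKJET_FEED_AND_SPRAY
```

Swapping in a different driver class for a different printer model would leave the `OperatingSystem` code unchanged, which is exactly why new peripherals only need a new driver, not a new OS.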