title | section | text |
---|---|---|
Desktop environment | Examples of desktop environments | Mainstream desktop environments for Unix-like operating systems use the X Window System and include KDE, GNOME, Xfce, and LXDE, any of which may be selected by users and are not tied exclusively to the operating system in use. |
Desktop environment | Examples of desktop environments | A number of other desktop environments also exist, including (but not limited to) CDE, EDE, GEM, IRIX Interactive Desktop, Sun's Java Desktop System, Jesktop, Mezzo, Project Looking Glass, ROX Desktop, UDE, Xito, and XFast. There is also FVWM-Crystal, which consists of a powerful configuration for the FVWM window manager, a theme, and further add-ons, together forming a "construction kit" for building up a desktop environment. |
Desktop environment | Examples of desktop environments | X window managers that are meant to be usable stand-alone — without another desktop environment — also include elements reminiscent of those found in typical desktop environments, most prominently Enlightenment. Other examples include Openbox, Fluxbox, WindowLab, and FVWM, as well as Window Maker and AfterStep, which both feature the NeXTSTEP GUI look and feel. However, newer versions of some of these configure themselves automatically. |
Desktop environment | Examples of desktop environments | The Amiga approach to the desktop environment was noteworthy: the original Workbench desktop environment in AmigaOS evolved over time into an entire family of descendants and alternative desktop solutions. Some of those descendants are Scalos, the Ambient desktop of MorphOS, and the Wanderer desktop of the AROS open-source OS. WindowLab also contains features reminiscent of the Amiga UI. The third-party Directory Opus software, which was originally just a navigational file manager, evolved to become a complete Amiga desktop replacement called Directory Opus Magellan. |
Desktop environment | Examples of desktop environments | OS/2 (and derivatives such as eComStation and ArcaOS) use the Workplace Shell. Earlier versions of OS/2 used the Presentation Manager.
The BumpTop project was an experimental desktop environment. Its main objective was to replace the 2D paradigm with a "real-world" 3D implementation, where documents can be freely manipulated across a virtual table. |
SPIM | SPIM | SPIM is a MIPS processor simulator, designed to run assembly language code for this architecture. The program simulates R2000 and R3000 processors, and was written by James R. Larus while a professor at the University of Wisconsin–Madison. The MIPS machine language is often taught in college-level assembly courses, especially those using the textbook Computer Organization and Design: The Hardware/Software Interface by David A. Patterson and John L. Hennessy (ISBN 1-55860-428-6). |
SPIM | SPIM | The name of the simulator is a reversal of the letters "MIPS".
SPIM simulators are available for Windows (PCSpim), Mac OS X and Unix/Linux-based (xspim) operating systems. As of release 8.0 in January 2010, the simulator is licensed under the standard BSD license. |
SPIM | SPIM | In January 2011, a major release, version 9.0, introduced QtSpim, which has a new user interface built on the cross-platform Qt framework and runs on Windows, Linux, and macOS. With this version, the project also moved to SourceForge for better maintenance. Precompiled versions of QtSpim for Linux (32-bit), Windows, and Mac OS X, as well as PCSpim for Windows, are provided. |
SPIM | The SPIM operating system | The SPIM simulator comes with a rudimentary operating system, which gives the programmer convenient access to commonly used functions. Such functions are invoked with the syscall instruction; the OS then acts according to the values held in specific registers.
The SPIM OS expects a label named main as the handover point from the OS preamble. |
SPIM | SPIM Alternatives/Competitors | MARS (MIPS Assembler and Runtime Simulator) is a Java-based IDE for the MIPS Assembly Programming Language and an alternative to SPIM.
It was first released in 2005 and is under active development. Imperas is a suite of embedded software development tools for the MIPS architecture which uses just-in-time (JIT) compilation for emulation and simulation.
The simulator was initially released in 2008 and is under active development.
There are over 30 open-source models of 32-bit and 64-bit MIPS cores.
Another alternative to SPIM for educational purposes is the CREATOR simulator. CREATOR is portable (it runs in current web browsers) and allows students to learn several assembly languages of different processors at the same time (CREATOR includes examples of MIPS32 and RISC-V instructions). |
Carbon tetraiodide | Carbon tetraiodide | Carbon tetraiodide is a tetrahalomethane with the molecular formula CI4. Being bright red, it is a relatively rare example of a highly colored methane derivative. It is only 2.3% by weight carbon, although other methane derivatives are known with still less carbon. |
Carbon tetraiodide | Structure | The tetrahedral molecule features C-I distances of 2.12 ± 0.02 Å. The molecule is slightly crowded with short contacts between iodine atoms of 3.459 ± 0.03 Å, and possibly for this reason, it is thermally and photochemically unstable.
Carbon tetraiodide crystallizes in a tetragonal crystal structure (a = 6.409 Å, c = 9.558 Å). It has zero dipole moment due to its symmetrically substituted tetrahedral geometry. |
Carbon tetraiodide | Properties, synthesis, uses | Carbon tetraiodide is slightly reactive towards water, giving iodoform and I2. It is soluble in nonpolar organic solvents. It decomposes thermally and photochemically to tetraiodoethylene, C2I4. Its synthesis entails AlCl3-catalyzed halide exchange, which is conducted at room temperature: CCl4 + 4 EtI → CI4 + 4 EtCl. The product crystallizes from the reaction solution.
Carbon tetraiodide is used as an iodination reagent, often upon reaction with bases. Ketones are converted to 1,1-diiodoalkenes upon treatment with triphenylphosphine (PPh3) and carbon tetraiodide. Alcohols are converted into the corresponding iodides by a mechanism similar to the Appel reaction. In an Appel reaction, carbon tetrachloride is used to generate alkyl chlorides from alcohols. |
Carbon tetraiodide | Safety considerations | Manufacturers recommend that carbon tetraiodide be stored near 0 °C (32 °F). As a ready source of iodine, it is an irritant. Its LD50 in rats is 18 mg/kg. In general, perhalogenated organic compounds should be considered toxic, with the narrow exception of small perfluoroalkanes (essentially inert due to the strength of the C-F bond). |
Asystole | Asystole | Asystole (New Latin, from Greek privative a- "not, without" + systolē "contraction") is the absence of ventricular contractions in the context of a lethal heart arrhythmia (in contrast to induced asystole, in which a cooled patient on a heart-lung machine under general anesthesia has the heart deliberately stopped for surgery). Asystole is the most serious form of cardiac arrest and is usually irreversible. Also referred to as cardiac flatline, asystole is the state of total cessation of electrical activity from the heart, which means no tissue contraction from the heart muscle and therefore no blood flow to the rest of the body. |
Asystole | Asystole | Asystole should not be confused with very brief pauses in the heart's electrical activity—even those that produce a temporary flatline—that can occur in certain less severe abnormal rhythms. Asystole is different from very fine occurrences of ventricular fibrillation, though both have a poor prognosis, and untreated fine VF will lead to asystole. Faulty wiring, disconnection of electrodes and leads, and power disruptions should be ruled out. |
Asystole | Asystole | Asystolic patients (as opposed to those with a "shockable rhythm" such as coarse or fine ventricular fibrillation, or unstable ventricular tachycardia that is not producing a pulse, which can potentially be treated with defibrillation) usually present with a very poor prognosis. Asystole is found initially in only about 28% of cardiac arrest cases in hospitalized patients, but only 15% of these survive, even with the benefit of an intensive care unit, with the rate being lower (6%) for those already prescribed drugs for high blood pressure. Asystole is treated by cardiopulmonary resuscitation (CPR) combined with an intravenous vasopressor such as epinephrine (a.k.a. adrenaline). Sometimes an underlying reversible cause can be detected and treated (the so-called "Hs and Ts", an example of which is hypokalaemia). Several interventions previously recommended—such as defibrillation (known to be ineffective on asystole, but previously performed in case the rhythm was actually very fine ventricular fibrillation) and intravenous atropine—are no longer part of the routine protocols recommended by most major international bodies. For asystole, 1 mg of epinephrine is given by IV every 3–5 minutes. Survival rates in a cardiac arrest patient with asystole are much lower than for a patient with a rhythm amenable to defibrillation; asystole is itself not a "shockable" rhythm. Even in those cases where an individual suffers a cardiac arrest with asystole and it is converted to a less severe shockable rhythm (ventricular fibrillation or ventricular tachycardia), this does not necessarily improve the person's chances of survival to discharge from the hospital, although a case witnessed by a bystander, or better, a paramedic, who gave good CPR and cardiac drugs is an important confounding factor to be considered in certain select cases. Out-of-hospital survival rates (even with emergency intervention) are less than 2 percent. |
Asystole | Cause | Possible underlying causes, which may be treatable and reversible in certain cases, include the Hs and Ts. |
Asystole | Cause | The "Hs and Ts":
Hypovolemia
Hypoxia
Hydrogen ions (acidosis)
Hypothermia
Hyperkalemia or hypokalemia
Toxins (e.g., drug overdose)
Cardiac tamponade
Tension pneumothorax
Thrombosis (myocardial infarction or pulmonary embolism)
While the heart is asystolic, there is no blood flow to the brain unless CPR or internal cardiac massage (when the chest is opened and the heart is manually compressed) is performed, and even then it is a small amount. After many emergency treatments have been applied but the heart is still unresponsive, it is time to consider pronouncing the patient dead. Even in the rare case that a rhythm reappears, if asystole has persisted for fifteen minutes or more, the brain will have been deprived of oxygen long enough to cause severe hypoxic brain damage, resulting in brain death or a persistent vegetative state. |
Lie–Kolchin theorem | Lie–Kolchin theorem | In mathematics, the Lie–Kolchin theorem is a theorem in the representation theory of linear algebraic groups; Lie's theorem is the analog for linear Lie algebras.
It states that if G is a connected and solvable linear algebraic group defined over an algebraically closed field and ρ:G→GL(V) a representation on a nonzero finite-dimensional vector space V, then there is a one-dimensional linear subspace L of V such that ρ(G)(L)=L. |
Lie–Kolchin theorem | Lie–Kolchin theorem | That is, ρ(G) has an invariant line L, on which G therefore acts through a one-dimensional representation. This is equivalent to the statement that V contains a nonzero vector v that is a common (simultaneous) eigenvector for all ρ(g), g ∈ G. It follows directly that every irreducible finite-dimensional representation of a connected and solvable linear algebraic group G has dimension one. In fact, this is another way to state the Lie–Kolchin theorem. |
Lie–Kolchin theorem | Lie–Kolchin theorem | The result for Lie algebras was proved by Sophus Lie (1876) and for algebraic groups was proved by Ellis Kolchin (1948, p.19).
The Borel fixed point theorem generalizes the Lie–Kolchin theorem. |
Lie–Kolchin theorem | Triangularization | Sometimes the theorem is also referred to as the Lie–Kolchin triangularization theorem because by induction it implies that with respect to a suitable basis of V the image ρ(G) has a triangular shape; in other words, the image group ρ(G) is conjugate in GL(n,K) (where n = dim V) to a subgroup of the group T of upper triangular matrices, the standard Borel subgroup of GL(n,K): the image is simultaneously triangularizable. |
Lie–Kolchin theorem | Triangularization | The theorem applies in particular to a Borel subgroup of a semisimple linear algebraic group G. |
Lie–Kolchin theorem | Counter-example | If the field K is not algebraically closed, the theorem can fail. The standard unit circle, viewed as the set of complex numbers {x + iy ∈ C ∣ x² + y² = 1} of absolute value one, is a one-dimensional commutative (and therefore solvable) linear algebraic group over the real numbers which has a two-dimensional representation into the special orthogonal group SO(2) without an invariant (real) line. Here the image ρ(z) of z = x + iy is the orthogonal matrix with rows (x, y) and (−y, x).
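A short worked computation makes the failure concrete (standard linear algebra, written out here only for illustration):

```latex
\rho(z) = \begin{pmatrix} x & y \\ -y & x \end{pmatrix}, \qquad x^2 + y^2 = 1,
\qquad \det\bigl(\rho(z) - \lambda I\bigr) = (x - \lambda)^2 + y^2 .
```

The eigenvalues are λ = x ± iy with eigenvectors (1, ±i); when y ≠ 0 these eigenvectors are not real, so no line in R² is preserved by every ρ(z), exactly as the counter-example requires. |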
Isotope fractionation | Isotope fractionation | Isotope fractionation describes fractionation processes that affect the relative abundance of isotopes, phenomena which are taken advantage of in isotope geochemistry and other fields. Normally, the focus is on stable isotopes of the same element. Isotopic fractionation can be measured by isotope analysis, using isotope-ratio mass spectrometry or cavity ring-down spectroscopy to measure ratios of isotopes, an important tool to understand geochemical and biological systems. For example, biochemical processes cause changes in ratios of stable carbon isotopes incorporated into biomass. |
Isotope fractionation | Definition | The partitioning of stable isotopes between two substances A and B can be expressed by the isotopic fractionation factor (alpha): αA-B = RA/RB, where R is the ratio of the heavy to the light isotope (e.g., ²H/¹H or ¹⁸O/¹⁶O). Values of alpha tend to be very close to 1.
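A worked illustration of the arithmetic, using hypothetical ratio values chosen only for the example:

```latex
\alpha_{A\text{-}B} \;=\; \frac{R_A}{R_B}
\;=\; \frac{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_A}{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_B}
\;=\; \frac{0.0020100}{0.0020052} \;\approx\; 1.0024 .
```

Such small deviations from 1 are, in practice, usually reported in per-mil (δ) notation relative to a reference standard. |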
Isotope fractionation | Types | There are four types of isotope fractionation (of which the first two are normally most important): equilibrium fractionation, kinetic fractionation, mass-independent fractionation (or non-mass-dependent fractionation), and transient kinetic isotope fractionation. |
Isotope fractionation | Example | Isotope fractionation occurs during a phase transition, when the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (18O and 2H) become enriched in the liquid phase while the lighter isotopes (16O and 1H) tend toward the vapor phase. |
Isotope fractionation | Literature | Faure G., Mensing T.M. (2004). Isotopes: Principles and Applications. John Wiley & Sons.
Hoefs J. (2004). Stable Isotope Geochemistry. Springer Verlag.
Sharp Z. (2006). Principles of Stable Isotope Geochemistry. Prentice Hall. |
Cloud chamber | Cloud chamber | A cloud chamber, also known as a Wilson cloud chamber, is a particle detector used for visualizing the passage of ionizing radiation. |
Cloud chamber | Cloud chamber | A cloud chamber consists of a sealed environment containing a supersaturated vapor of water or alcohol. An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture by knocking electrons off gas molecules via electrostatic forces during collisions, resulting in a trail of ionized gas particles. The resulting ions act as condensation centers around which a mist-like trail of small droplets forms if the gas mixture is at the point of condensation. These droplets are visible as a "cloud" track that persists for several seconds while the droplets fall through the vapor. These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while a beta particle track is wispy and shows more evidence of deflections by collisions. Cloud chambers were invented in the early 1900s by the Scottish physicist Charles Thomson Rees Wilson. They played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber. In particular, the discoveries of the positron in 1932 (see Fig. 1) and the muon in 1936, both by Carl Anderson (awarded a Nobel Prize in Physics in 1936), used cloud chambers. The discovery of the kaon by George Rochester and Clifford Charles Butler in 1947 was also made using a cloud chamber as the detector. In each of these cases, cosmic rays were the source of ionizing radiation. Cloud chambers were also used with artificial sources of particles, for example in radiography applications as part of the Manhattan Project. |
Cloud chamber | Invention | Charles Thomson Rees Wilson (1869–1959), a Scottish physicist, is credited with inventing the cloud chamber. Inspired by sightings of the Brocken spectre while working on the summit of Ben Nevis in 1894, he began to develop expansion chambers for studying cloud formation and optical phenomena in moist air. Very rapidly he discovered that ions could act as centers for water droplet formation in such chambers. He pursued the application of this discovery and perfected the first cloud chamber in 1911. In Wilson's original chamber (See Fig. 2) the air inside the sealed device was saturated with water vapor, then a diaphragm was used to expand the air inside the chamber (adiabatic expansion), cooling the air and starting to condense water vapor. Hence the name expansion cloud chamber is used. When an ionizing particle passes through the chamber, water vapor condenses on the resulting ions and the trail of the particle is visible in the vapor cloud. Wilson received half the Nobel Prize in Physics in 1927 for his work on the cloud chamber (the same year as Arthur Compton received half the prize for the Compton Effect). This kind of chamber is also called a pulsed chamber because the conditions for operation are not continuously maintained. Further developments were made by Patrick Blackett who utilised a stiff spring to expand and compress the chamber very rapidly, making the chamber sensitive to particles several times a second. A cine film was used to record the images. |
Cloud chamber | Invention | The diffusion cloud chamber was developed in 1936 by Alexander Langsdorf. This chamber differs from the expansion cloud chamber in that it is continuously sensitized to radiation, and in that the bottom must be cooled to a rather low temperature, generally colder than −26 °C (−15 °F). Instead of water vapor, alcohol is used because of its lower freezing point. Cloud chambers cooled by dry ice or Peltier effect thermoelectric cooling are common demonstration and hobbyist devices; the alcohol used in them is commonly isopropyl alcohol or methylated spirit. |
Cloud chamber | Structure and operation | Diffusion-type cloud chambers will be discussed here. A simple cloud chamber consists of the sealed environment, a warm top plate and a cold bottom plate (See Fig. 3). It requires a source of liquid alcohol at the warm side of the chamber where the liquid evaporates, forming a vapor that cools as it falls through the gas and condenses on the cold bottom plate. Some sort of ionizing radiation is needed. |
Cloud chamber | Structure and operation | Isopropanol, methanol, or other alcohol vapor saturates the chamber. The alcohol falls as it cools down and the cold condenser provides a steep temperature gradient. The result is a supersaturated environment. As energetic charged particles pass through the gas they leave ionization trails. The alcohol vapor condenses around gaseous ion trails left behind by the ionizing particles. This occurs because alcohol and water molecules are polar, resulting in a net attractive force toward a nearby free charge (See Fig. 4). The result is a misty cloud-like formation, seen by the presence of droplets falling down to the condenser. When the tracks are emitted from a source, their point of origin can easily be determined. Fig. 5 shows an example of an alpha particle from a Pb-210 pin-type source undergoing Rutherford scattering. |
Cloud chamber | Structure and operation | Just above the cold condenser plate there is a volume of the chamber which is sensitive to ionization tracks. The ion trail left by the radioactive particles provides an optimal trigger for condensation and cloud formation. This sensitive volume is increased in height by employing a steep temperature gradient and stable conditions. A strong electric field is often used to draw cloud tracks down to the sensitive region of the chamber and increase its sensitivity. The electric field can also suppress background "rain", caused by condensation forming above the sensitive volume, which would otherwise obscure tracks with constant precipitation. A black background makes it easier to observe cloud tracks, and typically a tangential light source is needed to illuminate the white droplets against the black background. Often the tracks are not apparent until a shallow pool of alcohol is formed at the condenser plate. If a magnetic field is applied across the cloud chamber, positively and negatively charged particles will curve in opposite directions, according to the Lorentz force law; strong-enough fields are difficult to achieve, however, with small hobbyist setups. This method was also used to prove the existence of the positron in 1932, in accordance with Paul Dirac's theoretical prediction, published in 1928.
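For reference, the relation commonly used to read particle momentum off the curvature of a track (for a particle of charge q moving perpendicular to a uniform magnetic field B) is:

```latex
r \;=\; \frac{p}{\lvert q\rvert\, B}
\qquad\Longleftrightarrow\qquad
p \;=\; \lvert q\rvert\, B\, r .
```

Tracks of higher momentum therefore bend less in the same field, and the bending direction gives the sign of the charge. |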
Cloud chamber | Benefits and functionality | Particle Visualization: Cloud chambers allow scientists to observe the paths of charged particles as they pass through the chamber. By creating a supersaturated vapor environment, the particles ionize the vapor molecules, creating a visible trail of tiny droplets or clouds. This visualization helps researchers study the behavior, properties, and interactions of these particles.
Particle Identification: Cloud chambers can be used to identify different types of particles based on their path and characteristics. By analyzing the curvature, density, and other properties of the particle tracks, scientists can distinguish between various particles, such as electrons, muons, alpha particles, and more.
Studying Radioactivity: Cloud chambers are particularly useful in studying radioactive decay and radiation. Radioactive particles emitted from a radioactive source can be observed and their properties analyzed within the cloud chamber. This helps scientists understand the nature of radioactivity, decay processes, and the behavior of radioactive particles.
Educational Tool, Research and Discovery: Cloud chambers are widely used as demonstration devices in teaching, and they have been instrumental in numerous scientific discoveries throughout history, including the identification of new particles and the study of particle interactions. By providing a means to observe and analyze particle tracks, cloud chambers have contributed significantly to advancing our knowledge of the subatomic world. |
Cloud chamber | Other particle detectors | The bubble chamber was invented by Donald A. Glaser of the United States in 1952, and for this, he was awarded the Nobel Prize in Physics in 1960. The bubble chamber similarly reveals the tracks of subatomic particles, but as trails of bubbles in a superheated liquid, usually liquid hydrogen. Bubble chambers can be made physically larger than cloud chambers, and since they are filled with much-denser liquid material, they reveal the tracks of much more energetic particles. These factors rapidly made the bubble chamber the predominant particle detector for a number of decades, so that cloud chambers were effectively superseded in fundamental research by the start of the 1960s. A spark chamber is an electrical device that uses a grid of uninsulated electric wires in a chamber, with high voltages applied between the wires. Energetic charged particles cause ionization of the gas along the path of the particle in the same way as in the Wilson cloud chamber, but in this case the ambient electric fields are high enough to precipitate full-scale gas breakdown in the form of sparks at the position of the initial ionization. The presence and location of these sparks is then registered electrically, and the information is stored for later analysis, such as by a digital computer. |
Cloud chamber | Other particle detectors | Similar condensation effects can be observed as Wilson clouds, also called condensation clouds, at large explosions in humid air and other Prandtl–Glauert singularity effects. |
Popular beat combo | Popular beat combo | Popular beat combo, which originated as a synonym for "pop group", is a phrase within British culture. It may also be used more specifically to refer to The Beatles, or other such purveyors of beat music.
The phrase is frequently used in Private Eye and in the BBC panel game Have I Got News For You, making fun of Ian Hislop's supposed lack of knowledge about modern music. |
Popular beat combo | Derivation | It is widely held that the phrase "popular beat combo" was coined in an English courtroom in the 1960s, by a barrister in response to a judge asking (for the benefit of the court's records) "Who are The Beatles?"; the answer being "I believe they are a popular beat combo, m'lud." However, neither the question nor the answer has ever been reliably attributed, and the exchange remains the stuff of urban legend. Marcel Berlins, legal correspondent for The Guardian newspaper, failed in his attempt to track down any verification. In 2007, Berlins restated his offer of "a bottle of best Guardian champagne to any reader with a solution". Christie Davies attributes the encounter to Judge James Pickles. The phrase is part of a trope in postwar British culture where judges are seen to be out of touch, the ultimate example being in the 1960 obscenity trial of Lady Chatterley's Lover, in which the legal profession was ridiculed for being out of touch with changing social norms when the chief prosecutor, Mervyn Griffith-Jones, asked jurors to consider if it were the kind of book "you would wish your wife or servants to read". |
Protoaphin-aglucone dehydratase (cyclizing) | Protoaphin-aglucone dehydratase (cyclizing) | The enzyme protoaphin-aglucone dehydratase (cyclizing) (EC 4.2.1.73) catalyzes the chemical reaction protoaphin aglucone ⇌ xanthoaphin + H2O. This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is protoaphin-aglucone hydro-lyase (cyclizing; xanthoaphin-forming). Other names in common use include protoaphin dehydratase, protoaphin dehydratase (cyclizing), and protoaphin-aglucone hydro-lyase (cyclizing). |
Coeliac UK | Coeliac UK | Coeliac UK is a UK charity for people with coeliac disease - a condition estimated to affect 1 out of every 100 people and to be twice as common in women as in men - and the skin manifestation of the condition, dermatitis herpetiformis (DH). |
Coeliac UK | History | Founded in 1968 by Amnesty International founder Peter Benenson, and Elizabeth Segall, Coeliac UK (originally called The Coeliac Society) launched the first symbol that acknowledged and advertised that a product contained no gluten, namely the Crossed Grain symbol. Noted clinician Sir Christopher Booth was a founding member.
The charity renamed itself Coeliac UK in 2001 and has since established the All Party Parliamentary Group on coeliac disease and DH and worked with the Food Standards Agency to introduce a new law governing the labelling of gluten-free food. English actress Caroline Quentin is the current patron of the charity. |
Apoptosis-inducing factor, mitochondria-associated 3 | Apoptosis-inducing factor, mitochondria-associated 3 | Apoptosis-inducing factor, mitochondria-associated 3 is a protein that in humans is encoded by the AIFM3 gene. |
Troponin | Troponin | Troponin, or the troponin complex, is a complex of three regulatory proteins (troponin C, troponin I, and troponin T) that are integral to muscle contraction in skeletal muscle and cardiac muscle, but not smooth muscle. Measurements of cardiac-specific troponins I and T are extensively used as diagnostic and prognostic indicators in the management of myocardial infarction and acute coronary syndrome. Blood troponin levels may be used as a diagnostic marker for stroke or other myocardial injury that is ongoing, although the sensitivity of this measurement is low. |
Troponin | Function | Troponin is attached to the protein tropomyosin and lies within the groove between actin filaments in muscle tissue. In a relaxed muscle, tropomyosin blocks the attachment site for the myosin crossbridge, thus preventing contraction. When the muscle cell is stimulated to contract by an action potential, calcium channels open in the sarcoplasmic membrane and release calcium into the sarcoplasm. Some of this calcium attaches to troponin, which causes it to change shape, exposing binding sites for myosin (active sites) on the actin filaments. Myosin's binding to actin causes crossbridge formation, and contraction of the muscle begins. |
Troponin | Function | Troponin is found in both skeletal muscle and cardiac muscle, but the specific versions of troponin differ between types of muscle. The main difference is that the TnC subunit of troponin in skeletal muscle has four calcium ion-binding sites, whereas in cardiac muscle there are only three. The actual amount of calcium that binds to troponin has not been definitively established. |
Troponin | Physiology | In both cardiac and skeletal muscles, muscular force production is controlled primarily by changes in intracellular calcium concentration. In general, when calcium rises, the muscles contract and, when calcium falls, the muscles relax. Troponin is a component of thin filaments (along with actin and tropomyosin), and is the protein complex to which calcium binds to trigger the production of muscular force. Troponin has three subunits, TnC, TnI, and TnT, each playing a role in force regulation. Under resting intracellular levels of calcium, tropomyosin covers the active actin sites to which myosin (a molecular motor organized in muscle thick filaments) binds in order to generate force. When calcium becomes bound to specific sites in the N-domain of TnC, a series of protein structural changes occurs, such that tropomyosin is rolled away from myosin-binding sites on actin, allowing myosin to attach to the thin filament, produce force, and shorten the sarcomere. The individual subunits serve different functions:
Troponin C binds to calcium ions to produce a conformational change in TnI.
Troponin T binds to tropomyosin, interlocking them to form a troponin-tropomyosin complex.
Troponin I binds to actin in thin myofilaments to hold the actin-tropomyosin complex in place.
Smooth muscle does not have troponin. |
Troponin | Physiology | Subunits
TnT is a tropomyosin-binding subunit which regulates the interaction of the troponin complex with thin filaments; TnI inhibits the ATPase activity of actomyosin; TnC is the Ca2+-binding subunit, playing the main role in Ca2+-dependent regulation of muscle contraction. TnT and TnI in cardiac muscle are represented by forms different from those in skeletal muscles. Two isoforms of TnI and two isoforms of TnT are expressed in human skeletal muscle tissue (skTnI and skTnT). Only one tissue-specific isoform of TnI is described for cardiac muscle tissue (cTnI), whereas the existence of several cardiac-specific isoforms of TnT (cTnT) is described in the literature. No cardiac-specific isoforms are known for human TnC. TnC in human cardiac muscle tissue is represented by an isoform typical of slow skeletal muscle. Another form of TnC, the fast skeletal TnC isoform, is more typical of fast skeletal muscles. cTnI is expressed only in myocardium. No examples of cTnI expression in healthy or injured skeletal muscle or in other tissue types are known. cTnT is probably less cardiac-specific: the expression of cTnT in the skeletal tissue of patients with chronic skeletal muscle injuries has been described. Inside the cardiac troponin complex, the strongest interaction between molecules has been demonstrated for the cTnI–TnC binary complex, especially in the presence of Ca2+ (KA = 1.5 × 10⁻⁸ M⁻¹). TnC, forming a complex with cTnI, changes the conformation of the cTnI molecule and shields part of its surface. According to the latest data, cTnI is released into the blood stream of the patient in the form of a binary complex with TnC or a ternary complex with cTnT and TnC. cTnI-TnC complex formation plays an important positive role in improving the stability of the cTnI molecule. cTnI, which is extremely unstable in its free form, demonstrates significantly better stability in complex with TnC or in the ternary cTnI-cTnT-TnC complex. It has been demonstrated that the stability of cTnI in the native complex is significantly better than the stability of the purified form of the protein or the stability of cTnI in artificial troponin complexes combined from purified proteins. |
Troponin | Research | Cardiac conditions
Certain subtypes of troponin (cardiac I and T) are sensitive and specific indicators of damage to the heart muscle (myocardium). They are measured in the blood to differentiate between unstable angina and myocardial infarction (heart attack) in people with chest pain or acute coronary syndrome. A person who recently had a myocardial infarction would have an area of damaged heart muscle and elevated cardiac troponin levels in the blood. This can also occur in people with coronary vasospasm, a type of myocardial infarction involving severe constriction of the cardiac blood vessels. After a myocardial infarction, troponins may remain high for up to 2 weeks. Cardiac troponins are a marker of all heart muscle damage, not just myocardial infarction, which is the most severe form of heart disorder. However, the diagnostic criterion for raised troponin indicating myocardial infarction is currently set by the WHO at a threshold of 2 μg or higher. Critical levels of other cardiac biomarkers are also relevant, such as creatine kinase. Other conditions that directly or indirectly lead to heart muscle damage and death can also increase troponin levels, such as kidney failure. Severe tachycardia (for example due to supraventricular tachycardia) in an individual with normal coronary arteries can also lead to increased troponins, presumably due to increased oxygen demand and inadequate supply to the heart muscle. Troponins are also increased in patients with heart failure, where they also predict mortality and ventricular rhythm abnormalities. They can rise in inflammatory conditions such as myocarditis and pericarditis with heart muscle involvement (which is then termed myopericarditis). Troponins can also indicate several forms of cardiomyopathy, such as dilated cardiomyopathy, hypertrophic cardiomyopathy or (left) ventricular hypertrophy, peripartum cardiomyopathy, Takotsubo cardiomyopathy, or infiltrative disorders such as cardiac amyloidosis. Heart injury with increased troponins also occurs in cardiac contusion, defibrillation, and internal or external cardioversion. Troponins are commonly increased after several procedures such as cardiac surgery and heart transplantation, closure of atrial septal defects, percutaneous coronary intervention, or radiofrequency ablation. |
Troponin | Research | Non-cardiac conditions
The distinction between cardiac and non-cardiac conditions is somewhat artificial; the conditions listed below are not primary heart diseases, but they exert indirect effects on the heart muscle.
Troponins are increased in around 40% of patients with critical illnesses such as sepsis. There is an increased risk of mortality and length of stay in the intensive-care unit in these patients. In severe gastrointestinal bleeding, there can also be a mismatch between oxygen demand and supply of the myocardium. |
Troponin | Research | Chemotherapy agents can exert toxic effects on the heart (examples include anthracycline, cyclophosphamide, 5-fluorouracil, and cisplatin). Several toxins and venoms can also lead to heart muscle injury (scorpion venom, snake venom, and venom from jellyfish and centipedes). Carbon monoxide poisoning or cyanide poisoning can also be accompanied by the release of troponins due to hypoxic cardiotoxic effects. Cardiac injury occurs in about one-third of severe CO poisoning cases, and troponin screening is appropriate in these patients. In primary pulmonary hypertension, pulmonary embolism, and acute exacerbations of chronic obstructive pulmonary disease (COPD), right ventricular strain results in increased wall tension and may cause ischemia. Of course, patients with COPD exacerbations might also have concurrent myocardial infarction or pulmonary embolism, so care has to be taken before attributing increased troponin levels to COPD. |
Troponin | Research | People with end-stage kidney disease can have chronically elevated troponin T levels, which are linked to a poorer prognosis. Troponin I is less likely to be falsely elevated. Strenuous endurance exercise such as marathons or triathlons can lead to increased troponin levels in up to one-third of subjects, but this is not linked to adverse health effects in these competitors.
High troponin T levels have also been reported in patients with inflammatory muscle diseases such as polymyositis or dermatomyositis. Troponins are also increased in rhabdomyolysis.
In hypertensive disorders of pregnancy such as preeclampsia, elevated troponin levels indicate some degree of myofibrillar damage. Cardiac troponin T and I can be used to monitor drug- and toxin-induced cardiomyocyte toxicity. In 2020, it was found that patients with severe COVID-19 had higher troponin I levels compared to those with milder disease. |
Troponin | Research | Prognostic use
Elevated troponin levels are prognostically important in many of the conditions in which they are used for diagnosis. In a community-based cohort study indicating the importance of silent cardiac damage, troponin I has been shown to predict mortality and a first coronary heart disease event in men free from cardiovascular disease at baseline. In people with stroke, elevated blood troponin levels are not a useful marker for detecting the condition. |
Troponin | Research | Subunits
First cTnI and later cTnT were originally used as markers for cardiac cell death. Both proteins are now widely used to diagnose acute myocardial infarction (AMI), unstable angina, post-surgery myocardial trauma, and some other diseases involving cardiac muscle injury. Both markers can be detected in a patient's blood 3–6 hours after the onset of chest pain, reaching peak levels within 16–30 hours. Elevated concentrations of cTnI and cTnT in blood samples can be detected even 5–8 days after the onset of symptoms, making both proteins useful also for the late diagnosis of AMI. |
Troponin | Detection | Cardiac troponin T and I are measured by immunoassay methods.
Due to patent regulations, a single manufacturer (Roche Diagnostics) distributes cTnT.
A host of diagnostic companies make cTnI immunoassay methods available on many different immunoassay platforms. Troponin elevation following cardiac cell necrosis starts within 2–3 hours, peaks in approximately 24 hours, and persists for 1–2 weeks. |
ML (programming language) | ML (programming language) | ML (Meta Language) is a general-purpose functional programming language. It is known for its use of the polymorphic Hindley–Milner type system, which automatically assigns the types of most expressions without requiring explicit type annotations, and ensures type safety – there is a formal proof that a well-typed ML program does not cause runtime type errors. ML provides pattern matching for function arguments, garbage collection, imperative programming, call-by-value and currying. It is used heavily in programming language research and is one of the few languages to be completely specified and verified using formal semantics. Its types and pattern matching make it well-suited and commonly used to operate on other formal languages, such as in compiler writing, automated theorem proving, and formal verification. |
ML (programming language) | Overview | Features of ML include a call-by-value evaluation strategy, first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types, pattern matching, and exception handling. ML uses static scoping rules. ML can be referred to as an impure functional language, because although it encourages functional programming, it does allow side-effects (like languages such as Lisp, but unlike a purely functional language such as Haskell). Like most programming languages, ML uses eager evaluation, meaning that all subexpressions are always evaluated, though lazy evaluation can be achieved through the use of closures. Thus one can create and use infinite streams as in Haskell, but their expression is indirect, as illustrated in the sketch below.
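A minimal sketch of this indirect style in Standard ML, representing the tail of a stream as a thunk (the names stream, from, and take are illustrative, not standard library functions):

```sml
(* A lazy, potentially infinite stream: the tail is delayed behind a thunk. *)
datatype 'a stream = Cons of 'a * (unit -> 'a stream)

(* The infinite stream n, n+1, n+2, ... *)
fun from n = Cons (n, fn () => from (n + 1))

(* Force the first k elements into an ordinary list. *)
fun take (_, 0) = []
  | take (Cons (x, rest), k) = x :: take (rest (), k - 1)

val firstFive = take (from 1, 5)   (* [1, 2, 3, 4, 5] *)
```

Only the elements actually demanded by take are ever computed. |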
ML (programming language) | Overview | ML's strengths are mostly applied in language design and manipulation (compilers, analyzers, theorem provers), but it is a general-purpose language also used in bioinformatics and financial systems.
ML was developed by Robin Milner and others in the early 1970s at the University of Edinburgh, and its syntax is inspired by ISWIM. Historically, ML was conceived to develop proof tactics in the LCF theorem prover (whose language, pplambda, a combination of the first-order predicate calculus and the simply-typed polymorphic lambda calculus, had ML as its metalanguage).
Today there are several languages in the ML family; the three most prominent are Standard ML (SML), OCaml and F#. Ideas from ML have influenced numerous other languages, like Haskell, Cyclone, Nemerle, ATS, and Elm. |
ML (programming language) | Examples | The following examples use the syntax of Standard ML. Other ML dialects such as OCaml and F# differ in small ways.
Factorial
The factorial function can be expressed as pure ML, as in the sketch below. It describes the factorial as a recursive function with a single terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much of ML code is similar to mathematics in facility and syntax.
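A minimal clausal definition with explicit type annotations (a sketch; the name fac follows the surrounding text, and the exact formatting of the original listing may differ):

```sml
fun fac (0 : int) : int = 1
  | fac (n : int) : int = n * fac (n - 1)
```

Here the clause for 0 is the base case and the second clause is the recursive case. |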
ML (programming language) | Examples | Part of the definition shown is optional, and describes the types of this function. The notation E : t can be read as "expression E has type t". For instance, the argument n is assigned type integer (int), and fac (n : int), the result of applying fac to the integer n, also has type integer. The function fac as a whole then has type function from integer to integer (int -> int); that is, fac accepts an integer as an argument and returns an integer result. Thanks to type inference, the type annotations can be omitted and will be derived by the compiler; rewritten without them, the definition looks like the sketch below. The function also relies on pattern matching, an important part of ML programming. Note that parameters of a function are not necessarily in parentheses but separated by spaces. When the function's argument is 0 (zero) it will return the integer 1 (one); for all other cases the second line is tried. This is the recursion, and it executes the function again until the base case is reached.
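The same function without annotations, as a sketch:

```sml
fun fac 0 = 1
  | fac n = n * fac (n - 1)
```

The compiler infers the type int -> int for this definition. |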
ML (programming language) | Examples | This implementation of the factorial function is not guaranteed to terminate, since a negative argument causes an infinite descending chain of recursive calls. A more robust implementation would check for a nonnegative argument before recursing, as in the sketch below. The problematic case (when n is negative) demonstrates a use of ML's exception system.
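One way to write such a check, raising the predefined Domain exception for negative input (a sketch; the exact guard used in the original listing may differ):

```sml
fun fact n =
    if n < 0 then raise Domain            (* reject negative arguments *)
    else if n = 0 then 1
    else n * fact (n - 1)
```

Calling fact with a negative number now raises an exception instead of recursing forever. |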
ML (programming language) | Examples | The function can be improved further by writing its inner loop as a tail call, such that the call stack need not grow in proportion to the number of function calls. This is achieved by adding an extra accumulator parameter to the inner function; the result is the first sketch below.
List reverse
The following function reverses the elements in a list (second sketch below). More precisely, it returns a new list whose elements are in reverse order compared to the given list.
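A tail-recursive factorial with an accumulator (a sketch; the inner name loop is illustrative):

```sml
fun fact n =
    let
        (* acc carries the partial product, so the recursive call is a tail call *)
        fun loop (0, acc) = acc
          | loop (i, acc) = loop (i - 1, i * acc)
    in
        if n < 0 then raise Domain else loop (n, 1)
    end
```

And a simple list reversal, as a sketch:

```sml
fun reverse [] = []
  | reverse (x :: xs) = reverse xs @ [x]
```

Here @ is list append. |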
ML (programming language) | Examples | This implementation of reverse, while correct and clear, is inefficient, requiring quadratic time for execution. The function can be rewritten to execute in linear time, as in the sketch below. The function is an example of parametric polymorphism: it can consume lists whose elements have any type, and it returns lists of the same type.
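A linear-time version using an accumulator (a sketch; its inferred type is 'a list -> 'a list, illustrating the parametric polymorphism mentioned above):

```sml
fun reverse xs =
    let
        (* each element is pushed onto acc exactly once: linear time overall *)
        fun loop ([], acc) = acc
          | loop (y :: ys, acc) = loop (ys, y :: acc)
    in
        loop (xs, [])
    end
```

Running reverse [1, 2, 3] yields [3, 2, 1]. |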
ML (programming language) | Examples | Modules
Modules are ML's system for structuring large projects and libraries. A module consists of a signature file and one or more structure files. The signature file specifies the API to be implemented (like a C header file, or a Java interface file). The structure implements the signature (like a C source file or a Java class file). For example, an Arithmetic signature and an implementation of it using rational numbers can be defined as in the sketch below. These are imported into the interpreter by the 'use' command. Interaction with the implementation is only allowed via the signature functions; for example, it is not possible to create a 'Rat' data object directly via this code. The 'structure' block hides all the implementation detail from outside.
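A minimal sketch of the idea (the signature name ARITHMETIC, its members, and the file names in the comments are illustrative assumptions, not the article's exact listing):

```sml
(* arithmetic.sig -- the API to be implemented *)
signature ARITHMETIC =
sig
    type t
    val zero : t
    val one  : t
    val sum  : t * t -> t
    val prod : t * t -> t
end

(* rational.sml -- an implementation using rational numbers.
   Opaque ascription (:>) hides the Rat constructor from clients. *)
structure Rational :> ARITHMETIC =
struct
    datatype t = Rat of int * int                 (* numerator, denominator *)
    val zero = Rat (0, 1)
    val one  = Rat (1, 1)
    fun sum  (Rat (a, b), Rat (c, d)) = Rat (a * d + c * b, b * d)
    fun prod (Rat (a, b), Rat (c, d)) = Rat (a * c, b * d)
end

(* In an interactive session the files would be loaded with:
     use "arithmetic.sig"; use "rational.sml";
   after which only Rational.zero, Rational.one, Rational.sum and
   Rational.prod are visible; Rat itself cannot be used directly. *)
```

Only the names listed in the signature are visible to client code. |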
ML (programming language) | Examples | ML's standard libraries are implemented as modules in this way. |
Transcriptome | Transcriptome | The transcriptome is the set of all RNA transcripts, including coding and non-coding, in an individual or a population of cells. The term can also sometimes be used to refer to all RNAs, or just mRNA, depending on the particular experiment. The term transcriptome is a portmanteau of the words transcript and genome; it is associated with the process of transcript production during the biological process of transcription. |
Transcriptome | Transcriptome | The early stages of transcriptome annotations began with cDNA libraries published in the 1980s. Subsequently, the advent of high-throughput technology led to faster and more efficient ways of obtaining data about the transcriptome. Two biological techniques are used to study the transcriptome, namely DNA microarray, a hybridization-based technique and RNA-seq, a sequence-based approach. RNA-seq is the preferred method and has been the dominant transcriptomics technique since the 2010s. Single-cell transcriptomics allows tracking of transcript changes over time within individual cells. |
Transcriptome | Transcriptome | Data obtained from the transcriptome is used in research to gain insight into processes such as cellular differentiation, carcinogenesis, transcription regulation and biomarker discovery, among others. Transcriptome-obtained data also finds applications in establishing phylogenetic relationships during the process of evolution and in in vitro fertilization. The transcriptome is closely related to other -ome based biological fields of study; it is complementary to the proteome and the metabolome and encompasses the translatome, exome, meiome and thanatotranscriptome, which can be seen as -ome fields studying specific types of RNA transcripts. There are quantifiable and conserved relationships between the transcriptome and other -omes, and transcriptomics data can be used effectively to predict other molecular species, such as metabolites. There are numerous publicly available transcriptome databases. |
Transcriptome | Etymology and history | The word transcriptome is a portmanteau of the words transcript and genome. It appeared along with other neologisms formed using the suffixes -ome and -omics to denote all studies conducted on a genome-wide scale in the fields of life sciences and technology. As such, transcriptome and transcriptomics were among the first such words to emerge, along with genome and proteome. The first study to present a collection of a cDNA library, for silk moth mRNA, was published in 1979. The first seminal study to mention and investigate the transcriptome of an organism was published in 1997; it described 60,633 transcripts expressed in S. cerevisiae using serial analysis of gene expression (SAGE). With the rise of high-throughput technologies and bioinformatics and the subsequent increase in computational power, it became increasingly efficient and easy to characterize and analyze enormous amounts of data. Attempts to characterize the transcriptome became more prominent with the advent of automated DNA sequencing during the 1980s. During the 1990s, expressed sequence tag sequencing was used to identify genes and their fragments. This was followed by techniques such as serial analysis of gene expression (SAGE), cap analysis of gene expression (CAGE), and massively parallel signature sequencing (MPSS). |
Transcriptome | Transcription | The transcriptome encompasses all the ribonucleic acid (RNA) transcripts present in a given organism or experimental sample. RNA is the main carrier of genetic information that is responsible for the process of converting DNA into an organism's phenotype. A gene can give rise to a single-stranded messenger RNA (mRNA) through a molecular process known as transcription; this mRNA is complementary to the strand of DNA it originated from. The enzyme RNA polymerase II attaches to the template DNA strand and catalyzes the addition of ribonucleotides to the 3' end of the growing sequence of the mRNA transcript. In order to initiate its function, RNA polymerase II needs to recognize a promoter sequence, located upstream (5') of the gene. In eukaryotes, this process is mediated by transcription factors, most notably Transcription factor II D (TFIID), which recognizes the TATA box and aids in the positioning of RNA polymerase at the appropriate start site. To finish the production of the RNA transcript, termination usually takes place several hundred nucleotides away from the termination sequence, and cleavage occurs. This process occurs in the nucleus of a cell along with RNA processing, by which mRNA molecules are capped, spliced and polyadenylated to increase their stability before being subsequently taken to the cytoplasm. The mRNA gives rise to proteins through the process of translation that takes place in ribosomes. |
Transcriptome | Types of RNA transcripts | Almost all functional transcripts are derived from known genes. The only exceptions are a small number of transcripts that might play a direct role in regulating gene expression near the promoters of known genes. (See Enhancer RNA.) Genes occupy most of a prokaryotic genome, so most of such a genome is transcribed. Many eukaryotic genomes are very large, and known genes may take up only a fraction of the genome. In mammals, for example, known genes only account for 40-50% of the genome. Nevertheless, identified transcripts often map to a much larger fraction of the genome, suggesting that the transcriptome contains spurious transcripts that do not come from genes. Some of these transcripts are known to be non-functional because they map to transcribed pseudogenes or to degenerate transposons and viruses. Others map to unidentified regions of the genome that may be junk DNA. Spurious transcription is very common in eukaryotes, especially those with large genomes that might contain a lot of junk DNA. Some scientists claim that if a transcript has not been assigned to a known gene then the default assumption must be that it is junk RNA until it has been shown to be functional. This would mean that much of the transcriptome in species with large genomes is probably junk RNA. (See Non-coding RNA.) The transcriptome includes the transcripts of protein-coding genes (mRNA plus introns) as well as the transcripts of non-coding genes (functional RNAs plus introns).
Ribosomal RNA/rRNA: Usually the most abundant RNA in the transcriptome. |
Transcriptome | Types of RNA transcripts | Long non-coding RNA/lncRNA: Non-coding RNA transcripts that are more than 200 nucleotides long. Members of this group comprise the largest fraction of the non-coding transcriptome other than introns. It is not known how many of these transcripts are functional and how many are junk RNA.
transfer RNA/tRNA
micro RNA/miRNA: 19-24 nucleotides (nt) long. Micro RNAs up- or downregulate expression levels of mRNAs by the process of RNA interference at the post-transcriptional level.
small interfering RNA/siRNA: 20-24 nt
small nucleolar RNA/snoRNA
Piwi-interacting RNA/piRNA: 24-31 nt. They interact with Piwi proteins of the Argonaute family and have a function in targeting and cleaving transposons.
enhancer RNA/eRNA |
Transcriptome | Scope of study | In the human genome, all genes get transcribed into RNA because that is how the molecular gene is defined. (See Gene.) The transcriptome consists of coding regions of mRNA plus non-coding UTRs, introns, non-coding RNAs, and spurious non-functional transcripts. Several factors render the content of the transcriptome difficult to establish. These include alternative splicing, RNA editing and alternative transcription, among others. Additionally, transcriptome techniques capture the transcription occurring in a sample at a specific time point, whereas the content of the transcriptome can change during differentiation. The main aims of transcriptomics are the following: "catalogue all species of transcript, including mRNAs, non-coding RNAs and small RNAs; to determine the transcriptional structure of genes, in terms of their start sites, 5′ and 3′ ends, splicing patterns and other post-transcriptional modifications; and to quantify the changing expression levels of each transcript during development and under different conditions". The term can be applied to the total set of transcripts in a given organism, or to the specific subset of transcripts present in a particular cell type. Unlike the genome, which is roughly fixed for a given cell line (excluding mutations), the transcriptome can vary with external environmental conditions. Because it includes all mRNA transcripts in the cell, the transcriptome reflects the genes that are being actively expressed at any given time, with the exception of mRNA degradation phenomena such as transcriptional attenuation. The study of transcriptomics (which includes expression profiling, splice variant analysis, etc.) examines the expression level of RNAs in a given cell population, often focusing on mRNA, but sometimes including others such as tRNAs and sRNAs. |
Transcriptome | Methods of construction | Transcriptomics is the quantitative science that encompasses the assignment of a list of strings ("reads") to objects ("transcripts" in the genome). To calculate the expression strength, the density of reads corresponding to each object is counted. Initially, transcriptomes were analyzed and studied using expressed sequence tag libraries and serial and cap analysis of gene expression (SAGE and CAGE). |
Transcriptome | Methods of construction | Currently, the two main transcriptomics techniques include DNA microarrays and RNA-Seq. Both techniques require RNA isolation through RNA extraction techniques, followed by its separation from other cellular components and enrichment of mRNA. There are two general methods of inferring transcriptome sequences. One approach maps sequence reads onto a reference genome, either of the organism itself (whose transcriptome is being studied) or of a closely related species. The other approach, de novo transcriptome assembly, uses software to infer transcripts directly from short sequence reads and is used in organisms with genomes that are not sequenced. |
Transcriptome | Methods of construction | DNA microarrays
The first transcriptome studies were based on microarray techniques (also known as DNA chips). Microarrays consist of thin glass layers with spots on which oligonucleotides, known as "probes", are arrayed; each spot contains a known DNA sequence. When performing microarray analyses, mRNA is collected from a control and an experimental sample, the latter usually representative of a disease. The RNA of interest is converted to cDNA to increase its stability and marked with fluorophores of two colors, usually green and red, for the two groups. The cDNA is spread onto the surface of the microarray, where it hybridizes with oligonucleotides on the chip, and a laser is used to scan it. The fluorescence intensity on each spot of the microarray corresponds to the level of gene expression, and based on the color of the fluorophores selected, it can be determined which of the samples exhibits higher levels of the mRNA of interest. One microarray usually contains enough oligonucleotides to represent all known genes; however, data obtained using microarrays does not provide information about unknown genes. During the 2010s, microarrays were almost completely replaced by next-generation techniques that are based on DNA sequencing. |
Transcriptome | Methods of construction | RNA sequencing RNA sequencing is a next-generation sequencing technology; as such, it requires only a small amount of RNA and no prior knowledge of the genome. It allows for both qualitative and quantitative analysis of RNA transcripts, the former allowing discovery of new transcripts and the latter a measure of relative quantities of transcripts in a sample. The three main steps of sequencing transcriptomes of any biological sample are RNA purification, synthesis of an RNA or cDNA library, and sequencing of the library. The RNA purification process is different for short and long RNAs. This step is usually followed by an assessment of RNA quality, with the purpose of avoiding contaminants such as DNA or technical contaminants related to sample processing. RNA quality is measured using UV spectrometry, with an absorbance peak at 260 nm. RNA integrity can also be analyzed quantitatively by comparing the ratio and intensity of 28S RNA to 18S RNA, reported in the RNA Integrity Number (RIN) score. Since mRNA is the species of interest and represents only about 3% of the total RNA content, the RNA sample should be treated to remove rRNA and tRNA as well as tissue-specific RNA transcripts. The library preparation step, whose aim is to produce short cDNA fragments, begins with fragmentation of the RNA into pieces between 50 and 300 base pairs in length. Fragmentation can be enzymatic (RNA endonucleases), chemical (tris-magnesium salt buffer, chemical hydrolysis) or mechanical (sonication, nebulisation). Reverse transcription is used to convert the RNA templates into cDNA, and three priming methods can be used to achieve this: oligo-dT primers, random primers, or ligation of special adaptor oligos.
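The 28S:18S comparison mentioned above amounts to a simple ratio of peak intensities. The snippet below is a minimal sketch of that arithmetic; the peak values are invented, and the threshold of roughly 2.0 for intact RNA is a commonly cited rule of thumb rather than something stated in the text (real instruments report a full RIN score computed from the whole electropherogram).

```python
def rrna_ratio(peak_28s: float, peak_18s: float) -> float:
    """Ratio of 28S to 18S rRNA peak intensities, e.g. from an electropherogram."""
    return peak_28s / peak_18s

# Hypothetical peak intensities for one RNA sample.
ratio = rrna_ratio(peak_28s=4100.0, peak_18s=2150.0)
print(f"28S/18S ratio: {ratio:.2f}")

# A ratio near 2.0 is often taken as a sign of intact RNA (assumed threshold).
print("likely intact" if ratio >= 1.8 else "possible degradation")
```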
Transcriptome | Methods of construction | Single-cell transcriptomics Transcription can also be studied at the level of individual cells by single-cell transcriptomics. Single-cell RNA sequencing (scRNA-seq) is a recently developed technique that allows the analysis of the transcriptome of single cells. With single-cell transcriptomics, subpopulations of cell types that constitute the tissue of interest are also taken into consideration. This approach makes it possible to identify whether changes in experimental samples are due to phenotypic cellular changes, as opposed to proliferation, through which a specific cell type might become over-represented in the sample. Additionally, when assessing cellular progression through differentiation, average expression profiles are only able to order cells by time rather than by their stage of development and are consequently unable to show trends in gene expression levels specific to certain stages. Single-cell transcriptomic techniques have been used to characterize rare cell populations such as circulating tumor cells, cancer stem cells in solid tumors, and embryonic stem cells (ESCs) in mammalian blastocysts. Although there are no standardized techniques for single-cell transcriptomics, several steps need to be undertaken. The first step is cell isolation, which can be performed using low- and high-throughput techniques. This is followed by a qPCR step and then single-cell RNA-seq, in which the RNA of interest is converted into cDNA. Newer developments in single-cell transcriptomics allow the preservation of tissue and sub-cellular localization by cryo-sectioning thin slices of tissue and sequencing the transcriptome of each slice. Another technique allows the visualization of single transcripts under a microscope while preserving the spatial information of each individual cell in which they are expressed.
Transcriptome | Analysis | A number of organism-specific transcriptome databases have been constructed and annotated to aid in the identification of genes that are differentially expressed in distinct cell populations. |
Transcriptome | Analysis | RNA-seq is emerging (2013) as the method of choice for measuring the transcriptomes of organisms, though the older technique of DNA microarrays is still used. RNA-seq measures the transcription of a gene by converting long RNAs into a library of cDNA fragments. The cDNA fragments are then sequenced using high-throughput sequencing technology and aligned to a reference genome or transcriptome, which is then used to create an expression profile of the genes.
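A common way to turn the aligned, per-gene read counts into an expression profile is a length-normalized measure such as transcripts per million (TPM). The text above does not name a specific normalization, so the sketch below should be read as one illustrative choice, with invented gene names, counts, and lengths.

```python
# Minimal TPM-style normalization from per-gene read counts and gene lengths.
# All gene names, counts, and lengths are hypothetical.
counts = {"geneA": 500, "geneB": 1200, "geneC": 300}
lengths_kb = {"geneA": 2.0, "geneB": 4.0, "geneC": 1.5}  # lengths in kilobases

# Reads per kilobase, then rescale so the values sum to one million.
rates = {gene: counts[gene] / lengths_kb[gene] for gene in counts}
scale = sum(rates.values())
tpm = {gene: rate / scale * 1e6 for gene, rate in rates.items()}

for gene, value in tpm.items():
    print(f"{gene}: {value:,.0f} TPM")
```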
Transcriptome | Applications | Mammals The transcriptomes of stem cells and cancer cells are of particular interest to researchers who seek to understand the processes of cellular differentiation and carcinogenesis. A pipeline using RNA-seq or gene array data can be used to track genetic changes occurring in stem and precursor cells; it requires at least three independent gene expression datasets, from the precursor cell types and from mature cells. Analysis of the transcriptomes of human oocytes and embryos is used to understand the molecular mechanisms and signaling pathways controlling early embryonic development, and could theoretically be a powerful tool for proper embryo selection in in vitro fertilisation. Analyses of the transcriptome content of the placenta in the first trimester of pregnancy after in vitro fertilization and embryo transfer (IVF-ET) revealed differences in gene expression that are associated with a higher frequency of adverse perinatal outcomes. Such insight can be used to optimize the practice. Transcriptome analyses can also be used to optimize the cryopreservation of oocytes by lowering the injuries associated with the process. Transcriptomics is an emerging and continually growing field of biomarker discovery for use in assessing drug safety and in chemical risk assessment. Transcriptomes may also be used to infer phylogenetic relationships among individuals or to detect evolutionary patterns of transcriptome conservation. Transcriptome analyses have been used to determine the incidence of antisense transcripts, their role in gene expression through interaction with surrounding genes, and their abundance on different chromosomes. RNA-seq has also been used to show how RNA isoforms, transcripts stemming from the same gene but with different structures, can produce complex phenotypes from limited genomes.
Transcriptome | Applications | Plants Transcriptome analyses have been used to study the evolution and diversification of plant species. In 2014, the 1000 Plant Genomes Project was completed, in which the transcriptomes of 1,124 plant species from the clades Viridiplantae, Glaucophyta and Rhodophyta were sequenced. The protein-coding sequences were subsequently compared to infer phylogenetic relationships between plants and to characterize the time of their diversification in the process of evolution. Transcriptome studies have been used to characterize and quantify gene expression in mature pollen. Genes involved in cell wall metabolism and the cytoskeleton were found to be overexpressed. Transcriptome approaches have also made it possible to track changes in gene expression through the different developmental stages of pollen, ranging from microspores to mature pollen grains; additionally, such stages could be compared across different plant species, including Arabidopsis, rice and tobacco.
Transcriptome | Relation to other ome fields | Similar to other -ome-based technologies, analysis of the transcriptome allows for an unbiased approach when validating hypotheses experimentally. This approach also allows for the discovery of novel mediators in signaling pathways. As with other -omics-based technologies, the transcriptome can be analyzed within the scope of a multiomics approach. It is complementary to metabolomics but, in contrast to proteomics, a direct association between a transcript and a metabolite cannot be established.
Transcriptome | Relation to other ome fields | There are several -ome fields that can be seen as subcategories of the transcriptome. The exome differs from the transcriptome in that the exome comprises the exon sequences encoded in the genome, whereas the transcriptome includes only those RNA molecules actually found in a specified cell population, usually together with the amount or concentration of each RNA molecule in addition to their molecular identities. Additionally, the transcriptome also differs from the translatome, which is the set of RNAs undergoing translation.
Transcriptome | Relation to other ome fields | The term meiome is used in functional genomics to describe the meiotic transcriptome, or the set of RNA transcripts produced during the process of meiosis. Meiosis is a key feature of sexually reproducing eukaryotes, and involves the pairing of homologous chromosomes, synapsis and recombination. Since meiosis in most organisms occurs in a short time period, meiotic transcript profiling is difficult due to the challenge of isolating (or enriching) meiotic cells (meiocytes). As with transcriptome analyses, the meiome can be studied at a whole-genome level using large-scale transcriptomic techniques. The meiome has been well characterized in mammalian and yeast systems and somewhat less extensively characterized in plants. The thanatotranscriptome consists of all RNA transcripts that continue to be expressed, or that start being re-expressed, in the internal organs of a dead body 24–48 hours after death. Some of these genes are ones whose expression is normally inhibited after fetal development. If the thanatotranscriptome is related to the process of programmed cell death (apoptosis), it can be referred to as the apoptotic thanatotranscriptome. Analyses of the thanatotranscriptome are used in forensic medicine. eQTL mapping can be used to complement genomics with transcriptomics by linking genetic variants at the DNA level to gene expression measured at the RNA level.
Transcriptome | Relation to other ome fields | Relation to proteome The transcriptome can be seen as a precursor of the proteome, that is, the entire set of proteins expressed by a genome.
Transcriptome | Relation to other ome fields | However, the analysis of relative mRNA expression levels can be complicated by the fact that relatively small changes in mRNA expression can produce large changes in the total amount of the corresponding protein present in the cell. One analysis method, known as gene set enrichment analysis, identifies coregulated gene networks rather than individual genes that are up- or down-regulated in different cell populations.[1] Although microarray studies can reveal the relative amounts of different mRNAs in the cell, levels of mRNA are not directly proportional to the expression level of the proteins they code for. The number of protein molecules synthesized using a given mRNA molecule as a template is highly dependent on the translation-initiation features of the mRNA sequence; in particular, the ability of the translation initiation sequence to recruit ribosomes for protein translation is a key determinant.
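To make the idea behind such gene set analyses concrete, the sketch below computes an unweighted running-sum enrichment score for one gene set over a ranked gene list. It is a deliberately simplified version of the published method, and all gene names and the gene set are invented.

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted running-sum enrichment score (simplified GSEA-style statistic)."""
    in_set = [gene in gene_set for gene in ranked_genes]
    n_hits = sum(in_set)
    n_misses = len(ranked_genes) - n_hits
    step_hit = 1.0 / n_hits     # move up when a set member is encountered
    step_miss = 1.0 / n_misses  # move down otherwise
    running, extreme = 0.0, 0.0
    for hit in in_set:
        running += step_hit if hit else -step_miss
        if abs(running) > abs(extreme):
            extreme = running
    return extreme

# Genes ranked from most up-regulated to most down-regulated (hypothetical).
ranked = ["g1", "g7", "g3", "g9", "g2", "g5", "g8", "g4", "g6"]
print(enrichment_score(ranked, gene_set={"g1", "g3", "g2"}))
```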
Transcriptome | Transcriptome databases | Transcriptome databases include Ensembl, OmicTools, Transcriptome Browser and ArrayExpress.
Nervine | Nervine | Nervine was a patent medicine tonic with sedative effects introduced in 1884 by Dr. Miles Medical Company (later Miles Laboratories, which was absorbed into Bayer). The name derives from 'nerve', the implication being that the preparation worked to calm nervousness.
Nervine | Formulation | One form of Nervine was formulated with the primary active ingredients sodium bromide, ammonium bromide, and potassium bromide, combined with sodium bicarbonate and citric acid in an effervescent tablet. |
Nervine | Modern appropriation of term | In the late 20th and early 21st century, promulgators of alternative medicine and herbalism have begun to use the term Nervine as an adjective. This is not a term used by mainstream medicine, where anxiolytic is the preferred term. |
Comonotonicity | Comonotonicity | In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable. In two dimensions it is also possible to consider perfect negative dependence, which is called countermonotonicity. Comonotonicity is also related to the comonotonic additivity of the Choquet integral. The concept of comonotonicity has applications in financial risk management and actuarial science, see e.g. Dhaene et al. (2002a) and Dhaene et al. (2002b). In particular, the sum of the components X1 + X2 + · · · + Xn is the riskiest if the joint probability distribution of the random vector (X1, X2, . . . , Xn) is comonotonic. Furthermore, the α-quantile of the sum equals the sum of the α-quantiles of its components; hence comonotonic random variables are quantile-additive. In practical risk-management terms this means that there is minimal (or possibly no) variance reduction from diversification.
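Written out with quantile functions (using F^{-1} for the left-continuous generalized inverse of a distribution function, as in the Properties section below), the quantile-additivity property just mentioned can be stated as follows.

```latex
% Quantile additivity for a comonotonic random vector (X_1, \dots, X_n):
F_{X_1 + \cdots + X_n}^{-1}(\alpha)
  \;=\; F_{X_1}^{-1}(\alpha) + \cdots + F_{X_n}^{-1}(\alpha),
  \qquad \alpha \in (0, 1).
```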
Comonotonicity | Comonotonicity | For extensions of comonotonicity, see Jouini & Napp (2004) and Puccetti & Scarsini (2010). |
Comonotonicity | Definitions | Comonotonicity of subsets of Rn A subset S of Rn is called comonotonic (sometimes also nondecreasing) if, for all (x1, x2, . . . , xn) and (y1, y2, . . . , yn) in S with xi < yi for some i ∈ {1, 2, . . . , n}, it follows that xj ≤ yj for all j ∈ {1, 2, . . . , n}. |
Comonotonicity | Definitions | This means that S is a totally ordered set with respect to the componentwise order on Rn.
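A simple example satisfying this definition: the graph of any non-decreasing function is a comonotonic subset of R2.

```latex
% Example: for a non-decreasing function f : \mathbb{R} \to \mathbb{R}, the set
S = \{ (x, f(x)) \mid x \in \mathbb{R} \} \subseteq \mathbb{R}^2
% is comonotonic, since x < y implies f(x) \le f(y), so no coordinate can
% decrease while the other one increases.
```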
Comonotonicity of probability measures on Rn Let μ be a probability measure on the n-dimensional Euclidean space Rn and let F denote its multivariate cumulative distribution function, that is
$F(x_1,\dots,x_n) := \mu\bigl(\{(y_1,\dots,y_n)\in\mathbb{R}^n \mid y_1\le x_1,\dots,y_n\le x_n\}\bigr), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n.$
Furthermore, let F1, . . . , Fn denote the cumulative distribution functions of the n one-dimensional marginal distributions of μ, that means
$F_i(x) := \mu\bigl(\{(y_1,\dots,y_n)\in\mathbb{R}^n \mid y_i\le x\}\bigr), \qquad x\in\mathbb{R},$
for every i ∈ {1, 2, . . . , n}. Then μ is called comonotonic if
$F(x_1,\dots,x_n) = \min_{i\in\{1,\dots,n\}} F_i(x_i), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n.$
Note that the probability measure μ is comonotonic if and only if its support S is comonotonic according to the above definition.
Comonotonicity of Rn-valued random vectors An Rn-valued random vector X = (X1, . . . , Xn) is called comonotonic if its multivariate distribution (the pushforward measure) is comonotonic, which means
$\Pr(X_1\le x_1,\dots,X_n\le x_n) = \min_{i\in\{1,\dots,n\}} \Pr(X_i\le x_i), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n.$
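As a concrete two-dimensional illustration of this definition, consider uniform marginals on [0, 1]; the comonotonic distribution function is then the upper Fréchet–Hoeffding copula.

```latex
% Two-dimensional example with U(0,1) marginals, F_1(x) = F_2(x) = x on [0, 1]:
F(x_1, x_2) \;=\; \min\{F_1(x_1),\, F_2(x_2)\} \;=\; \min\{x_1, x_2\},
  \qquad (x_1, x_2) \in [0, 1]^2,
% i.e. the upper Fr\'echet--Hoeffding copula M(u, v) = \min\{u, v\},
% whose support is the diagonal \{(t, t) \mid t \in [0, 1]\}.
```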
Comonotonicity | Properties | An Rn-valued random vector X = (X1, . . . , Xn) is comonotonic if and only if it can be represented as
$(X_1,\dots,X_n) \,\stackrel{\mathrm d}{=}\, \bigl(F_{X_1}^{-1}(U),\dots,F_{X_n}^{-1}(U)\bigr),$
where $\stackrel{\mathrm d}{=}$ stands for equality in distribution, the $F_{X_i}^{-1}$ on the right-hand side are the left-continuous generalized inverses of the cumulative distribution functions FX1, . . . , FXn, and U is a uniformly distributed random variable on the unit interval. More generally, a random vector is comonotonic if and only if it agrees in distribution with a random vector where all components are non-decreasing functions (or all are non-increasing functions) of the same random variable.
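A minimal numerical sketch of this representation (assuming NumPy and SciPy are available; the normal and exponential marginals and the 0.95 level are chosen purely for illustration): draw a single uniform variable, push it through each marginal quantile function, and check the quantile additivity of the sum mentioned in the introduction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)  # a single uniform variable drives every component

# Comonotonic vector: each component is a quantile transform of the same U.
x1 = stats.norm.ppf(u)              # illustrative marginal: standard normal
x2 = stats.expon.ppf(u, scale=2.0)  # illustrative marginal: exponential

alpha = 0.95
q_sum = np.quantile(x1 + x2, alpha)
q_parts = stats.norm.ppf(alpha) + stats.expon.ppf(alpha, scale=2.0)

# For comonotonic components, the alpha-quantile of the sum equals the sum
# of the alpha-quantiles (up to Monte Carlo error in this simulation).
print(f"quantile of sum: {q_sum:.3f}   sum of quantiles: {q_parts:.3f}")
```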
Comonotonicity | Upper bounds | Upper Fréchet–Hoeffding bound for cumulative distribution functions Let X = (X1, . . . , Xn) be an Rn-valued random vector. Then, for every i ∈ {1, 2, . . . , n},
$\Pr(X_1\le x_1,\dots,X_n\le x_n) \le \Pr(X_i\le x_i), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n,$
because the joint event is contained in each marginal event {Xi ≤ xi}; hence
$\Pr(X_1\le x_1,\dots,X_n\le x_n) \le \min_{i\in\{1,\dots,n\}} \Pr(X_i\le x_i), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n,$
with equality everywhere if and only if (X1, . . . , Xn) is comonotonic.