title | section | text |
---|---|---|
Nested transaction | Nested transaction | The capability to handle nested transactions properly is a prerequisite for true component-based application architectures. In a component-based, encapsulated architecture, nested transactions can occur without the programmer knowing it. A component function may or may not contain a database transaction (this is the encapsulated secret of the component; see Information hiding). If a call to such a component function is made inside a BEGIN - COMMIT bracket, nested transactions occur. Since popular databases like MySQL do not allow nesting BEGIN - COMMIT brackets, a framework or a transaction monitor is needed to handle this; a common approach is to emulate nesting with savepoints, as sketched below. When we speak about nested transactions, it should be made clear that this feature is DBMS-dependent and is not available for all databases. |
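A minimal sketch of that savepoint-based emulation in Python with SQLite; the `TransactionManager` class is illustrative, not any particular framework's API:

```python
import sqlite3

class TransactionManager:
    """Emulates nested transactions by mapping inner BEGINs to savepoints."""
    def __init__(self, conn):
        self.conn = conn
        self.depth = 0

    def begin(self):
        if self.depth == 0:
            self.conn.execute("BEGIN")                       # outermost: real transaction
        else:
            self.conn.execute(f"SAVEPOINT sp{self.depth}")   # inner: savepoint
        self.depth += 1

    def commit(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("COMMIT")                      # only the outermost commit is real
        else:
            self.conn.execute(f"RELEASE sp{self.depth}")

    def rollback(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("ROLLBACK")
        else:
            self.conn.execute(f"ROLLBACK TO sp{self.depth}")

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # let us issue BEGIN/COMMIT explicitly
tm = TransactionManager(conn)
tm.begin()    # caller's bracket
tm.begin()    # component's hidden bracket silently becomes a savepoint
tm.commit()
tm.commit()
```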
Nested transaction | Nested transaction | Theory for nested transactions is similar to the theory for flat transactions. The banking industry usually processes financial transactions using open nested transactions, a looser variant of the nested transaction model that provides higher performance while accepting the accompanying trade-offs of inconsistency. |
Decamethylferrocene | Decamethylferrocene | Decamethylferrocene or bis(pentamethylcyclopentadienyl)iron(II) is a chemical compound with formula Fe(C5(CH3)5)2 or C20H30Fe. It is a sandwich compound, whose molecule has an iron(II) cation Fe2+ attached by coordination bonds between two pentamethylcyclopentadienyl anions (Cp*−, C5(CH3)5−). It can also be viewed as a derivative of ferrocene, with a methyl group replacing each hydrogen atom of its cyclopentadienyl rings. The name and formula are often abbreviated to DmFc, Me10Fc or FeCp*2. This compound is a yellow crystalline solid that is used in chemical laboratories as a weak reductant. The iron(II) core is easily oxidized to iron(III), yielding the monovalent cation decamethylferrocenium, and even to higher oxidation states. |
Decamethylferrocene | Preparation | Decamethylferrocene is prepared in the same manner as ferrocene from pentamethylcyclopentadiene. This method can be used to produce other decamethylcyclopentadienyl sandwich compounds.
2 Li(C5Me5) + FeCl2 → Fe(C5Me5)2 + 2 LiCl
The product can be purified by sublimation. FeCp*2 has staggered Cp* rings. The average distance between iron and each carbon atom is approximately 2.050 Å. This structure has been confirmed by X-ray crystallography. |
Decamethylferrocene | Redox reactions | Like ferrocene, decamethylferrocene forms a stable cation because Fe(II) is easily oxidized to Fe(III). Because of the electron-donating methyl groups on the Cp* rings, decamethylferrocene is more reducing than ferrocene. In a solution of acetonitrile the reduction potential for the [FeCp*2]+/0 couple is −0.59 V compared to the [FeCp2]0/+ reference (−0.48 V vs Fc/Fc+ in CH2Cl2). Oxygen is reduced to hydrogen peroxide by decamethylferrocene in acidic solution. Using powerful oxidants (e.g. SbF5 or AsF5 in SO2, or XeF+/Sb2F11− in HF/SbF5), decamethylferrocene is oxidized to a stable dication with an iron(IV) core. In the Sb2F11− salt, the Cp* rings are parallel. In contrast, a tilt angle of 17° between the Cp* rings is observed in the crystal structure of the SbF6− salt. |
RoboCup Simulation League | RoboCup Simulation League | The RoboCup Simulation League is one of five soccer leagues within the RoboCup initiative. It is characterised by independently moving software players (agents) that play soccer on a virtual field inside a computer simulation.
It is divided into four subleagues: 2D Soccer Simulation, 3D Soccer Simulation, 3D Development, and Mixed Reality Soccer Simulation (formerly called Visualisation). |
RoboCup Simulation League | Differences between 2D and 3D simulations | The 2D simulation sub-league had its first release in early 1995 with version 0.1. It has been actively maintained since then with updates every few months. The ball and all players are represented as circles on the plane of the field. Their position is restricted to the two dimensions of the plane.
SimSpark, the platform on top of which the 3D simulation sub-league is built, was registered with SourceForge in 2004. The platform itself is now well established with ongoing development. The ball and all players are represented as articulated rigid bodies within a system that enforces the simulation of physical properties such as mass, inertia and friction. |
RoboCup Simulation League | Differences between 2D and 3D simulations | As of 2010, a direct comparison of the gameplay of the 2D and 3D leagues shows a marked difference. 2D league teams generally exhibit advanced strategies and teamwork, whereas 3D teams appear to struggle with the basics of stability and ambulation. This is partly due to the difference in age of the two leagues, and partly to the difference in complexity involved in building agents for the two leagues. Replaying log files of finals from recent years shows that many teams are making progress. |
RoboCup Simulation League | Differences between 2D and 3D simulations | In the 2D system, movement around the plane is achieved via commands from the agents such as move, dash, turn and kick. The 3D system has fewer command choices for agents to send, but the mechanics of motion about the field are much more involved as the positions of 22 hinges throughout the articulated body must be simultaneously controlled. |
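To make the 2D command set concrete, below is a minimal sketch of an agent connecting to the 2D simulation server (rcssserver) over UDP and issuing those commands. The team name, local server address, and protocol version are assumptions for illustration; this outlines the s-expression protocol on port 6000 rather than a complete client.

```python
import socket

# The 2D server speaks s-expressions over UDP (port 6000 by default).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server = ("127.0.0.1", 6000)                            # assumed local rcssserver

sock.sendto(b"(init DemoTeam (version 15))", server)    # register the agent
msg, server = sock.recvfrom(8192)                       # server replies from a per-client port

# One command per simulation cycle: move, dash, turn, kick, etc.
sock.sendto(b"(move -10 0)", server)                    # place the player before kickoff
sock.sendto(b"(dash 80)", server)                       # accelerate along the body direction
sock.sendto(b"(turn 30)", server)                       # rotate the body by 30 degrees
sock.sendto(b"(kick 50 0)", server)                     # kick with power 50, direction 0
```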
Lagometer | Lagometer | A lagometer is a display of the network latency on an Internet connection and of the rendering performance of the client. Lagometers are commonly found in computer games or IRC clients, where timing plays a large role. Quake and derived games commonly have them. |
Lagometer | Lagometer | An advanced lagometer consists of two lines: a bottom and a top one. The bottom line advances one pixel for each snapshot received from the server (by default snapshots are sent at a rate of 20 per second), while the top one advances one pixel for each frame rendered by the client. Thus, if the machine's framerate were 20 frames per second, both lines would run at the same speed. |
Lagometer | Lagometer | The bottom bars correspond to the delay between a snapshot being sent by the server and received by the client (the so-called "ping"). The shorter the bar, the smaller the ping was. Red bars mean that the snapshot did not arrive on time, yellow ones that the snapshot was suppressed to stay under the rate limit.
The top bars can be drawn in blue or in yellow. Since server snapshots are usually received at a lower rate than the client framerate, the software interpolates position and movements until it gets an update from the server, at which point it adjusts its own state accordingly. |
Lagometer | Lagometer | The height of the upper bars is proportional to the interpolated time between received snapshots (as long as they arrive regularly, the line stays below the "zero line" and is drawn in blue); if snapshots stop arriving on time, the time is extrapolated past the last expected snapshot (the bars then cross the "zero line" and are drawn in yellow). |
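A simplified model of that interpolation/extrapolation logic (illustrative Python, not any particular engine's implementation):

```python
def render_position(snapshots, render_time):
    """Interpolate between the two snapshots bracketing render_time,
    or extrapolate past the newest one if the next snapshot is late."""
    (t0, p0), (t1, p1) = snapshots[-2], snapshots[-1]   # (time, position) pairs
    alpha = (render_time - t0) / (t1 - t0)
    if alpha <= 1.0:
        # Snapshots arriving on time: interpolate (blue, below the zero line).
        return p0 + alpha * (p1 - p0)
    # Next snapshot overdue: extrapolate (yellow, above the zero line);
    # a large alpha risks a visible correction when the real snapshot arrives.
    return p1 + (alpha - 1.0) * (p1 - p0)

# 20 snapshots/s -> 50 ms apart; render a frame 60 ms after the older snapshot.
print(render_position([(0.0, 0.0), (0.05, 1.0)], 0.06))  # extrapolated: 1.2
```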
Lagometer | Lagometer | If those bars stay yellow for too long, the client is forced to extrapolate its frames beyond a reasonable level, and when the snapshot finally arrives, the prediction turns out to hardly correspond to the server-side version, which results in jerky, discontinuous movement of the scenery (obviously lowering the quality of gameplay).
Some games that use a lagometer will simply remove a player from the game if their lag is too high.
In the game Minecraft, the lagometer is displayed on the debug screen as a line graph that rises during lag spikes.
Use the following console commands for the following games: |
Russell 2000 Index | Russell 2000 Index | The Russell 2000 Index is a small-cap U.S. stock market index comprising the smallest 2,000 stocks in the Russell 3000 Index. It was started by the Frank Russell Company in 1984. The index is maintained by FTSE Russell, a subsidiary of the London Stock Exchange Group (LSEG). |
Russell 2000 Index | Overview | The Russell 2000 is by far the most common benchmark for mutual funds that identify themselves as "small-cap", while the S&P 500 index is used primarily for large capitalization stocks. It is the most widely quoted measure of the overall performance of small-cap to mid-cap company shares. It is commonly considered an indicator of the U.S. economy due to its focus on small-cap companies in the U.S. market. The index represents approximately 10% of the total market capitalization of the Russell 3000 Index. As of 31 December 2022, the weighted average market capitalization of a company in the index is approximately $2.76 billion and the median market capitalization is approximately $950 million. The market capitalization of the largest company in the index is approximately $8.1 billion. It first traded above the 1,000 level on May 20, 2013, and above the 2,000 level on December 23, 2020. |
Russell 2000 Index | Overview | Similar small-cap indices include the S&P 600 from Standard & Poor's, which is less commonly used, along with those from other financial information providers. |
Russell 2000 Index | Investing | Many fund companies offer mutual funds and exchange-traded funds (ETFs) that attempt to replicate the performance of the Russell 2000. Their results will be affected by stock selection, trading expenses, and market impact of reacting to changes in the constituent companies of the index. It is not possible to invest directly in an index. |
Sigma 150-600mm f/5-6.3 DG OS HSM lens | Sigma 150-600mm f/5-6.3 DG OS HSM lens | The Sigma APO 150-600mm F5-6.3 DG OS HSM lens is a super-telephoto lens produced by Sigma Corporation. |
Sigma 150-600mm f/5-6.3 DG OS HSM lens | Sigma 150-600mm f/5-6.3 DG OS HSM lens | It is actually a range of two slightly different lenses based on a common design: the Sports and the Contemporary. Both lenses feature similar specifications, but there are some notable differences. The Sports model has better weather sealing, more lens elements, a larger size and weight, and slightly better optical performance towards the 600 mm end of its zoom range. The Contemporary model, on the other hand, is built to a cheaper price point but offers similar performance; its performance suffers a little more than the Sports model's between 300 and 600 mm. |
Urelement | Urelement | In set theory, a branch of mathematics, an urelement or ur-element (from the German prefix ur-, 'primordial') is an object that is not a set, but that may be an element of a set. It is also referred to as an atom or individual. |
Urelement | Theory | There are several different but essentially equivalent ways to treat urelements in a first-order theory.
One way is to work in a first-order theory with two sorts, sets and urelements, with a ∈ b only defined when b is a set. In this case, if U is an urelement, it makes no sense to say X ∈ U, although U ∈ X is perfectly legitimate. |
Urelement | Theory | Another way is to work in a one-sorted theory with a unary relation used to distinguish sets and urelements. As non-empty sets contain members while urelements do not, the unary relation is only needed to distinguish the empty set from urelements. Note that in this case, the axiom of extensionality must be formulated to apply only to objects that are not urelements. |
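For instance, in such a one-sorted theory with a unary set-hood predicate set(x), the restricted axiom of extensionality can be written as follows (one standard formulation; the predicate name is illustrative):

```latex
\forall x\, \forall y\, \Bigl[ \mathrm{set}(x) \land \mathrm{set}(y)
  \land \forall z\, \bigl( z \in x \leftrightarrow z \in y \bigr)
  \;\rightarrow\; x = y \Bigr]
```

Without the set(x) guards, extensionality would force all urelements (which have no members) to be identified with the empty set and with one another.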
Urelement | Theory | This situation is analogous to the treatments of theories of sets and classes. Indeed, urelements are in some sense dual to proper classes: urelements cannot have members whereas proper classes cannot be members. Put differently, urelements are minimal objects while proper classes are maximal objects by the membership relation (which, of course, is not an order relation, so this analogy is not to be taken literally). |
Urelement | Urelements in set theory | The Zermelo set theory of 1908 included urelements, and hence is a version now called ZFA or ZFCA (i.e. ZFA with axiom of choice). It was soon realized that in the context of this and closely related axiomatic set theories, the urelements were not needed because they can easily be modeled in a set theory without urelements. Thus, standard expositions of the canonical axiomatic set theories ZF and ZFC do not mention urelements (for an exception, see Suppes). Axiomatizations of set theory that do invoke urelements include Kripke–Platek set theory with urelements and the variant of Von Neumann–Bernays–Gödel set theory described by Mendelson. In type theory, an object of type 0 can be called an urelement; hence the name "atom". |
Urelement | Urelements in set theory | Adding urelements to the system New Foundations (NF) to produce NFU has surprising consequences. In particular, Jensen proved the consistency of NFU relative to Peano arithmetic; meanwhile, the consistency of NF relative to anything remains an open problem, pending verification of Holmes's proof of its consistency relative to ZF. Moreover, NFU remains relatively consistent when augmented with an axiom of infinity and the axiom of choice. Meanwhile, the negation of the axiom of choice is, curiously, an NF theorem. Holmes (1998) takes these facts as evidence that NFU is a more successful foundation for mathematics than NF. Holmes further argues that set theory is more natural with than without urelements, since we may take as urelements the objects of any theory or of the physical universe. In finitist set theory, urelements are mapped to the lowest-level components of the target phenomenon, such as atomic constituents of a physical object or members of an organisation. |
Urelement | Quine atoms | An alternative approach to urelements is to consider them, instead of as a type of object other than sets, as a particular type of set. Quine atoms (named after Willard Van Orman Quine) are sets that only contain themselves, that is, sets that satisfy the formula x = {x}. Quine atoms cannot exist in systems of set theory that include the axiom of regularity, but they can exist in non-well-founded set theory. ZF set theory with the axiom of regularity removed cannot prove that any non-well-founded sets exist (unless it is inconsistent, in which case it will prove any arbitrary statement), but it is compatible with the existence of Quine atoms. Aczel's anti-foundation axiom implies that there is a unique Quine atom. Other non-well-founded theories may admit many distinct Quine atoms; at the opposite end of the spectrum lies Boffa's axiom of superuniversality, which implies that the distinct Quine atoms form a proper class. Quine atoms also appear in Quine's New Foundations, which allows more than one such set to exist. Quine atoms are the only sets called reflexive sets by Peter Aczel, although other authors, e.g. Jon Barwise and Lawrence Moss, use the latter term to denote the larger class of sets with the property x ∈ x. |
Mid-Atlantic Soft Matter Workshop | Mid-Atlantic Soft Matter Workshop | Mid-Atlantic Soft Matter Workshop or MASM is an interdisciplinary meeting on soft matter routinely hosted and participated in by research and educational organizations in the mid-Atlantic region of the United States.
The workshop consists of several talks relating to the field of soft matter given by invited lecturers, and these talks are interspersed with sessions of short, three-minute "sound-bite" talks that can be delivered by any participant. |
Mid-Atlantic Soft Matter Workshop | History | MASM was started at Georgetown University, organized by Daniel Blair and Jeffrey Urbach.
There have been 16 MASM meetings. The 16th MASM meeting was hosted at National Institutes of Health (NIH) on July 29, 2015. |
Discrete cosine transform | Discrete cosine transform | A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations. |
Discrete cosine transform | Discrete cosine transform | The use of cosine rather than sine functions is critical for compression since fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. |
Discrete cosine transform | Discrete cosine transform | The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply the DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) have been developed to extend the concept of the DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT used in several ISO/IEC and ITU-T international standards. DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT block sizes include 8 × 8 pixels for the standard DCT, and varied integer DCT sizes between 4 × 4 and 32 × 32 pixels. The DCT has a strong energy compaction property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied. |
Discrete cosine transform | History | The DCT was first conceived by Nasir Ahmed, T. Natarajan and K. R. Rao while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression. Ahmed developed a practical DCT algorithm with his PhD students T. Raj Natarajan, Wills Dietrich, and Jeremy Fries, and his friend Dr. K. R. Rao at the University of Texas at Arlington in 1973. They presented their results in a January 1974 paper, titled Discrete Cosine Transform. It described what is now called the type-II DCT (DCT-II), as well as the type-III inverse DCT (IDCT). Since its introduction in 1974, there has been significant research on the DCT. In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm. Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G. Lee. These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992. The discrete sine transform (DST) was derived from the DCT by replacing the Neumann condition at x=0 with a Dirichlet condition. The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao. A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978. In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found the DCT to be the more efficient due to its reduced complexity, capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to that of an intra-frame coder requiring 2 bits per pixel. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3), MP3 (which uses a hybrid DCT-FFT algorithm), Advanced Audio Coding (AAC), and Vorbis (Ogg). Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding. Lossless DCT is also known as LDCT. |
Discrete cosine transform | Applications | The DCT is the most widely used transformation technique in signal processing, and by far the most widely used linear transform in data compression. Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which are significantly reduced by the DCT lossy compression technique, capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality content, and up to 100:1 for acceptable-quality content. DCT compression standards are used in digital media technologies, such as digital images, digital photos, digital video, streaming media, digital television, streaming television, video on demand (VOD), digital cinema, high-definition video (HD video), and high-definition television (HDTV). The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong "energy compaction" property: in typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions. |
Discrete cosine transform | Applications | DCTs are also widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even/odd boundary conditions at the two ends of the array.
DCTs are also closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature.
The DCT is the coding standard for multimedia telecommunication devices. It is widely used for bit rate reduction, and reducing network bandwidth usage. DCT compression significantly reduces the amount of memory and bandwidth required for digital signals.
General applications The DCT is widely used in many applications, which include the following. |
Discrete cosine transform | Applications | DCT visual media standards The DCT-II, also known as simply the DCT, is the most important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of N × N blocks is computed and the results are quantized and entropy coded. In this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the (0,0) element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies. |
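A minimal sketch of that per-block transform and quantization with SciPy; the flat quantization table is a placeholder for illustration, not the actual JPEG luminance table:

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels

# 2-D DCT-II = 1-D DCT-II along rows, then columns (dctn does both).
coeffs = dctn(block, type=2, norm='ortho')
# coeffs[0, 0] is the DC component; higher indices = higher spatial frequency.

Q = np.full((8, 8), 16.0)            # placeholder quantization table
quantized = np.round(coeffs / Q)     # lossy step: small coefficients become 0

reconstructed = idctn(quantized * Q, type=2, norm='ortho') + 128
```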
Discrete cosine transform | Applications | The integer DCT, an integer approximation of the DCT, is used in Advanced Video Coding (AVC), introduced in 2003, and High Efficiency Video Coding (HEVC), introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images. AVC uses 4 × 4 and 8 × 8 blocks. HEVC and HEIF use varied block sizes between 4 × 4 and 32 × 32 pixels. As of 2019, AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC which is used by 43% of developers. |
Discrete cosine transform | Applications | MD DCT Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications such as hyperspectral imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D compression. Owing to enhancements in hardware and software and the introduction of several fast algorithms, the use of M-D DCTs is rapidly increasing. The DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filter banks, the lapped orthogonal transform, and cosine-modulated wavelet bases. |
Discrete cosine transform | Applications | Digital signal processing The DCT plays a very important role in digital signal processing; by using the DCT, signals can be compressed. The DCT can be used in electrocardiography for the compression of ECG signals, where the two-dimensional DCT (DCT2) provides a better compression ratio than the one-dimensional DCT.
The DCT is widely implemented in digital signal processors (DSP), as well as digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips. |
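The compression workflow these applications share is simple: transform, keep the few largest coefficients, and invert. A generic sketch on a synthetic signal (not real ECG data; the coefficient count is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)  # synthetic signal

coeffs = dct(signal, type=2, norm='ortho')
k = 20                                         # keep the 20 largest coefficients
small = np.argsort(np.abs(coeffs))[:-k]        # indices of all the smaller ones
coeffs[small] = 0.0                            # discard them (the lossy step)

approx = idct(coeffs, type=2, norm='ortho')    # 512 samples rebuilt from 20 numbers
print(np.max(np.abs(signal - approx)))         # small reconstruction error
```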
Discrete cosine transform | Applications | Compression artifacts A common issue with DCT compression in digital media is blocky compression artifacts, caused by DCT blocks. The DCT algorithm can cause block-based artifacts when heavy compression is applied. Due to the DCT being used in the majority of digital image and video coding standards (such as the JPEG, H.26x and MPEG formats), DCT-based blocky compression artifacts are widespread in digital media. In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other, then the DCT of these blocks is taken, and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios. This can also cause the "mosquito noise" effect, commonly found in digital video (such as the MPEG formats). DCT blocks are often used in glitch art. The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art, particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 digital audio. Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style. |
Discrete cosine transform | Informal overview | Like any Fourier-related transform, discrete cosine transforms (DCTs) express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms. |
Discrete cosine transform | Informal overview | The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f(x) as a sum of sinusoids, you can evaluate that sum at any x , even for x where the original f(x) was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function. |
Discrete cosine transform | Informal overview | However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain (i.e. the min-n and max-n boundaries in the definitions below, respectively). Second, one has to specify around what point the function is even or odd. In particular, consider a sequence abcd of four equally spaced data points, and say that we specify an even left boundary. There are two sensible possibilities: either the data are even about the sample a, in which case the even extension is dcbabcd, or the data are even about the point halfway between a and the previous point, in which case the even extension is dcbaabcd (a is repeated). |
Discrete cosine transform | Informal overview | These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. Half of these possibilities, those where the left boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST. |
Discrete cosine transform | Informal overview | These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the "energy compactification" properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series. |
Discrete cosine transform | Informal overview | In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed. (Here, we think of the DFT or DCT as approximations for the Fourier series or cosine series of a function, respectively, in order to talk about its "smoothness".) However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries. (A similar problem arises for the DST, in which the odd left boundary condition implies a discontinuity for any function that does not happen to be zero at that boundary.) In contrast, a DCT where both boundaries are even always yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience. |
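This boundary effect is easy to observe numerically: for a segment whose endpoints differ, the DCT coefficients decay much faster than the DFT coefficients. A small sketch (the ramp signal and coefficient cutoff are arbitrary illustrative choices):

```python
import numpy as np
from scipy.fft import dct

x = np.linspace(0.0, 1.0, 64)    # a ramp: smooth, but its endpoints differ
x -= x.mean()                    # remove the DC offset so it doesn't dominate

dft_mag = np.abs(np.fft.rfft(x))                 # periodic extension -> jump at the seam
dct_mag = np.abs(dct(x, type=2, norm='ortho'))   # even extension -> continuous

# Fraction of (coefficient) energy captured by the first 8 coefficients:
print((dft_mag[:8] ** 2).sum() / (dft_mag ** 2).sum())  # noticeably below 1
print((dct_mag[:8] ** 2).sum() / (dct_mag ** 2).sum())  # very close to 1
```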
Discrete cosine transform | Formal definition | Formally, the discrete cosine transform is a linear, invertible function $f : \mathbb{R}^N \to \mathbb{R}^N$ (where $\mathbb{R}$ denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers $x_0, \ldots, x_{N-1}$ are transformed into the N real numbers $X_0, \ldots, X_{N-1}$ according to one of the formulas:

DCT-I: $X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\left[\frac{\pi}{N-1}\, n k\right]$ for $k = 0, \ldots, N-1$. |
Discrete cosine transform | Formal definition | Some authors further multiply the $x_0$ and $x_{N-1}$ terms by $\sqrt{2}$, and correspondingly multiply the $X_0$ and $X_{N-1}$ terms by $1/\sqrt{2}$, which makes the DCT-I matrix orthogonal if one further multiplies by an overall scale factor of $\sqrt{2/(N-1)}$, but breaks the direct correspondence with a real-even DFT. |
Discrete cosine transform | Formal definition | The DCT-I is exactly equivalent (up to an overall scale factor of 2), to a DFT of 2(N−1) real numbers with even symmetry. For example, a DCT-I of N=5 real numbers abcde is exactly equivalent to a DFT of eight real numbers abcdedcb (even symmetry), divided by two. (In contrast, DCT types II-IV involve a half-sample shift in the equivalent DFT.) Note, however, that the DCT-I is not defined for N less than 2, while all other DCT types are defined for any positive N. |
Discrete cosine transform | Formal definition | Thus, the DCT-I corresponds to the boundary conditions: $x_n$ is even around $n=0$ and even around $n=N-1$; similarly for $X_k$.
DCT-II: $X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right) k\right]$ for $k = 0, \ldots, N-1$. |
Discrete cosine transform | Formal definition | The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs $y_n$, where $y_{2n}=0$, $y_{2n+1}=x_n$ for $0 \le n < N$, $y_{2N}=0$, and $y_{4N-n}=y_n$ for $0 < n < 2N$. |
Discrete cosine transform | Formal definition | The DCT-II can also be computed via a DFT of a rearranged 2N-point (or even N-point) signal followed by a multiplication by a half-sample phase shift; this is demonstrated by Makhoul. |
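A sketch of one such FFT-based evaluation (Makhoul's N-point reordering), checked here against SciPy's unnormalized DCT-II:

```python
import numpy as np
from scipy.fft import dct

def dct2_via_fft(x):
    """Unnormalized DCT-II of x computed from one N-point complex FFT."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    v = np.concatenate([x[0::2], x[1::2][::-1]])   # even entries up, odd entries reversed
    V = np.fft.fft(v)
    k = np.arange(N)
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)  # half-sample phase shift

x = np.random.rand(16)
assert np.allclose(dct2_via_fft(x), dct(x, type=2))  # matches the direct DCT-II
```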
Discrete cosine transform | Formal definition | Some authors further multiply the $X_0$ term by $1/\sqrt{2}$ and multiply the rest of the matrix by an overall scale factor of $\sqrt{2/N}$ (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab, for example. In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications. The DCT-II implies the boundary conditions: $x_n$ is even around $n=-1/2$ and even around $n=N-1/2$; $X_k$ is even around $k=0$ and odd around $k=N$. |
Discrete cosine transform | Formal definition | DCT-III: $X_k = \frac{1}{2} x_0 + \sum_{n=1}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(k + \frac{1}{2}\right) n\right]$ for $k = 0, \ldots, N-1$.

Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT"). Some authors divide the $x_0$ term by $\sqrt{2}$ instead of by 2 (resulting in an overall $x_0/\sqrt{2}$ term) and multiply the resulting matrix by an overall scale factor of $\sqrt{2/N}$ (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. |
Discrete cosine transform | Formal definition | The DCT-III implies the boundary conditions: $x_n$ is even around $n=0$ and odd around $n=N$; $X_k$ is even around $k=-1/2$ and odd around $k=N-1/2$.
DCT-IV: $X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right]$ for $k = 0, \ldots, N-1$. |
Discrete cosine transform | Formal definition | The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of $\sqrt{2/N}$. A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT). The DCT-IV implies the boundary conditions: $x_n$ is even around $n=-1/2$ and odd around $n=N-1/2$; similarly for $X_k$. |
Discrete cosine transform | Formal definition | DCT V-VIII DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. |
Discrete cosine transform | Formal definition | In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether N is even or odd), since the corresponding DFT is of length 2(N−1) (for DCT-I) or 4N (for DCT-II & III) or 8N (for DCT-IV). The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of N±1/2 in the denominators of the cosine arguments. |
Discrete cosine transform | Formal definition | However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
(The trivial real-even array, a length-one DFT (odd length) of a single number a, corresponds to a DCT-V of length 1.) |
Discrete cosine transform | Inverse transforms | Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N and vice versa. Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by $\sqrt{2/N}$ so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of $\sqrt{2}$ (see above), this can be used to make the transform matrix orthogonal. |
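These conventions are easy to verify numerically; for example, SciPy's unnormalized DCT-III inverts its unnormalized DCT-II up to a factor of 2N (SciPy's definitions carry an extra factor of 2 relative to the ones above), while the `norm='ortho'` convention makes the pair exact inverses:

```python
import numpy as np
from scipy.fft import dct

x = np.random.rand(8)
N = len(x)

# Unnormalized conventions: DCT-III(DCT-II(x)) = 2N * x
y = dct(dct(x, type=2), type=3)
assert np.allclose(y, 2 * N * x)

# Orthonormal convention: the two transforms are exact inverses of each other
z = dct(dct(x, type=2, norm='ortho'), type=3, norm='ortho')
assert np.allclose(z, x)
```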
Discrete cosine transform | Multidimensional DCTs | Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.
M-D DCT-II For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above):

$X_{k_1,k_2} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x_{n_1,n_2} \cos\left[\frac{\pi}{N_1}\left(n_1 + \frac{1}{2}\right) k_1\right] \cos\left[\frac{\pi}{N_2}\left(n_2 + \frac{1}{2}\right) k_2\right]$. |
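Separability means the 2-D transform really is just two passes of the 1-D transform, which can be confirmed directly (using SciPy's DCT as the 1-D building block):

```python
import numpy as np
from scipy.fft import dct, dctn

A = np.random.rand(8, 8)

# 1-D DCT-II along the rows, then along the columns...
rows_then_cols = dct(dct(A, type=2, axis=1), type=2, axis=0)

# ...equals the 2-D DCT-II computed in one call.
assert np.allclose(rows_then_cols, dctn(A, type=2))
```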
Discrete cosine transform | Multidimensional DCTs | The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm. The 3-D DCT-II is simply the extension of the 2-D DCT-II to three-dimensional space and mathematically can be calculated by the formula

$X_{k_1,k_2,k_3} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} \sum_{n_3=0}^{N_3-1} x_{n_1,n_2,n_3} \cos\left[\frac{\pi}{N_1}\left(n_1 + \frac{1}{2}\right) k_1\right] \cos\left[\frac{\pi}{N_2}\left(n_2 + \frac{1}{2}\right) k_2\right] \cos\left[\frac{\pi}{N_3}\left(n_3 + \frac{1}{2}\right) k_3\right]$ for $k_i = 0, 1, \ldots, N_i - 1$. |
Discrete cosine transform | Multidimensional DCTs | The inverse of the 3-D DCT-II is the 3-D DCT-III, obtained by applying the one-dimensional DCT-III along each of the three dimensions; omitting normalization (and the halving of the $k_i = 0$ terms, as in the 1-D DCT-III), it has the form

$x_{n_1,n_2,n_3} = \sum_{k_1=0}^{N_1-1} \sum_{k_2=0}^{N_2-1} \sum_{k_3=0}^{N_3-1} X_{k_1,k_2,k_3} \cos\left[\frac{\pi}{N_1}\left(n_1 + \frac{1}{2}\right) k_1\right] \cos\left[\frac{\pi}{N_2}\left(n_2 + \frac{1}{2}\right) k_2\right] \cos\left[\frac{\pi}{N_3}\left(n_3 + \frac{1}{2}\right) k_3\right]$ for $n_i = 0, 1, \ldots, N_i - 1$. |
Discrete cosine transform | Multidimensional DCTs | Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in applications based on the 3-D DCT, several fast algorithms have been developed for the computation of the 3-D DCT-II. Vector-radix algorithms are applied to the M-D DCT to reduce its computational complexity and increase computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation in Frequency (VR DIF) algorithm, was developed. |
Discrete cosine transform | Multidimensional DCTs | 3-D DCT-II VR DIF In order to apply the VR DIF algorithm, the input data must be formulated and rearranged as follows. The transform size N × N × N is assumed to be a power of two ($N = 2^m$). |
Discrete cosine transform | Multidimensional DCTs | x~(n1,n2,n3)=x(2n1,2n2,2n3)x~(n1,n2,N−n3−1)=x(2n1,2n2,2n3+1)x~(n1,N−n2−1,n3)=x(2n1,2n2+1,2n3)x~(n1,N−n2−1,N−n3−1)=x(2n1,2n2+1,2n3+1)x~(N−n1−1,n2,n3)=x(2n1+1,2n2,2n3)x~(N−n1−1,n2,N−n3−1)=x(2n1+1,2n2,2n3+1)x~(N−n1−1,N−n2−1,n3)=x(2n1+1,2n2+1,2n3)x~(N−n1−1,N−n2−1,N−n3−1)=x(2n1+1,2n2+1,2n3+1) where 0≤n1,n2,n3≤N2−1 The figure to the adjacent shows the four stages that are involved in calculating 3-D DCT-II using VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together as shown in the figure just below, where cos (φi) The original 3-D DCT-II now can be written as cos cos cos (φk3) where and 3. |
Discrete cosine transform | Multidimensional DCTs | If the even and the odd parts of $k_1$, $k_2$ and $k_3$ are considered, the general formula for the calculation of the 3-D DCT-II can be expressed in terms of the combined quantities

$\tilde{x}_{ijl}(n_1,n_2,n_3) = \sum_{p,\,q,\,r \,\in\, \{0,1\}} (-1)^{ip+jq+lr}\; \tilde{x}\!\left(n_1 + p\tfrac{N}{2},\; n_2 + q\tfrac{N}{2},\; n_3 + r\tfrac{N}{2}\right)$

where $i, j, l = 0$ or $1$ and $0 \le n_1, n_2, n_3 \le \frac{N}{2}-1$. |
Discrete cosine transform | Multidimensional DCTs | Arithmetic complexity The whole 3-D DCT calculation needs $\log_2 N$ stages, and each stage involves $\frac{1}{8}N^3$ butterflies, so the whole 3-D DCT requires $\frac{1}{8}N^3 \log_2 N$ butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is $\frac{7}{8}N^3 \log_2 N$, and the total number of real additions, i.e. including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage, is $3N^3 \log_2 N$ (real) plus $\frac{3}{2}N^3 \log_2 N - 3N^3 + 3N^2$ (recursive), for a total of $\frac{9}{2}N^3 \log_2 N - 3N^3 + 3N^2$. |
Discrete cosine transform | Multidimensional DCTs | The conventional method to calculate the MD-DCT-II is the Row-Column-Frame (RCF) approach, which is computationally complex and less efficient on most advanced recent hardware platforms. The numbers of multiplications and additions involved in the RCF approach are given by $\frac{3}{2}N^3 \log_2 N$ and $\frac{9}{2}N^3 \log_2 N - 3N^3 + 3N^2$, respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transposes and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications. |
Discrete cosine transform | Multidimensional DCTs | The main consideration in choosing a fast algorithm is to avoid computational and structural complexities. As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor. Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications, it has a simpler computational structure as compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms. |
Discrete cosine transform | Multidimensional DCTs | The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 (N1=N2=8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle.
For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data ( 8×8 ) is transformed to a linear combination of these 64 frequency squares.
MD-DCT-IV The M-D DCT-IV is just an extension of the 1-D DCT-IV to an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by

$X_{k,\ell} = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} x_{n,m} \cos\left(\frac{(2n+1)(2k+1)\pi}{4N}\right) \cos\left(\frac{(2m+1)(2\ell+1)\pi}{4M}\right)$, for $k = 0, 1, \ldots, N-1$ and $\ell = 0, 1, \ldots, M-1$.
We can compute the MD DCT-IV using the regular row-column method, or we can use the polynomial transform method for fast and efficient computation. The main idea of that algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly (a row-column sketch follows below). The MD DCT-IV also has several applications in various fields. |
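In the row-column method the M-D DCT-IV factors exactly as the DCT-II does; a quick sketch of the 2-D case using SciPy's type-4 DCT as the 1-D kernel (the polynomial transform method itself is not shown):

```python
import numpy as np
from scipy.fft import dct, dctn

A = np.random.rand(8, 16)   # an N x M input

# Row-column evaluation of the 2-D DCT-IV
rc = dct(dct(A, type=4, axis=1), type=4, axis=0)
assert np.allclose(rc, dctn(A, type=4))   # matches the direct 2-D transform
```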
Discrete cosine transform | Computation | Although the direct application of these formulas would require $O(N^2)$ operations, it is possible to compute the same thing with only $O(N \log N)$ complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with $O(N)$ pre- and post-processing steps. In general, $O(N \log N)$ methods to compute DCTs are known as fast cosine transform (FCT) algorithms. |
Discrete cosine transform | Computation | The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus O(N) extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson 2005). Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by (Feig & Winograd 1992a) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli 1990). |
Discrete cosine transform | Computation | While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: Highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms. |
Discrete cosine transform | Computation | Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.) In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size 4N with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II. |
Discrete cosine transform | Computation | Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-N real-data FFT is also performed by a real-data split-radix algorithm (as in Sorensen et al. (1987)), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II ($2N \log_2 N - N + 2$ real-arithmetic operations). |
Discrete cosine transform | Computation | A recent reduction in the operation count to $\frac{17}{9} N \log_2 N + O(N)$ also uses a real-data FFT. So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small N, but this is an implementation rather than an algorithmic question since it can be solved by unrolling or inlining.) |
Discrete cosine transform | Example of IDCT | Consider this 8 × 8 grayscale image of capital letter A.
Each basis function is multiplied by its coefficient and then this product is added to the final image. |
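A sketch of that accumulation with SciPy, using a random 8 × 8 block as a stand-in for the letter-A image: each kept coefficient contributes one weighted basis function to the running sum.

```python
import numpy as np
from scipy.fft import dctn, idctn

image = np.random.rand(8, 8)                 # stand-in for the 8x8 letter image
coeffs = dctn(image, type=2, norm='ortho')

partial = np.zeros((8, 8))
# Add basis functions one coefficient at a time, largest magnitude first.
order = np.dstack(np.unravel_index(np.argsort(-np.abs(coeffs), axis=None), (8, 8)))[0]
for u, v in order:
    single = np.zeros((8, 8))
    single[u, v] = coeffs[u, v]
    partial += idctn(single, type=2, norm='ortho')  # coefficient times its basis image

assert np.allclose(partial, image)           # all 64 terms reproduce the input exactly
```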
Monocytopenia | Monocytopenia | Monocytopenia is a form of leukopenia associated with a deficiency of monocytes.
It has been proposed as a measure during chemotherapy to predict neutropenia, though some research indicates that it is less effective than lymphopenia. |
Monocytopenia | Causes | The causes of monocytopenia include: acute infections, stress, treatment with glucocorticoids, aplastic anemia, hairy cell leukemia, acute myeloid leukemia, treatment with myelotoxic drugs, and genetic syndromes such as MonoMAC syndrome. |
Monocytopenia | Diagnosis | Blood test (CBC): the normal range of monocytes is 1–10% of white cells (normal range in males: 0.2–0.8 × 10³/microliter). Monocytopenia is indicated when values fall below these ranges: <1% (in males: <0.2 × 10³/microliter). |
Shc (shell script compiler) | Shc (shell script compiler) | shc is a shell script compiler for Unix-like operating systems written in the C programming language. The Shell Script Compiler (SHC) encodes and encrypts shell scripts into executable binaries. Compiling shell scripts into binaries provides protection against accidental changes and source code modification, and is a way of hiding shell script source code. |
Shc (shell script compiler) | Mechanism | shc takes a shell script, specified on the command line by the -f option, and produces C source code for the script with added encryption. The generated source code is then compiled and linked to produce a binary executable. It is a two-step process: first, shc creates a filename.x.c file from the shell script file filename; this is then compiled with cc $CFLAGS filename.x.c to create the binary from the C source code with the default C compiler. The compiled binary will still be dependent on the shell specified in the shebang (e.g. #!/bin/sh), thus shc does not create completely independent binaries. shc itself is not a compiler such as the C compiler; rather, it encodes and encrypts a shell script and generates C source code with an added expiration capability. It then uses the system C compiler to compile the source and build a stripped binary which behaves exactly like the original script. Upon execution, the compiled binary decrypts and executes the code with the shell's -c option. |
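The two-step pipeline can be scripted; a hedged sketch in Python (the script name is illustrative, and only the -f option described above is used):

```python
import subprocess

# Step 1: shc turns script.sh into encrypted C source, script.sh.x.c
subprocess.run(["shc", "-f", "script.sh"], check=True)

# Step 2: the generated C source is compiled with the system C compiler
subprocess.run(["cc", "script.sh.x.c", "-o", "script.sh.x"], check=True)

# The resulting binary still invokes the shell named in the script's shebang
subprocess.run(["./script.sh.x"], check=True)
```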
Desktop environment | Desktop environment | In computing, a desktop environment (DE) is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system that share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface (CLI) is still used when full control over the operating system is required. |
Desktop environment | Desktop environment | A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP). A GUI might also provide drag and drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows. |
Desktop environment | Desktop environment | While the term desktop environment originally described a style of user interfaces following the desktop metaphor, it has also come to describe the programs that realize the metaphor itself. This usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment, and GNOME. |
Desktop environment | Implementation | On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are generally responsible for most of what the user sees. The window manager supports the user interactions with the environment, while the toolkit provides developers a software library for applications with a unified look and behavior. |
Desktop environment | Implementation | A windowing system of some sort generally interfaces directly with the underlying operating system and libraries. This provides support for graphical hardware, pointing devices, and keyboards. The window manager generally runs on top of this windowing system. While the windowing system may provide some window management functionality, this functionality is still considered to be part of the window manager, which simply happens to have been provided by the windowing system. |
Desktop environment | Implementation | Applications that are created with a particular window manager in mind usually make use of a windowing toolkit, generally provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way. |
Desktop environment | History and common use | The first desktop environment was created by Xerox and was sold with the Xerox Alto in the 1970s. The Alto was generally considered by Xerox to be a personal office computer; it failed in the marketplace because of poor marketing and a very high price tag. With the Lisa, Apple introduced a desktop environment on an affordable personal computer, which also failed in the market. |
Desktop environment | History and common use | The desktop metaphor was popularized on commercial personal computers by the original Macintosh from Apple in 1984, and was popularized further by Windows from Microsoft since the 1990s. As of 2014, the most popular desktop environments are descendants of these earlier environments, including the Windows shell used in Microsoft Windows, and the Aqua environment used in macOS. When compared with the X-based desktop environments available for Unix-like operating systems such as Linux and BSD, the proprietary desktop environments included with Windows and macOS have relatively fixed layouts and static features, with highly integrated "seamless" designs that aim to provide mostly consistent customer experiences across installations. |
Desktop environment | History and common use | Microsoft Windows dominates in market share among personal computers with a desktop environment. Computers using Unix-like operating systems such as macOS, ChromeOS, Linux, BSD or Solaris are much less common; however, as of 2015 there is a growing market for low-cost Linux PCs using the X Window System or Wayland with a broad choice of desktop environments. Among the more popular of these are Google's Chromebooks and Chromeboxes, Intel's NUC, and the Raspberry Pi. On tablets and smartphones, the situation is the opposite: Unix-like operating systems dominate the market, including iOS (BSD-derived) and Android, Tizen, Sailfish and Ubuntu (all Linux-derived). Microsoft's Windows Phone, Windows RT and Windows 10 are used on a much smaller number of tablets and smartphones. However, the majority of the Unix-like operating systems dominant on handheld devices do not use the X11 desktop environments used by other Unix-like operating systems, relying instead on interfaces based on other technologies.
Desktop environment | Desktop environments for the X Window System | On systems running the X Window System (typically Unix-family systems such as Linux, the BSDs, and formal UNIX distributions), desktop environments are much more dynamic and customizable to meet user needs. In this context, a desktop environment typically consists of several separate components, including a window manager (such as Mutter or KWin), a file manager (such as Files or Dolphin), a set of graphical themes, together with toolkits (such as GTK+ and Qt) and libraries for managing the desktop. All these individual modules can be exchanged and independently configured to suit users, but most desktop environments provide a default configuration that works with minimal user setup. |
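A sketch of this modularity, assuming a classic startx session driven by ~/.xinitrc; every component named below is an interchangeable example, not a requirement:

    # ~/.xinitrc: a hand-assembled desktop session from separate components.
    xsetroot -solid grey &     # minimal wallpaper
    pcmanfm --desktop &        # file manager, here also drawing desktop icons
    nm-applet &                # network status applet
    exec openbox               # window manager; the session ends when it exits

Swapping openbox for another window manager, or pcmanfm for another file manager, changes the desktop without touching the rest of the stack, which is exactly the modularity a packaged desktop environment pre-configures for the user.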
Desktop environment | Desktop environments for the X Window System | Some window managers—such as IceWM, Fluxbox, Openbox, ROX Desktop and Window Maker—contain relatively sparse desktop environment elements, such as an integrated spatial file manager, while others like evilwm and wmii do not provide such elements. Not all of the program code that is part of a desktop environment has effects which are directly visible to the user. Some of it may be low-level code. KDE, for example, provides so-called KIO slaves which give the user access to a wide range of virtual devices. These I/O slaves are not available outside the KDE environment. |
Desktop environment | Desktop environments for the X Window System | In 1996 KDE was announced, followed in 1997 by the announcement of GNOME. Xfce is a smaller project that was also founded in 1996 and focuses on speed and modularity, as does LXDE, which was started in 2006. A comparison of X Window System desktop environments demonstrates the differences between environments. GNOME and KDE were usually seen as the dominant solutions, and these are still often installed by default on Linux systems. Each of them offers:
- To programmers: a set of standard APIs, a programming environment, and human interface guidelines.
- To translators: a collaboration infrastructure; KDE and GNOME are available in many languages.
- To artists: a workspace to share their talents.
- To ergonomics specialists: the chance to help simplify the working environment.
- To developers of third-party applications: a reference environment for integration; OpenOffice.org is one such application.
- To users: a complete desktop environment and a suite of essential applications, including a file manager, web browser, multimedia player, email client, address book, PDF reader, photo manager, and system preferences application.
Desktop environment | Desktop environments for the X Window System | In the early 2000s, KDE reached maturity. The Appeal and ToPaZ projects focused on bringing new advances to the next major releases of KDE and GNOME, respectively. Although striving for broadly similar goals, GNOME and KDE differ in their approach to user ergonomics. KDE encourages applications to integrate and interoperate, is highly customizable, and contains many complex features, all while trying to establish sensible defaults. GNOME, on the other hand, is more prescriptive and focuses on the finer details of essential tasks and overall simplification. Accordingly, each attracts a different user and developer community. Technically, there are numerous technologies common to all Unix-like desktop environments, most obviously the X Window System. Consequently, the freedesktop.org project was established as an informal collaboration zone with the goal of reducing duplication of effort.
Desktop environment | Desktop environments for the X Window System | As GNOME and KDE focus on high-performance computers, users of less powerful or older computers often prefer alternative desktop environments specifically created for low-performance systems. The most commonly used lightweight desktop environments include LXDE and Xfce; both use GTK+, the same underlying toolkit GNOME uses. The MATE desktop environment, a fork of GNOME 2, is comparable to Xfce in its use of RAM and processor cycles, but is often considered more of an alternative to other lightweight desktop environments.
Desktop environment | Desktop environments for the X Window System | For a while, GNOME and KDE enjoyed the status of the most popular Linux desktop environments; later, other desktop environments grew in popularity. In April 2011, GNOME introduced a new interface concept with its version 3, while the popular Linux distribution Ubuntu introduced its own new desktop environment, Unity. Some users preferred to keep the traditional interface concept of GNOME 2, resulting in the creation of MATE as a GNOME 2 fork.
Desktop environment | Examples of desktop environments | The most common desktop environment on personal computers is the Windows shell in Microsoft Windows. Microsoft has made significant efforts to make the Windows shell visually pleasing, introducing theme support in Windows 98, the various Windows XP visual styles, the Aero brand in Windows Vista, the Microsoft design language (codenamed "Metro") in Windows 8, and the Fluent Design System and Windows Spotlight in Windows 10. The Windows shell can be extended via shell extensions.