Fields per record: content_id, page_title, section_title, breadcrumb, text
c_kbhy7v95w6fi
Burgers vector
Summary
Burgers_vector
Specifically, the breadth of the opening defines the magnitude of the Burgers vector, and, when a set of fixed coordinates is introduced, an angle between the termini of the dislocated rectangle's length line segment and width line segment may be specified. When calculating the Burgers vector practically, one may draw a rectangular counterclockwise circuit (Burgers circuit) from a starting point to enclose the dislocation (see the picture above). The Burgers vector is then the vector needed to complete the circuit, i.e., the vector from the end of the circuit back to its start. The direction of the vector depends on the plane of dislocation, which is usually one of the closest-packed crystallographic planes.
c_cq9s99zrwut6
Burgers vector
Summary
Burgers_vector
In most metallic materials, the magnitude of the Burgers vector for a dislocation is equal to the interatomic spacing of the material, since a single dislocation offsets the crystal lattice by one close-packed crystallographic spacing unit. In edge dislocations, the Burgers vector and dislocation line are perpendicular to one another; in screw dislocations, they are parallel. The Burgers vector is significant in determining the yield strength of a material by affecting solute hardening, precipitation hardening and work hardening. The Burgers vector also plays an important role in determining the direction of the dislocation line.
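As a quick numerical illustration of the relationship between the Burgers vector and the interatomic spacing, the sketch below evaluates |b| = (a/2)·sqrt(h² + k² + l²) for a ½⟨110⟩ dislocation in an FCC lattice; the function name is ours and the copper lattice parameter is an approximate literature value, not taken from the article.

```python
import math

def burgers_magnitude(a, h, k, l, fraction=0.5):
    """Magnitude of a Burgers vector b = fraction * a * [h k l]."""
    return fraction * a * math.sqrt(h * h + k * k + l * l)

a_cu = 0.3615  # nm, approximate lattice parameter of FCC copper (assumed value)
b = burgers_magnitude(a_cu, 1, 1, 0)      # 1/2<110> slip vector in FCC
nearest_neighbour = a_cu / math.sqrt(2)   # close-packed atomic spacing in FCC

print(f"|b| = {b:.4f} nm, nearest-neighbour spacing = {nearest_neighbour:.4f} nm")
# The two values coincide, matching the statement above.
```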
c_kcxl9bwvix8g
Charpy test
Summary
Charpy_test
In materials science, the Charpy impact test, also known as the Charpy V-notch test, is a standardized high strain rate test which determines the amount of energy absorbed by a material during fracture. Absorbed energy is a measure of the material's notch toughness. It is widely used in industry, since it is easy to prepare and conduct and results can be obtained quickly and cheaply.
c_uby0a46r94vd
Charpy test
Summary
Charpy_test
A disadvantage is that some results are only comparative. The test was pivotal in understanding the fracture problems of ships during World War II. The test was developed around 1900 by S. B. Russell (1898, American) and Georges Charpy (1901, French), and became known as the Charpy test in the early 1900s due to the technical contributions and standardization efforts of Charpy.
c_yvb5so5ecui3
Zener–Hollomon parameter
Summary
Zener–Hollomon_parameter
In materials science, the Zener–Hollomon parameter, typically denoted Z, is used to relate changes in temperature or strain rate to the stress–strain behavior of a material. It has been most extensively applied to the forming of steels at elevated temperature, when creep is active. It is given by $Z = \dot{\varepsilon}\exp(Q/RT)$, where $\dot{\varepsilon}$ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since temperature enters its definition inversely, through the exponent Q/RT.
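A minimal numerical sketch of the definition above; the function name and the illustrative values of Q, T and the strain rate are ours, not taken from the article.

```python
import math

R = 8.314  # J/(mol*K), gas constant

def zener_hollomon(strain_rate, Q, T):
    """Z = strain_rate * exp(Q / (R * T))."""
    return strain_rate * math.exp(Q / (R * T))

Q = 300e3  # J/mol, assumed activation energy for hot working of a steel
for T in (1100.0, 1200.0, 1300.0):  # K
    print(T, zener_hollomon(strain_rate=1.0, Q=Q, T=T))
# Z falls as temperature rises, which is why it is called the
# temperature-compensated strain rate.
```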
c_81ra4xs7eoyy
Zener–Hollomon parameter
Summary
Zener–Hollomon_parameter
It is named after Clarence Zener and John Herbert Hollomon, Jr. who established the formula based on the stress-strain behavior in steel.
c_mxdf62uz4ut9
Zener–Hollomon parameter
Summary
Zener–Hollomon_parameter
When plastically deforming a material, the flow stress depends heavily on both the strain rate and the temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep the material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures, and comparable end-of-processing microstructures were observed as long as Z remained similar. This is because the relative activity of the various deformation mechanisms depends on temperature and strain rate in a coupled way: decreasing the strain rate or increasing the temperature lowers Z and promotes plastic deformation.
c_eccr9w48ehk1
Cottrell atmosphere
Summary
Cottrell_atmosphere
In materials science, the concept of the Cottrell atmosphere was introduced by A. H. Cottrell and B. A. Bilby in 1949 to explain how dislocations are pinned in some metals by boron, carbon, or nitrogen interstitials. Cottrell atmospheres occur in body-centered cubic (BCC) and face-centered cubic (FCC) materials, such as iron or nickel, with small impurity atoms, such as boron, carbon, or nitrogen. As these interstitial atoms distort the lattice slightly, there will be an associated residual stress field surrounding the interstitial. This stress field can be relaxed by the interstitial atom diffusing towards a dislocation, which contains a small gap at its core (as it is a more open structure), see Figure 1.
c_1rrs180hhsvy
Cottrell atmosphere
Summary
Cottrell_atmosphere
Once the atom has diffused into the dislocation core the atom will stay. Typically only one interstitial atom is required per lattice plane of the dislocation. The collection of solute atoms around the dislocation core due to this process is the Cottrell atmosphere.
c_66b6yencpmuk
Sessile drop technique
Summary
Sessile_drop_technique
In materials science, the sessile drop technique is a method used for the characterization of solid surface energies, and in some cases, aspects of liquid surface energies. The main premise of the method is that by placing a droplet of a liquid with a known surface energy on the substrate and measuring the contact angle it forms, the surface energy of the solid substrate can be calculated. The liquid used for such experiments is referred to as the probe liquid, and the use of several different probe liquids is required.
c_gxt7f0tsze3x
Sol–gel process
Summary
Sol_gel
In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides. The sol–gel process is also used to produce ceramic nanoparticles.
c_3inp81azfcg6
Two dimensional (2D) nanomaterials
Summary
Two_dimensional_(2D)_nanomaterials
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes.
c_uewm8gstojj7
Two dimensional (2D) nanomaterials
Summary
Two_dimensional_(2D)_nanomaterials
2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements). It is predicted that there are hundreds of stable single-layer materials.
c_mdontgqe7ne2
Two dimensional (2D) nanomaterials
Summary
Two_dimensional_(2D)_nanomaterials
The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials can be found in computational databases. 2D materials can be produced mainly by two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation.
c_vm8qazmssfgk
Threshold displacement energy
Summary
Threshold_displacement_energy
In materials science, the threshold displacement energy (Td) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as "displacement threshold energy" or just "displacement energy". In a crystal, a separate threshold displacement energy exists for each crystallographic direction.
c_8mnybwlzoo22
Threshold displacement energy
Summary
Threshold_displacement_energy
One should then distinguish between the minimum (Td,min) and the average (Td,ave) threshold displacement energy taken over all lattice directions. In amorphous solids, it may be possible to define an effective displacement energy to describe some other average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV.
c_hpxt4ryokzmi
Yield strength anomaly
Summary
Yield_strength_anomaly
In materials science, the yield strength anomaly refers to materials wherein the yield strength (i.e., the stress necessary to initiate plastic yielding) increases with temperature. For the majority of materials, the yield strength decreases with increasing temperature. In metals, this decrease in yield strength is due to the thermal activation of dislocation motion, resulting in easier plastic deformation at higher temperatures. In some cases, a yield strength anomaly refers instead to a decrease in the ductility of a material with increasing temperature, which is also opposite to the trend in the majority of materials. Anomalies in ductility can be clearer, since an anomalous effect on yield strength can be obscured by the typical decrease of yield strength with temperature.
c_3zwbkrnlfhrv
Yield strength anomaly
Summary
Yield_strength_anomaly
In concert with yield strength or ductility anomalies, some materials demonstrate extrema in other temperature-dependent properties, such as a minimum in ultrasonic damping or a maximum in electrical conductivity. The yield strength anomaly in β-brass was one of the earliest discoveries of such a phenomenon, and several other ordered intermetallic alloys demonstrate this effect. Precipitation-hardened superalloys exhibit a yield strength anomaly over a considerable temperature range. For these materials, the yield strength shows little variation between room temperature and several hundred degrees Celsius.
c_waxo2j45td41
Yield strength anomaly
Summary
Yield_strength_anomaly
Eventually, a maximum yield strength is reached. For even higher temperatures, the yield strength decreases and, eventually, drops to zero when reaching the melting temperature, where the solid material transforms into a liquid. For ordered intermetallics, the temperature of the yield strength peak is roughly 50% of the absolute melting temperature.
c_jf2nvk1ajh6o
Toughening
Summary
Toughening
In materials science, toughening refers to the process of making a material more resistant to the propagation of cracks. When a crack propagates, the associated irreversible work in different materials classes is different. Thus, the most effective toughening mechanisms differ among different materials classes. The crack tip plasticity is important in toughening of metals and long-chain polymers. Ceramics have limited crack tip plasticity and primarily rely on different toughening mechanisms.
c_fk6eaeg6453h
Cold pressing
Summary
Strain_hardening
In materials science, work hardening, also known as strain hardening, is the strengthening of a metal or polymer by plastic deformation. Work hardening may be desirable, undesirable, or inconsequential, depending on the context. This strengthening occurs because of dislocation movements and dislocation generation within the crystal structure of the material.
c_bht1sely95f9
Cold pressing
Summary
Strain_hardening
Many non-brittle metals with a reasonably high melting point, as well as several polymers, can be strengthened in this fashion. Alloys not amenable to heat treatment, including low-carbon steel, are often work-hardened. Some materials, such as indium, cannot be work-hardened at low temperatures, whereas others, such as pure copper and aluminum, can be strengthened only via work hardening.
c_0m8faprm0esw
Antiferromagnetic interaction
Summary
Antiferromagnetism
In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933. Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature, named after Louis Néel, who first identified this type of magnetic ordering. Above the Néel temperature, the material is typically paramagnetic.
c_w9o14dyomcd4
Spatial dispersion
In isotropic media
Spatial_dispersion > Spatial dispersion in electromagnetism > In isotropic media
In materials that have no relevant crystalline structure, spatial dispersion can be important. Although symmetry demands that the permittivity is isotropic for zero wavevector, this restriction does not apply for nonzero wavevector. The non-isotropic permittivity for nonzero wavevector leads to effects such as optical activity in solutions of chiral molecules. In isotropic materials without optical activity, the permittivity tensor can be broken down to transverse and longitudinal components, referring to the response to electric fields either perpendicular or parallel to the wavevector. For frequencies near an absorption line (e.g., an exciton), spatial dispersion can play an important role.
c_12nx9otvsb4i
Corrosion fatigue
Stress-corrosion fatigue
Corrosion_fatigue > Crack-propagation studies in corrosion fatigue > Stress-corrosion fatigue
In materials where the maximum applied-stress-intensity factor exceeds the stress-corrosion cracking-threshold value, stress corrosion adds to crack-growth velocity. This is shown in the schematic on the right. In a corrosive environment, the crack grows due to cyclic loading at a lower stress-intensity range; above the threshold stress intensity for stress corrosion cracking, additional crack growth (the red line) occurs due to SCC. The lower stress-intensity regions are not affected, and the threshold stress-intensity range for fatigue-crack propagation is unchanged in the corrosive environment. In the most-general case, corrosion-fatigue crack growth may exhibit both of the above effects; crack-growth behavior is represented in the schematic on the left.
c_ryyorxxhz4io
Narrow gap
Optical versus electronic bandgap
Bandgap_energy > Optical versus electronic bandgap
In materials with a large exciton binding energy, it is possible for a photon to have just barely enough energy to create an exciton (bound electron–hole pair), but not enough energy to separate the electron and hole (which are electrically attracted to each other). In this situation, there is a distinction between "optical band gap" and "electronic band gap" (or "transport gap"). The optical bandgap is the threshold for photons to be absorbed, while the transport gap is the threshold for creating an electron–hole pair that is not bound together.
c_feey2q68b3eq
Narrow gap
Optical versus electronic bandgap
Bandgap_energy > Optical versus electronic bandgap
The optical bandgap is at lower energy than the transport gap. In almost all inorganic semiconductors, such as silicon, gallium arsenide, etc., there is very little interaction between electrons and holes (very small exciton binding energy), and therefore the optical and electronic bandgap are essentially identical, and the distinction between them is ignored. However, in some systems, including organic semiconductors and single-walled carbon nanotubes, the distinction may be significant.
c_r74aswhyrall
Exciton
Frenkel exciton
Exciton > Frenkel exciton
In materials with a relatively small dielectric constant, the Coulomb interaction between an electron and a hole may be strong and the excitons thus tend to be small, of the same order as the size of the unit cell. Molecular excitons may even be entirely located on the same molecule, as in fullerenes. This Frenkel exciton, named after Yakov Frenkel, has a typical binding energy on the order of 0.1 to 1 eV.
c_n7o97qbc0obe
Exciton
Frenkel exciton
Exciton > Frenkel exciton
Frenkel excitons are typically found in alkali halide crystals and in organic molecular crystals composed of aromatic molecules, such as anthracene and tetracene. Other examples of Frenkel excitons include on-site d-d excitations in transition metal compounds with partially filled d-shells. While d-d transitions are in principle forbidden by symmetry, they become weakly allowed in a crystal when the symmetry is broken by structural relaxations or other effects. Absorption of a photon resonant with a d-d transition leads to the creation of an electron-hole pair on a single atomic site, which can be treated as a Frenkel exciton.
c_nwumcr765b2o
Dynamic strain aging
Description of mechanism
Dynamic_strain_aging > Description of mechanism
In materials, the motion of dislocations is a discontinuous process. When dislocations meet obstacles during plastic deformation (such as particles or forest dislocations), they are temporarily arrested for a certain time. During this time, solutes (such as interstitial particles or substitutional impurities) diffuse around the pinned dislocations, further strengthening the obstacles' hold on the dislocations.
c_2vrqnp68en38
Dynamic strain aging
Description of mechanism
Dynamic_strain_aging > Description of mechanism
With sufficient stress, these dislocations eventually overcome the obstacles and quickly move to the next obstacle, where they are stopped and the process can repeat. The best-known macroscopic manifestations of this process are Lüders bands and the Portevin–Le Chatelier effect. However, the mechanism is known to affect materials without these physical observations.
c_g7sifjefdyfg
Formative evaluation
In math education
Formative_evaluation > Specific applications > In math education
In math education, it is important for teachers to see how their students approach the problems and how much mathematical knowledge, and at what level, students use when solving the problems. That is, knowing how students think in the process of learning or problem solving makes it possible for teachers to help their students overcome conceptual difficulties and, in turn, improve learning. In that sense, formative assessment is diagnostic. To employ formative assessment in the classroom, a teacher has to make sure that each student participates in the learning process by expressing their ideas; that there is a trustful environment in which students can provide each other with feedback; that the teacher provides students with feedback; and that the instruction is modified according to students' needs. In math classes, thought-revealing activities such as model-eliciting activities (MEAs) and generative activities provide good opportunities for covering these aspects of formative assessment.
c_pcpy81di11ri
Pseudo-differential operator
Summary
Pseudo-differential_operators
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
c_r10n7omt7t8s
Oscillatory integral
Summary
Oscillatory_integral
In mathematical analysis an oscillatory integral is a type of distribution. Oscillatory integrals make rigorous many arguments that, on a naive level, appear to use divergent integrals. It is possible to represent approximate solution operators for many differential equations as oscillatory integrals.
c_jy30z1zkvejz
Multidimensional transform
Summary
Multidimensional_transform
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions.
c_ssgc38iexixj
Z-order curve
Summary
Z-order_curve
In mathematical analysis and computer science, a Z-order curve (also known as the Lebesgue curve, Morton space-filling curve, Morton order or Morton code) is a function which maps multidimensional data to one dimension while preserving locality of the data points. It is named in France after Henri Lebesgue, who studied it in 1904, and in the United States after Guy Macdonald Morton, who first applied the order to file sequencing in 1966. The z-value of a point in multiple dimensions is calculated simply by interleaving the binary representations of its coordinate values. Once the data are sorted into this ordering, any one-dimensional data structure can be used, such as simple one-dimensional arrays, binary search trees, B-trees, skip lists or (with low significant bits truncated) hash tables. The resulting ordering can equivalently be described as the order one would get from a depth-first traversal of a quadtree or octree.
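A minimal sketch of the bit interleaving described above for two dimensions; the function name is ours, and fixed-width 16-bit non-negative coordinates are assumed for simplicity.

```python
def morton_encode_2d(x, y, bits=16):
    """Interleave the bits of two non-negative integers (lowest bit first)."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # bits of x go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # bits of y go to odd positions
    return z

points = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 3)]
for p in sorted(points, key=lambda p: morton_encode_2d(*p)):
    print(p, bin(morton_encode_2d(*p)))
# Sorting by z-value visits the points in the characteristic "Z" pattern,
# keeping nearby points close together in the one-dimensional order.
```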
c_92ucf7jhjfgv
Sigma field
Summary
Join_(sigma_algebra)
In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair $(X, \Sigma)$ is called a measurable space. The σ-algebras are a subset of the set algebras; elements of the latter only need to be closed under the union or intersection of finitely many subsets, which is a weaker condition. The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which can be assigned probabilities.
c_z0i5tpyypcuq
Sigma field
Summary
Join_(sigma_algebra)
Also, in probability, σ-algebras are pivotal in the definition of conditional expectation. In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic, particularly when the statistic is a function or a random process and the notion of conditional density is not applicable. If $X = \{a, b, c, d\}$, one possible σ-algebra on $X$ is $\Sigma = \{\varnothing, \{a, b\}, \{c, d\}, \{a, b, c, d\}\}$, where $\varnothing$ is the empty set.
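A small sketch, under our own naming, that checks the finite example above really is closed under complement and (finite) union; for a finite collection this is all that countable closure requires.

```python
from itertools import combinations

X = frozenset("abcd")
sigma = {frozenset(), frozenset("ab"), frozenset("cd"), X}

closed_under_complement = all(X - A in sigma for A in sigma)
closed_under_union = all(A | B in sigma for A, B in combinations(sigma, 2))

print(closed_under_complement, closed_under_union)  # True True
```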
c_1n5zj6tk9563
Sigma field
Summary
Join_(sigma_algebra)
In general, a finite algebra is always a σ-algebra. If $\{A_1, A_2, A_3, \ldots\}$ is a countable partition of $X$, then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy).
c_25ube1e2slk2
Multi-variable function
Summary
Multi-variable_function
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of $\mathbb{R}^n$ for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of $\mathbb{R}^n$.
c_3boowh6klrtx
Laakso space
Summary
Laakso_space
In mathematical analysis and metric geometry, Laakso spaces are a class of metric spaces which are fractal, in the sense that they have non-integer Hausdorff dimension, but that admit a notion of differential calculus. They are constructed as quotient spaces of [0, 1] × K, where K is a Cantor set.
c_z0siec50bpre
Bounded poset
Summary
Bounded_set
In mathematical analysis and related areas of mathematics, a set is called bounded if it is, in a certain sense, of finite size. Conversely, a set which is not bounded is called unbounded. The word "bounded" makes no sense in a general topological space without a corresponding metric.
c_zh4ouc51rylv
Bounded poset
Summary
Bounded_set
Boundary is a distinct concept: for example, a circle in isolation is a boundaryless bounded set, while the half plane is unbounded yet has a boundary. A bounded set is not necessarily a closed set and vice versa. For example, the subset S of the two-dimensional real space R² lying between the two parabolas y = x² + 1 and y = x² − 1 (in Cartesian coordinates) is closed but not bounded, and hence unbounded.
c_unryb77ey98h
Agmon's inequality
Summary
Agmon's_inequality
In mathematical analysis, Agmon's inequalities, named after Shmuel Agmon, consist of two closely related interpolation inequalities between the Lebesgue space $L^\infty$ and the Sobolev spaces $H^s$. They are useful in the study of partial differential equations. Let $u \in H^2(\Omega) \cap H_0^1(\Omega)$, where $\Omega \subset \mathbb{R}^3$. Then Agmon's inequalities in 3D state that there exists a constant $C$ such that
$$\|u\|_{L^\infty(\Omega)} \leq C \|u\|_{H^1(\Omega)}^{1/2} \|u\|_{H^2(\Omega)}^{1/2},$$
and
$$\|u\|_{L^\infty(\Omega)} \leq C \|u\|_{L^2(\Omega)}^{1/4} \|u\|_{H^2(\Omega)}^{3/4}.$$
c_eewcbmp9mkh8
Agmon's inequality
Summary
Agmon's_inequality
In 2D, the first inequality still holds, but not the second: let $u \in H^2(\Omega) \cap H_0^1(\Omega)$, where $\Omega \subset \mathbb{R}^2$. Then Agmon's inequality in 2D states that there exists a constant $C$ such that
$$\|u\|_{L^\infty(\Omega)} \leq C \|u\|_{L^2(\Omega)}^{1/2} \|u\|_{H^2(\Omega)}^{1/2}.$$
For the $n$-dimensional case, choose $s_1$ and $s_2$ such that $s_1 < \tfrac{n}{2} < s_2$.
c_psf2k03y94vi
Bernstein's theorem (polynomials)
Bernstein's inequality
Bernstein's_inequality_(mathematical_analysis) > Bernstein's inequality
In mathematical analysis, Bernstein's inequality states that on the complex plane, within the disk of radius 1, the maximum of the modulus of the derivative of a polynomial of degree n is bounded above by n times the maximum of the modulus of the polynomial itself. Applying the result repeatedly gives, for the k-th derivative,
$$\max_{|z|\leq 1} |P^{(k)}(z)| \leq \frac{n!}{(n-k)!} \cdot \max_{|z|\leq 1} |P(z)|.$$
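A quick numerical sanity check of the inequality on a random degree-5 polynomial, sampling the unit circle (where the maxima over the closed disk are attained); the sampling density and random seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
coeffs = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)

P = np.polynomial.polynomial.Polynomial(coeffs)
dP = P.deriv()

z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 4000))  # points on |z| = 1
max_P = np.abs(P(z)).max()
max_dP = np.abs(dP(z)).max()

print(max_dP <= n * max_P)  # True: max|P'| does not exceed n * max|P|
```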
c_eamlmfaz68yq
Cesàro summation
Summary
Cesaro_summation
In mathematical analysis, Cesàro summation (also known as the Cesàro mean) assigns values to some infinite sums that are not necessarily convergent in the usual sense. The Cesàro sum is defined as the limit, as n tends to infinity, of the sequence of arithmetic means of the first n partial sums of the series. This special case of a matrix summability method is named for the Italian analyst Ernesto Cesàro (1859–1906). The term summation can be misleading, as some statements and proofs regarding Cesàro summation can be said to implicate the Eilenberg–Mazur swindle. For example, it is commonly applied to Grandi's series with the conclusion that the sum of that series is 1/2.
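A short sketch of the definition applied to Grandi's series 1 − 1 + 1 − 1 + …; the helper name is ours.

```python
def cesaro_means(terms):
    """Running arithmetic means of the partial sums of a series."""
    partial, means, total = 0.0, [], 0.0
    for n, a in enumerate(terms, start=1):
        partial += a          # n-th partial sum
        total += partial      # sum of the first n partial sums
        means.append(total / n)
    return means

grandi = [(-1) ** k for k in range(1000)]   # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])             # 0.5, the Cesàro sum of the series
```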
c_d19gfyr2yyjp
Clairaut's equation
Summary
Clairaut's_equation
In mathematical analysis, Clairaut's equation (or the Clairaut equation) is a differential equation of the form
$$y(x) = x \frac{dy}{dx} + f\!\left(\frac{dy}{dx}\right),$$
where $f$ is continuously differentiable. It is a particular case of the Lagrange differential equation. It is named after the French mathematician Alexis Clairaut, who introduced it in 1734.
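A short worked illustration (our own, with the standard choice $f(p) = p^2$, writing $p = dy/dx$): the equation becomes $y = xp + p^2$. Setting $p = C$ constant gives the general solution $y = Cx + C^2$, a one-parameter family of straight lines (indeed $y' = C$ and $xC + C^2 = y$). Differentiating $y = Cx + C^2$ with respect to the parameter and eliminating it gives $0 = x + 2C$, i.e. $C = -x/2$, and substituting back yields the singular solution $y = -x^2/4$, the envelope of the family, which also satisfies $y = xy' + (y')^2$ since $y' = -x/2$.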
c_rjhs7k6yz9e1
Darboux's formula
Summary
Darboux's_formula
In mathematical analysis, Darboux's formula is a formula introduced by Gaston Darboux (1876) for summing infinite series by using integrals or evaluating integrals using infinite series. It is a generalization to the complex plane of the Euler–Maclaurin summation formula, which is used for similar purposes and derived in a similar manner (by repeated integration by parts of a particular choice of integrand). Darboux's formula can also be used to derive the Taylor series from calculus.
c_t3x6zj88hbr4
Dini continuity
Summary
Dini_continuity
In mathematical analysis, Dini continuity is a refinement of continuity. Every Dini continuous function is continuous. Every Lipschitz continuous function is Dini continuous.
c_txjmrujqddki
Ehrenpreis's fundamental principle
Summary
Ehrenpreis's_fundamental_principle
In mathematical analysis, Ehrenpreis's fundamental principle, introduced by Leon Ehrenpreis, states: Every solution of a system (in general, overdetermined) of homogeneous partial differential equations with constant coefficients can be represented as the integral with respect to an appropriate Radon measure over the complex "characteristic variety" of the system.
c_q2y1lwvxeng2
Ivar Ekeland
Variational principle
Ivar_Ekeland > Research > Variational principle
In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exists a nearly optimal solution to a class of optimization problems. Ekeland's variational principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. Ekeland's principle relies on the completeness of the metric space. Ekeland's principle leads to a quick proof of the Caristi fixed point theorem. Ekeland was associated with the University of Paris when he proposed this theorem.
c_oz9d4vi2bgas
Ekeland's variational principle
Summary
Ekeland's_variational_principle
In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exist nearly optimal solutions to some optimization problems. Ekeland's principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. The principle relies on the completeness of the metric space. The principle has been shown to be equivalent to completeness of metric spaces. In proof theory, it is equivalent to $\Pi^1_1\text{-}\mathrm{CA}_0$ over $\mathrm{RCA}_0$, i.e. it is relatively strong. It also leads to a quick proof of the Caristi fixed point theorem.
c_dyvm7d08ho59
Fourier integral operator
Summary
Fourier_integral_operator
In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations. The class of Fourier integral operators contains differential operators as well as classical integral operators as special cases. A Fourier integral operator $T$ is given by
$$(Tf)(x) = \int_{\mathbb{R}^n} e^{2\pi i \Phi(x,\xi)}\, a(x,\xi)\, \hat{f}(\xi)\, d\xi,$$
where $\hat{f}$ denotes the Fourier transform of $f$, $a(x,\xi)$ is a standard symbol which is compactly supported in $x$, and $\Phi$ is real valued and homogeneous of degree $1$ in $\xi$. It is also necessary to require that $\det\left(\frac{\partial^2 \Phi}{\partial x_i\, \partial \xi_j}\right) \neq 0$ on the support of $a$. Under these conditions, if $a$ is of order zero, it is possible to show that $T$ defines a bounded operator from $L^2$ to $L^2$.
c_s63add43eqi1
Fubini's Theorem
Summary
Fubini's_Theorem
In mathematical analysis, Fubini's theorem is a result that gives conditions under which it is possible to compute a double integral by using an iterated integral; it was introduced by Guido Fubini in 1907. One may switch the order of integration if the double integral yields a finite answer when the integrand is replaced by its absolute value. Fubini's theorem implies that the two iterated integrals are equal to the corresponding double integral.
c_9ckvffd75c39
Fubini's Theorem
Summary
Fubini's_Theorem
Tonelli's theorem, introduced by Leonida Tonelli in 1909, is similar, but applies to a non-negative measurable function rather than one integrable over its domain. A related theorem is often called Fubini's theorem for infinite series, which states that if $\{a_{m,n}\}_{m=1,n=1}^{\infty}$ is a doubly-indexed sequence of real numbers, and if $\sum_{(m,n)\in\mathbb{N}\times\mathbb{N}} a_{m,n}$ is absolutely convergent, then
$$\sum_{(m,n)\in\mathbb{N}\times\mathbb{N}} a_{m,n} = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} a_{m,n} = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} a_{m,n}.$$
Although Fubini's theorem for infinite series is a special case of the more general Fubini's theorem, it is not appropriate to characterize it as a logical consequence of Fubini's theorem. This is because some properties of measures, in particular sub-additivity, are often proved using Fubini's theorem for infinite series. In this case, Fubini's general theorem is a logical consequence of Fubini's theorem for infinite series.
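A tiny numerical illustration of the series form of the theorem, using the absolutely convergent double series $a_{m,n} = 2^{-m} 3^{-n}$ (our own choice of example); both iterated sums agree with the closed-form value $1 \cdot \tfrac{1}{2} = \tfrac{1}{2}$.

```python
M = N = 60  # truncation; the tails of this geometric example are negligible

a = lambda m, n: 2.0 ** -m * 3.0 ** -n

rows_first = sum(sum(a(m, n) for n in range(1, N + 1)) for m in range(1, M + 1))
cols_first = sum(sum(a(m, n) for m in range(1, M + 1)) for n in range(1, N + 1))

print(rows_first, cols_first)  # both ~0.5, independent of summation order
```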
c_0dlwcj6anxxa
Glaeser's continuity theorem
Summary
Glaeser's_continuity_theorem
In mathematical analysis, Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class $C^2$. It was introduced in 1963 by Georges Glaeser, and was later simplified by Jean Dieudonné. The theorem states: let $f : U \rightarrow \mathbb{R}_0^{+}$ be a function of class $C^2$ in an open set $U$ contained in $\mathbb{R}^n$; then $\sqrt{f}$ is of class $C^1$ in $U$ if and only if its partial derivatives of first and second order vanish in the zeros of $f$.
c_cqynwwn4gggz
Haar's Tauberian theorem
Summary
Haar's_Tauberian_theorem
In mathematical analysis, Haar's Tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood Tauberian theorem.
c_27ozez4lvb8s
Heine's Reciprocal Square Root Identity
Summary
Heine's_identity
In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine, is a Fourier expansion of a reciprocal square root, which Heine presented as an expansion in the Legendre functions of the second kind $Q_{m-\frac{1}{2}}$, of half-integer degree $m - \tfrac{1}{2}$ and real argument $z$ greater than one. This expression can be generalized for arbitrary half-integer powers, with coefficients involving the Gamma function $\Gamma$.
c_c2sspbmriwe1
Hölder's inequality
Summary
Hölder's_inequality
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces: if $p, q \in [1, \infty]$ satisfy $\tfrac{1}{p} + \tfrac{1}{q} = 1$, then for all measurable real- or complex-valued functions $f$ and $g$ on a measure space, $\|fg\|_1 \leq \|f\|_p \|g\|_q$. The numbers p and q are said to be Hölder conjugates of each other. The special case p = q = 2 gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if ‖fg‖1 is infinite, the right-hand side also being infinite in that case.
c_bksswb0h1cre
Hölder's inequality
Summary
Hölder's_inequality
Conversely, if f is in Lp(μ) and g is in Lq(μ), then the pointwise product fg is in L1(μ). Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space Lp(μ), and also to establish that Lq(μ) is the dual space of Lp(μ) for p ∈ [1, ∞). Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers (1888). Inspired by Rogers' work, Hölder (1889) gave another proof as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named for work of Johan Jensen building on Hölder's work.
c_qsj7xttnjh0u
Leonid Kantorovich
Mathematics
Leonid_Kantorovich > Mathematics
In mathematical analysis, Kantorovich had important results in functional analysis, approximation theory, and operator theory. In particular, Kantorovich formulated some fundamental results in the theory of normed vector lattices, especially in Dedekind complete vector lattices called "K-spaces" which are now referred to as "Kantorovich spaces" in his honor. Kantorovich showed that functional analysis could be used in the analysis of iterative methods, obtaining the Kantorovich inequalities on the convergence rate of the gradient method and of Newton's method (see the Kantorovich theorem). Kantorovich considered infinite-dimensional optimization problems, such as the Kantorovich-Monge problem in transport theory. His analysis proposed the Kantorovich–Rubinstein metric, which is used in probability theory, in the theory of the weak convergence of probability measures.
c_5m7b3uau24ax
Korn's inequality
Summary
Korn's_inequality
In mathematical analysis, Korn's inequality is an inequality concerning the gradient of a vector field that generalizes the following classical theorem: if the gradient of a vector field is skew-symmetric at every point, then the gradient must be equal to a constant skew-symmetric matrix. Korn's theorem is a quantitative version of this statement, which intuitively says that if the gradient of a vector field is on average not far from the space of skew-symmetric matrices, then the gradient must not be far from a particular skew-symmetric matrix. The statement that Korn's inequality generalizes thus arises as a special case of rigidity. In (linear) elasticity theory, the symmetric part of the gradient is a measure of the strain that an elastic body experiences when it is deformed by a given vector-valued function. The inequality is therefore an important tool as an a priori estimate in linear elasticity theory.
c_a1yn0ar7gy9l
Krein's condition
Summary
Krein's_condition
In mathematical analysis, Krein's condition provides a necessary and sufficient condition for the exponential sums
$$\left\{\sum_{k=1}^{n} a_k \exp(i\lambda_k x) : a_k \in \mathbb{C},\ \lambda_k \geq 0\right\}$$
to be dense in a weighted $L^2$ space on the real line. It was discovered by Mark Krein in the 1940s. A corollary, also called Krein's condition, provides a sufficient condition for the indeterminacy of the moment problem.
c_2trdiav66fh8
Lambert summation
Summary
Lambert_summation
In mathematical analysis, Lambert summation is a summability method for a class of divergent series.
c_k951uhrjju6g
Lipschitz function
Summary
Lipschitz_constant
In mathematical analysis, Lipschitz continuity, named after German mathematician Rudolf Lipschitz, is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: there exists a real number such that, for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is not greater than this real number; the smallest such bound is called the Lipschitz constant of the function (and is related to the modulus of uniform continuity). For instance, every function that is defined on an interval and has bounded first derivative is Lipschitz continuous. In the theory of differential equations, Lipschitz continuity is the central condition of the Picard–Lindelöf theorem, which guarantees the existence and uniqueness of the solution to an initial value problem. A special type of Lipschitz continuity, called contraction, is used in the Banach fixed-point theorem. We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line: continuously differentiable ⊂ Lipschitz continuous ⊂ $\alpha$-Hölder continuous, where $0 < \alpha \leq 1$. We also have: Lipschitz continuous ⊂ absolutely continuous ⊂ uniformly continuous.
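A small numerical sketch (the sampled function, interval and names are our own choices) estimating the slope bound in the definition for $f(x) = \sin x$, whose Lipschitz constant on the real line is 1 because $|f'| \le 1$.

```python
import math
import random

f = math.sin
random.seed(0)

# Estimate sup |f(x) - f(y)| / |x - y| over many random pairs of points.
pairs = ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(100_000))
estimate = max(abs(f(x) - f(y)) / abs(x - y) for x, y in pairs if x != y)

print(estimate)  # never exceeds the true Lipschitz constant 1, and comes close
```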
c_afqh2ooya0qr
Littlewood's 4/3 inequality
Summary
Littlewood's_4/3_inequality
In mathematical analysis, Littlewood's 4/3 inequality, named after John Edensor Littlewood, is an inequality that holds for every complex-valued bilinear form defined on $c_0$, the Banach space of scalar sequences that converge to zero. Precisely, let $B : c_0 \times c_0 \to \mathbb{C}$ or $\mathbb{R}$ be a bilinear form. Then the following holds:
$$\left(\sum_{i,j=1}^{\infty} |B(e_i, e_j)|^{4/3}\right)^{3/4} \leq \sqrt{2}\, \|B\|,$$
where $\|B\| = \sup\{|B(x_1, x_2)| : \|x_i\|_{\infty} \leq 1\}$.
c_pohariczva0z
Littlewood's 4/3 inequality
Summary
Littlewood's_4/3_inequality
The exponent 4/3 is optimal, i.e., it cannot be improved by a smaller exponent. It is also known that for real scalars the aforementioned constant is sharp.
c_f1hi546cwm5l
Lorentz space
Summary
Lorentz_space
In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar $L^p$ spaces. The Lorentz spaces are denoted by $L^{p,q}$. Like the $L^p$ spaces, they are characterized by a norm (technically a quasinorm) that encodes information about the "size" of a function, just as the $L^p$ norm does. The two basic qualitative notions of "size" of a function are: how tall is the graph of the function, and how spread out is it. The Lorentz norms provide tighter control over both qualities than the $L^p$ norms, by exponentially rescaling the measure in both the range ($p$) and the domain ($q$). The Lorentz norms, like the $L^p$ norms, are invariant under arbitrary rearrangements of the values of a function.
c_fk2msctwtp51
Mosco convergence
Summary
Mosco_convergence
In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as "weak Γ-liminf and strong Γ-limsup" convergence, since it uses both the weak and strong topologies on a topological vector space X. In finite-dimensional spaces, Mosco convergence coincides with epi-convergence, while in infinite-dimensional ones, Mosco convergence is a strictly stronger property. Mosco convergence is named after the Italian mathematician Umberto Mosco, currently the Harold J. Gay Professor of Mathematics at Worcester Polytechnic Institute.
c_7lbl95eoo576
Netto's theorem
Summary
Netto's_theorem
In mathematical analysis, Netto's theorem states that continuous bijections of smooth manifolds preserve dimension. That is, there does not exist a continuous bijection between two smooth manifolds of different dimension. It is named after Eugen Netto. The case for maps from a higher-dimensional manifold to a one-dimensional manifold was proven by Jacob Lüroth in 1878, using the intermediate value theorem to show that no manifold containing a topological circle can be mapped continuously and bijectively to the real line. Both Netto in 1878, and Georg Cantor in 1879, gave faulty proofs of the general theorem.
c_w3ocvo6ryyq0
Netto's theorem
Summary
Netto's_theorem
The faults were later recognized and corrected. An important special case of this theorem concerns the non-existence of continuous bijections from one-dimensional spaces, such as the real line or unit interval, to two-dimensional spaces, such as the Euclidean plane or unit square. The conditions of the theorem can be relaxed in different ways to obtain interesting classes of functions from one-dimensional spaces to two-dimensional spaces: space-filling curves are surjective continuous functions from one-dimensional spaces to two-dimensional spaces. They cover every point of the plane, or of a unit square, by the image of a line or unit interval.
c_d6r3xdd0ddei
Netto's theorem
Summary
Netto's_theorem
Examples include the Peano curve and Hilbert curve. Neither of these examples has any self-crossings, but by Netto's theorem there are many points of the square that are covered multiple times by these curves. Osgood curves are continuous bijections from one-dimensional spaces to subsets of the plane that have nonzero area.
c_bxalboy05fkt
Netto's theorem
Summary
Netto's_theorem
They form Jordan curves in the plane. However, by Netto's theorem, they cannot cover the entire plane, unit square, or any other two-dimensional region.
c_u9krlzlj22oc
Netto's theorem
Summary
Netto's_theorem
If one relaxes the requirement of continuity, then all smooth manifolds of bounded dimension have equal cardinality, the cardinality of the continuum. Therefore, there exist discontinuous bijections between any two of them, as Georg Cantor showed in 1878. Cantor's result came as a surprise to many mathematicians and kicked off the line of research leading to space-filling curves, Osgood curves, and Netto's theorem.
c_7if1ykfbxx4s
Netto's theorem
Summary
Netto's_theorem
A near-bijection from the unit square to the unit interval can be obtained by interleaving the digits of the decimal representations of the Cartesian coordinates of points in the square. The ambiguities of decimal notation, exemplified by the two decimal representations of 1 = 0.999..., cause this to be an injection rather than a bijection, but this issue can be repaired by using the Schröder–Bernstein theorem.
c_a1blqtqa73d7
Parseval's formula
Summary
Parseval's_formula
In mathematical analysis, Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the summability of the Fourier series of a function. Geometrically, it is a generalized Pythagorean theorem for inner-product spaces (which can have an uncountable infinity of basis vectors). Informally, the identity asserts that the sum of squares of the Fourier coefficients $c_n$ of a function $f$ is equal to the integral of the square of the function. More formally, the result holds as stated provided $f$ is a square-integrable function or, more generally, lies in the Lp space $L^2$.
c_tq233r1211h2
Parseval's formula
Summary
Parseval's_formula
A similar result is the Plancherel theorem, which asserts that the integral of the square of the Fourier transform of a function is equal to the integral of the square of the function itself, in one dimension for $f \in L^2(\mathbb{R})$. Another similar identity gives the integral of the fourth power of a function $f \in L^4$ in terms of its Fourier coefficients, given that $f$ has a finite-length discrete Fourier transform with $M$ coefficients $c \in \mathbb{C}$; if $c \in \mathbb{R}$ the identity simplifies further.
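A brief numerical illustration using the discrete analogue of these identities (Parseval/Plancherel for the DFT), under numpy's unnormalized FFT convention, where the transform-side sum carries a factor 1/N; the random test signal is our own choice.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
X = np.fft.fft(x)  # unnormalized DFT

lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / len(x)  # 1/N factor for this convention

print(np.isclose(lhs, rhs))  # True: the "energy" is the same in both domains
```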
c_9jg0hvdgilnm
Rademacher's theorem
Summary
Rademacher's_theorem
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f: U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives.
c_m71f88gmogt5
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
In mathematical analysis, Schwarz's theorem (or Clairaut's theorem on equality of mixed partials), named after Alexis Clairaut and Hermann Schwarz, states that for a function $f \colon \Omega \to \mathbb{R}$ defined on a set $\Omega \subset \mathbb{R}^n$, if $\mathbf{p} \in \mathbb{R}^n$ is a point such that some neighborhood of $\mathbf{p}$ is contained in $\Omega$ and $f$ has continuous second partial derivatives on that neighborhood of $\mathbf{p}$, then for all $i$ and $j$ in $\{1, 2, \ldots, n\}$,
$$\frac{\partial^2}{\partial x_i\, \partial x_j} f(\mathbf{p}) = \frac{\partial^2}{\partial x_j\, \partial x_i} f(\mathbf{p}).$$
The partial derivatives of this function commute at that point. One easy way to establish this theorem (in the case where $n = 2$, $i = 1$, and $j = 2$, which readily entails the result in general) is by applying Green's theorem to the gradient of $f$.
c_bz73ee2tkahl
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
An elementary proof for functions on open subsets of the plane is as follows (by a simple reduction, the general case for the theorem of Schwarz easily reduces to the planar case). Let $f(x, y)$ be a differentiable function on an open rectangle $\Omega$ containing a point $(a, b)$ and suppose that $df$ is continuous with continuous $\partial_x \partial_y f$ and $\partial_y \partial_x f$ over $\Omega$.
c_jmd3ksciql6k
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
Define
$$\begin{aligned} u(h, k) &= f(a+h,\, b+k) - f(a+h,\, b),\\ v(h, k) &= f(a+h,\, b+k) - f(a,\, b+k),\\ w(h, k) &= f(a+h,\, b+k) - f(a+h,\, b) - f(a,\, b+k) + f(a,\, b). \end{aligned}$$
These functions are defined for $|h|, |k| < \varepsilon$, where $\varepsilon > 0$ and $[a - \varepsilon, a + \varepsilon] \times [b - \varepsilon, b + \varepsilon]$ is contained in $\Omega$.
c_uacz48ehdpx4
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
Applying the mean value theorem twice to $w$, in the two possible orders, gives, for suitable $\theta, \theta', \phi, \phi'$ in $(0, 1)$,
$$\begin{aligned} hk\, \partial_y \partial_x f(a + \theta h,\, b + \theta' k) &= hk\, \partial_x \partial_y f(a + \phi' h,\, b + \phi k),\\ \partial_y \partial_x f(a + \theta h,\, b + \theta' k) &= \partial_x \partial_y f(a + \phi' h,\, b + \phi k). \end{aligned}$$
Letting $h, k$ tend to zero in the last equality, the continuity assumptions on $\partial_y \partial_x f$ and $\partial_x \partial_y f$ now imply that
$$\frac{\partial^2}{\partial x \partial y} f(a, b) = \frac{\partial^2}{\partial y \partial x} f(a, b).$$
This account is a straightforward classical method found in many textbooks, for example in Burkill, Apostol and Rudin. Although the derivation above is elementary, the approach can also be viewed from a more conceptual perspective so that the result becomes more apparent.
c_9o2mb79t7zn7
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
Indeed the difference operators $\Delta_x^t, \Delta_y^t$ commute and $\Delta_x^t f, \Delta_y^t f$ tend to $\partial_x f, \partial_y f$ as $t$ tends to 0, with a similar statement for second-order operators. Here, for $z$ a vector in the plane and $u$ a directional vector $\tbinom{1}{0}$ or $\tbinom{0}{1}$, the difference operator is defined by
$$\Delta_u^t f(z) = \frac{f(z + tu) - f(z)}{t}.$$
c_evcpvsoaz69z
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
By the fundamental theorem of calculus for $C^1$ functions $f$ on an open interval $I$ with $(a, b) \subset I$,
$$\int_a^b f'(x)\, dx = f(b) - f(a).$$
Hence
$$|f(b) - f(a)| \leq (b - a)\, \sup_{c \in (a, b)} |f'(c)|.$$
This is a generalized version of the mean value theorem.
c_9semhsxo22tk
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
Recall that the elementary discussion on maxima or minima for real-valued functions implies that if $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then there is a point $c$ in $(a, b)$ such that
$$\frac{f(b) - f(a)}{b - a} = f'(c).$$
For vector-valued functions with $V$ a finite-dimensional normed space, there is no analogue of the equality above; indeed it fails.
c_r5b61lemb5cj
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
$$\left|\Delta_1^t \Delta_2^t f(x_0, y_0) - D_1 D_2 f(x_0, y_0)\right| \leq \sup_{0 \leq s \leq 1} \left|\Delta_1^t D_2 f(x_0, y_0 + ts) - D_1 D_2 f(x_0, y_0)\right| \leq \sup_{0 \leq r, s \leq 1} \left|D_1 D_2 f(x_0 + tr, y_0 + ts) - D_1 D_2 f(x_0, y_0)\right|.$$
Thus $\Delta_1^t \Delta_2^t f(x_0, y_0)$ tends to $D_1 D_2 f(x_0, y_0)$ as $t$ tends to 0. The same argument shows that $\Delta_2^t \Delta_1^t f(x_0, y_0)$ tends to $D_2 D_1 f(x_0, y_0)$.
c_un02waz6avk7
Equality of mixed partials
Schwarz's theorem
Equality_of_mixed_partials > Schwarz's theorem
Hence, since the difference operators commute, so do the partial differential operators $D_1$ and $D_2$, as claimed. Remark: by two applications of the classical mean value theorem,
$$\Delta_1^t \Delta_2^t f(x_0, y_0) = D_1 D_2 f(x_0 + t\theta,\, y_0 + t\theta')$$
for some $\theta$ and $\theta'$ in $(0, 1)$. Thus the first elementary proof can be reinterpreted using difference operators. Conversely, instead of using the generalized mean value theorem in the second proof, the classical mean value theorem could be used.
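A brief numerical illustration of the conclusion (not of either proof): the mixed partials of a smooth test function, each computed by differentiating analytically in one variable and finite-differencing in the other, agree to within discretization error. The test function, step size and names are our own choices.

```python
import math

def f(x, y):            # smooth test function f(x, y) = exp(x*y) * sin(y)
    return math.exp(x * y) * math.sin(y)

def fx(x, y):           # analytic df/dx
    return y * math.exp(x * y) * math.sin(y)

def fy(x, y):           # analytic df/dy
    return math.exp(x * y) * (x * math.sin(y) + math.cos(y))

x0, y0, h = 0.7, 1.3, 1e-6

d_dy_of_fx = (fx(x0, y0 + h) - fx(x0, y0 - h)) / (2 * h)   # d/dy (df/dx)
d_dx_of_fy = (fy(x0 + h, y0) - fy(x0 - h, y0)) / (2 * h)   # d/dx (df/dy)

print(d_dy_of_fx, d_dx_of_fy)   # the two mixed partials agree
```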
c_pniensr7fpjc
Strichartz estimate
Summary
Strichartz_estimate
In mathematical analysis, Strichartz estimates are a family of inequalities for linear dispersive partial differential equations. These inequalities establish size and decay of solutions in mixed norm Lebesgue spaces. They were first noted by Robert Strichartz and arose out of connections to the Fourier restriction problem.
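As a concrete illustration not stated in the excerpt above, a standard example is the homogeneous Strichartz estimate for the free Schrödinger propagator on R d (the admissibility conditions and the constant are the usual ones, quoted here as background facts rather than from the source):

```latex
% Homogeneous Strichartz estimate for e^{it\Delta} on \mathbb{R}^d, valid for admissible pairs
% (q, r) with 2/q + d/r = d/2, q, r \ge 2 and (q, r, d) \ne (2, \infty, 2):
\left\| e^{it\Delta} u_0 \right\|_{L^q_t L^r_x(\mathbb{R}\times\mathbb{R}^d)}
  \le C_{q,r,d}\, \| u_0 \|_{L^2(\mathbb{R}^d)} .
```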
c_fsehp4treneu
Tannery's theorem
Summary
Tannery's_theorem
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations. It is named after Jules Tannery.
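A classical application, added here for illustration: Tannery's theorem justifies the term-by-term limit in the binomial expansion of ( 1 + x / n ) n , since each term tends to x k / k ! as n → ∞ and is dominated by the summable bound | x | k / k ! .

```latex
\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{\!n}
  = \lim_{n\to\infty}\sum_{k=0}^{n}\binom{n}{k}\frac{x^{k}}{n^{k}}
  = \sum_{k=0}^{\infty}\frac{x^{k}}{k!}
  = e^{x}.
```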
c_l08bl1uqrjot
Trudinger's theorem
Summary
Trudinger's_theorem
In mathematical analysis, Trudinger's theorem or the Trudinger inequality (also sometimes called the Moser–Trudinger inequality) is a result of functional analysis on Sobolev spaces. It is named after Neil Trudinger (and Jürgen Moser). It provides an inequality between a certain Sobolev space norm and an Orlicz space norm of a function.
c_2ae5ylnf1tc2
Trudinger's theorem
Summary
Trudinger's_theorem
The inequality is a limiting case of Sobolev imbedding and can be stated as the following theorem: Let Ω {\displaystyle \Omega } be a bounded domain in R n {\displaystyle \mathbb {R} ^{n}} satisfying the cone condition. Let m p = n {\displaystyle mp=n} and p > 1 {\displaystyle p>1} . Set A ( t ) = exp ⁡ ( t n / ( n − m ) ) − 1.
c_mwnm3moikf6q
Trudinger's theorem
Summary
Trudinger's_theorem
{\displaystyle A(t)=\exp \left(t^{n/(n-m)}\right)-1.} Then there exists the embedding W m , p ( Ω ) ↪ L A ( Ω ) {\displaystyle W^{m,p}(\Omega )\hookrightarrow L_{A}(\Omega )} where L A ( Ω ) = { u ∈ M f ( Ω ): ‖ u ‖ A , Ω = inf { k > 0: ∫ Ω A ( | u ( x ) | k ) d x ≤ 1 } < ∞ } . {\displaystyle L_{A}(\Omega )=\left\{u\in M_{f}(\Omega ):\|u\|_{A,\Omega }=\inf\{k>0:\int _{\Omega }A\left({\frac {|u(x)|}{k}}\right)~dx\leq 1\}<\infty \right\}.} The space L A ( Ω ) {\displaystyle L_{A}(\Omega )} is an example of an Orlicz space.
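For orientation, an added specialization consistent with the statement above: the most commonly cited case is m = 1 and p = n, where the exponent n / ( n − m ) becomes n / ( n − 1 ); in particular, for n = 2 one obtains the embedding of W 1 , 2 ( Ω ) into the Orlicz space generated by A ( t ) = e t 2 − 1.

```latex
% Special case m = 1, p = n = 2 of the theorem above:
A(t) = e^{t^{2}} - 1,
\qquad
W^{1,2}(\Omega) \hookrightarrow L_{A}(\Omega).
```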
c_v0bkgoviys9w
Wiener's Tauberian theorem
Summary
Wiener's_Tauberian_theorem
In mathematical analysis, Wiener's Tauberian theorem is any of several related results proved by Norbert Wiener in 1932. They provide a necessary and sufficient condition under which any function in L 1 {\displaystyle L^{1}} or L 2 {\displaystyle L^{2}} can be approximated by linear combinations of translations of a given function. Informally, if the Fourier transform of a function f {\displaystyle f} vanishes on a certain set Z {\displaystyle Z} , the Fourier transform of any linear combination of translations of f {\displaystyle f} also vanishes on Z {\displaystyle Z} . Therefore, the linear combinations of translations of f {\displaystyle f} cannot approximate a function whose Fourier transform does not vanish on Z {\displaystyle Z} . Wiener's theorems make this precise, stating that linear combinations of translations of f {\displaystyle f} are dense if and only if the zero set of the Fourier transform of f {\displaystyle f} is empty (in the case of L 1 {\displaystyle L^{1}} ) or of Lebesgue measure zero (in the case of L 2 {\displaystyle L^{2}} ). Gelfand reformulated Wiener's theorem in terms of commutative C*-algebras; in that formulation, it states that the spectrum of the L 1 {\displaystyle L^{1}} group ring L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} of the group R {\displaystyle \mathbb {R} } of real numbers is the dual group of R {\displaystyle \mathbb {R} } . A similar result is true when R {\displaystyle \mathbb {R} } is replaced by any locally compact abelian group.
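A standard concrete example, added for illustration (using the convention f̂ ( ξ ) = ∫ f ( x ) e − i ξ x d x ): the Gaussian has a nowhere-vanishing Fourier transform, so by Wiener's theorem the linear span of its translates is dense in L 1 ( R ) .

```latex
f(x) = e^{-x^{2}}, \qquad
\hat{f}(\xi) = \int_{\mathbb{R}} e^{-x^{2}} e^{-i\xi x}\, dx
  = \sqrt{\pi}\, e^{-\xi^{2}/4} \neq 0
  \quad \text{for all } \xi \in \mathbb{R}.
```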
c_dtj6o8fs4p2h
Zorich's theorem
Summary
Zorich's_theorem
In mathematical analysis, Zorich's theorem was proved by Vladimir A. Zorich in 1967. The result was conjectured by M. A. Lavrentev in 1938.
c_o7ow7trwjie0
Banach limit
Summary
Banach_limit
In mathematical analysis, a Banach limit is a continuous linear functional ϕ: ℓ ∞ → C {\displaystyle \phi :\ell ^{\infty }\to \mathbb {C} } defined on the Banach space ℓ ∞ {\displaystyle \ell ^{\infty }} of all bounded complex-valued sequences such that for all sequences x = ( x n ) {\displaystyle x=(x_{n})} , y = ( y n ) {\displaystyle y=(y_{n})} in ℓ ∞ {\displaystyle \ell ^{\infty }} , and complex numbers α {\displaystyle \alpha }: ϕ ( α x + y ) = α ϕ ( x ) + ϕ ( y ) {\displaystyle \phi (\alpha x+y)=\alpha \phi (x)+\phi (y)} (linearity); if x n ≥ 0 {\displaystyle x_{n}\geq 0} for all n ∈ N {\displaystyle n\in \mathbb {N} } , then ϕ ( x ) ≥ 0 {\displaystyle \phi (x)\geq 0} (positivity); ϕ ( x ) = ϕ ( S x ) {\displaystyle \phi (x)=\phi (Sx)} , where S {\displaystyle S} is the shift operator defined by ( S x ) n = x n + 1 {\displaystyle (Sx)_{n}=x_{n+1}} (shift-invariance); if x {\displaystyle x} is a convergent sequence, then ϕ ( x ) = lim x {\displaystyle \phi (x)=\lim x} . Hence, ϕ {\displaystyle \phi } is an extension of the continuous functional lim: c → C {\displaystyle \lim :c\to \mathbb {C} } , where c ⊂ ℓ ∞ {\displaystyle c\subset \ell ^{\infty }} is the complex vector space of all sequences which converge to a (usual) limit in C {\displaystyle \mathbb {C} } . In other words, a Banach limit extends the usual limits, is linear, shift-invariant and positive. However, there exist sequences for which the values of two Banach limits do not agree. We say that the Banach limit is not uniquely determined in this case.
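A standard worked example of these axioms in action, added for illustration: for the non-convergent sequence x = ( 1 , 0 , 1 , 0 , … ), every Banach limit takes the value 1/2.

```latex
% x = (1, 0, 1, 0, \dots): shift-invariance gives \phi(x) = \phi(Sx), and x + Sx = (1, 1, 1, \dots)
% converges to 1, so by linearity and the extension property
2\,\phi(x) = \phi(x) + \phi(Sx) = \phi(x + Sx) = \lim_{n\to\infty}(x + Sx)_{n} = 1,
\qquad\text{hence}\qquad \phi(x) = \tfrac{1}{2}.
```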
c_2teeh2lwa914
Banach limit
Summary
Banach_limit
As a consequence of the above properties, a real-valued Banach limit also satisfies: lim inf n → ∞ x n ≤ ϕ ( x ) ≤ lim sup n → ∞ x n . {\displaystyle \liminf _{n\to \infty }x_{n}\leq \phi (x)\leq \limsup _{n\to \infty }x_{n}.} The existence of Banach limits is usually proved using the Hahn–Banach theorem (the analyst's approach) or using ultrafilters (an approach more frequent in set-theoretical expositions). These proofs necessarily use the axiom of choice (a so-called non-effective proof).
c_d08fbltnl4l6
Besicovitch covering theorem
Summary
Besicovitch_covering_theorem
In mathematical analysis, a Besicovitch cover, named after Abram Samoilovitch Besicovitch, is an open cover of a subset E of the Euclidean space RN by balls such that each point of E is the center of some ball in the cover. The Besicovitch covering theorem asserts that there exists a constant cN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there are cN subcollections of balls A1 = {Bn1}, …, AcN = {BncN} contained in F such that each collection Ai consists of disjoint balls, and E ⊆ ⋃ i = 1 c N ⋃ B ∈ A i B . {\displaystyle E\subseteq \bigcup _{i=1}^{c_{N}}\bigcup _{B\in A_{i}}B.}
c_kbdtth82xqnc
Besicovitch covering theorem
Summary
Besicovitch_covering_theorem
Let G denote the subcollection of F consisting of all balls from the cN disjoint families A1,...,AcN. The following, less precise statement is clearly true: every point x ∈ RN belongs to at most cN different balls from the subcollection G, and G remains a cover for E (every point y ∈ E belongs to at least one ball from the subcollection G). This property actually gives an equivalent form of the theorem (except for the value of the constant). There exists a constant bN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there is a subcollection G of F such that G is a cover of the set E and every point x ∈ E belongs to at most bN different balls from the subcover G. In other words, the function SG, equal to the sum of the indicator functions of the balls in G, is bounded below by 1E and bounded on RN by the constant bN, 1 E ≤ S G := ∑ B ∈ G 1 B ≤ b N . {\displaystyle \mathbf {1} _{E}\leq S_{\mathbf {G} }:=\sum _{B\in \mathbf {G} }\mathbf {1} _{B}\leq b_{N}.}
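A minimal one-dimensional sketch of the bounded-overlap function S G (the intervals and sample points below are illustrative assumptions, not data from the theorem):

```python
import numpy as np

def overlap_function(balls, xs):
    """S_G(x): number of open balls (center, radius) in the subcollection G containing x (1-D case)."""
    xs = np.asarray(xs, dtype=float)
    return sum((np.abs(xs - c) < r).astype(int) for c, r in balls)

# Hypothetical subcollection G of intervals covering E = [0, 1].
G = [(0.0, 0.3), (0.4, 0.3), (0.8, 0.3), (1.0, 0.3)]
xs = np.linspace(0.0, 1.0, 11)      # sample points of E
S = overlap_function(G, xs)
print(S.min() >= 1)                 # True: G covers the sampled points of E
print(S.max())                      # small overlap count, playing the role of the bound b_N
```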
c_feic4zhd7vc2
Contraction semigroup
Summary
Quasicontraction_semigroup
In mathematical analysis, a C0-semigroup Γ(t), t ≥ 0, is called a quasicontraction semigroup if there is a constant ω such that ||Γ(t)|| ≤ exp(ωt) for all t ≥ 0. Γ(t) is called a contraction semigroup if ||Γ(t)|| ≤ 1 for all t ≥ 0.
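A minimal finite-dimensional sketch, added for illustration (the matrix A below is an arbitrary choice, not from the source): for a bounded generator A, the semigroup Γ(t) = exp(tA) is quasicontractive with ω = ||A||, since ||exp(tA)|| ≤ exp(t ||A||).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative bounded generator (an assumption of this sketch).
A = np.array([[0.0,  2.0],
              [-1.0, -0.5]])
omega = np.linalg.norm(A, 2)        # spectral norm ||A||

for t in (0.1, 0.5, 1.0, 2.0):
    gamma_t = expm(t * A)           # Gamma(t) = e^{tA}
    ok = np.linalg.norm(gamma_t, 2) <= np.exp(omega * t) + 1e-12
    print(t, ok)                    # True for each t: the quasicontraction bound holds
```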