Columns: content_id, page_title, section_title, breadcrumb, text
c_zu6x6i8ok53a
Spaces of test functions and distributions
Summary
Spaces_of_test_functions_and_distributions
In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset U ⊆ ℝ^n that have compact support. The space of all test functions, denoted by C_c^∞(U), is endowed with a certain topology, called the canonical LF-topology, that makes C_c^∞(U) into a complete Hausdorff locally convex TVS. The strong dual space of C_c^∞(U) is called the space of distributions on U and is denoted by D′(U) := (C_c^∞(U))′_b, where the "b" subscript indicates that the continuous dual space of C_c^∞(U), denoted by (C_c^∞(U))′, is endowed with the strong dual topology.
c_mp6iow7lt3zv
Spaces of test functions and distributions
Summary
Spaces_of_test_functions_and_distributions
There are other possible choices for the space of test functions, which lead to different spaces of distributions. If U = ℝ^n, then the use of Schwartz functions as test functions gives rise to a certain subspace of D′(U) whose elements are called tempered distributions. These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions.
c_nyj1sgp3hn89
Spaces of test functions and distributions
Summary
Spaces_of_test_functions_and_distributions
The set of tempered distributions forms a vector subspace of the space of distributions D′(U) and is thus one example of a space of distributions; there are many other spaces of distributions. There also exist other major classes of test functions that are not subsets of C_c^∞(U), such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support. Use of analytic test functions leads to Sato's theory of hyperfunctions.
c_hqcg9ef3sika
Staircase paradox
Summary
Staircase_paradox
In mathematical analysis, the staircase paradox is a pathological example showing that limits of curves do not necessarily preserve their length. It consists of a sequence of "staircase" polygonal chains in a unit square, formed from horizontal and vertical line segments of decreasing length, so that these staircases converge uniformly to the diagonal of the square. However, each staircase has length two, while the length of the diagonal is the square root of 2, so the sequence of staircase lengths does not converge to the length of the diagonal. Martin Gardner calls this "an ancient geometrical paradox".
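The paradox is easy to check numerically. A minimal sketch in plain Python (the step counts are illustrative): every n-step staircase has length exactly 2, while its uniform distance to the diagonal shrinks like 1/n.

```python
import math

def staircase_length(n):
    # A staircase from (0,0) to (1,1) with n steps consists of
    # 2n axis-parallel segments, each of length 1/n: total length 2.
    return 2 * n * (1.0 / n)

def max_distance_to_diagonal(n):
    # The farthest a staircase point gets from the diagonal y = x is at
    # a step corner, offset by 1/n in one coordinate: (1/n)/sqrt(2).
    return (1.0 / n) / math.sqrt(2)

diagonal = math.sqrt(2)  # length of the limit curve, ~1.4142

for n in (1, 10, 1000):
    print(n, staircase_length(n), max_distance_to_diagonal(n))
```

So the staircases converge uniformly to the diagonal, yet their lengths stay at 2 and never approach √2.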
c_qe4optiqci0w
Staircase paradox
Summary
Staircase_paradox
It shows that, for curves under uniform convergence, the length of a curve is not a continuous function of the curve. For any smooth curve, polygonal chains with segment lengths decreasing to zero, connecting consecutive vertices along the curve, always converge to the arc length. The failure of the staircase curves to converge to the correct length can be explained by the fact that some of their vertices do not lie on the diagonal. In higher dimensions, the Schwarz lantern provides an analogous example showing that polyhedral surfaces that converge pointwise to a curved surface do not necessarily converge to its area, even when the vertices all lie on the surface. As well as highlighting the need for careful definitions of arc length in mathematics education, the paradox has applications in digital geometry, where it motivates methods of estimating the perimeter of pixelated shapes that do not merely sum the lengths of boundaries between pixels.
c_vyukr0vcpidx
Chebyshev norm
Summary
Uniform_metric
In mathematical analysis, the uniform norm (or sup norm) assigns to real- or complex-valued bounded functions f defined on a set S the non-negative number ‖f‖_∞ = ‖f‖_{∞,S} = sup{ |f(s)| : s ∈ S }. This norm is also called the supremum norm, the Chebyshev norm, the infinity norm, or, when the supremum is in fact the maximum, the max norm.
c_472jo8hqam7i
Chebyshev norm
Summary
Uniform_metric
The name "uniform norm" derives from the fact that a sequence of functions {f_n} converges to f under the metric derived from the uniform norm if and only if f_n converges to f uniformly. If f is a continuous function on a closed and bounded interval, or more generally a compact set, then it is bounded and the supremum in the above definition is attained by the Weierstrass extreme value theorem, so we can replace the supremum by the maximum. In this case, the norm is also called the maximum norm. In particular, for a vector x = (x_1, x_2, …, x_n) in finite-dimensional coordinate space, it takes the form ‖x‖_∞ := max(|x_1|, …, |x_n|).
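As a quick illustration, here is a minimal sketch in plain Python: the max norm of a coordinate vector, and a grid approximation of the sup norm of a function on a compact interval (the sample function and grid size are illustrative choices, not from the source).

```python
import math

def sup_norm(values):
    # Uniform (Chebyshev/max) norm: supremum of the absolute values,
    # which is a maximum for any finite family.
    return max(abs(v) for v in values)

# Max norm of a vector in finite-dimensional coordinate space.
x = (3, -7, 2)
vec_norm = sup_norm(x)  # 7

# For a continuous function on a compact set, sampling on a fine grid
# approximates (and here, since the max is attained at a grid point,
# equals) the sup norm of |f|.
grid = [k / 1000 for k in range(1001)]
f_norm = sup_norm(math.sin(2 * math.pi * t) for t in grid)
```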
c_scxuwnj5gvkt
Universal chord theorem
Summary
Universal_chord_theorem
In mathematical analysis, the universal chord theorem states that if a function f is continuous on [a, b] and satisfies f(a) = f(b), then for every natural number n, there exists some x ∈ [a, b − (b−a)/n] such that f(x) = f(x + (b−a)/n).
c_bbjatgrhdwxs
Multimedia
Mathematical and scientific research
Multi_Format_Publishing > Usage/application > Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.
c_rv1h5rfyh9dc
Multimedia
Mathematical and scientific research
Multi_Format_Publishing > Usage/application > Mathematical and scientific research
One well-known example of this being applied is the movie Interstellar, where executive producer Kip Thorne helped create one of the most realistic depictions of a black hole in film. The visual effects team under Paul Franklin took Kip Thorne's mathematical data and applied it in their own visual effects engine, called "Double Negative Gravitational Renderer" (a.k.a. "Gargantua"), to create a "real" black hole used in the final cut. The visual effects team later went on to publish a study of the black hole.
c_h3rvm1ertmi1
Q-Pochhammer symbol
Summary
Q-Pochhammer_symbol
In the mathematical area of combinatorics, the q-Pochhammer symbol, also called the q-shifted factorial, is the product (a; q)_n = ∏_{k=0}^{n−1} (1 − aq^k), with (a; q)_0 = 1. It is a q-analog of the Pochhammer symbol (x)_n = x(x+1)⋯(x+n−1), in the sense that lim_{q→1} (q^x; q)_n / (1−q)^n = (x)_n. The q-Pochhammer symbol is a major building block in the construction of q-analogs; for instance, in the theory of basic hypergeometric series, it plays the role that the ordinary Pochhammer symbol plays in the theory of generalized hypergeometric series. Unlike the ordinary Pochhammer symbol, the q-Pochhammer symbol can be extended to an infinite product: (a; q)_∞ = ∏_{k=0}^{∞} (1 − aq^k). This is an analytic function of q in the interior of the unit disk, and can also be considered as a formal power series in q. The special case φ(q) = (q; q)_∞ = ∏_{k=1}^{∞} (1 − q^k) is known as Euler's function, and is important in combinatorics, number theory, and the theory of modular forms.
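A small numerical sketch in plain Python (the sample values x = 2, n = 4 are illustrative) of the finite product and of the q → 1 limit relating it to the ordinary Pochhammer symbol:

```python
def q_pochhammer(a, q, n):
    # (a; q)_n = product_{k=0}^{n-1} (1 - a*q**k), with (a; q)_0 = 1.
    result = 1.0
    for k in range(n):
        result *= 1 - a * q**k
    return result

def pochhammer(x, n):
    # Ordinary rising factorial (x)_n = x(x+1)...(x+n-1).
    result = 1.0
    for k in range(n):
        result *= x + k
    return result

# q-analog sense: (q^x; q)_n / (1-q)^n -> (x)_n as q -> 1.
x, n, q = 2.0, 4, 0.9999
lhs = q_pochhammer(q**x, q, n) / (1 - q)**n
print(lhs, pochhammer(x, n))  # both close to 2*3*4*5 = 120
```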
c_w8rc3xari60y
Community matrix
Summary
Community_matrix
In mathematical biology, the community matrix is the linearization of a generalized Lotka–Volterra equation at an equilibrium point. The eigenvalues of the community matrix determine the stability of the equilibrium point. For example, the Lotka–Volterra predator–prey model is dx/dt = x(α − βy), dy/dt = −y(γ − δx), where x(t) denotes the number of prey, y(t) the number of predators, and α, β, γ and δ are constants.
c_ihoq9flspnif
Community matrix
Summary
Community_matrix
By the Hartman–Grobman theorem the non-linear system is topologically equivalent to a linearization of the system about an equilibrium point (x*, y*), which has the form [du/dt, dv/dt]ᵀ = A [u, v]ᵀ, where u = x − x* and v = y − y*. In mathematical biology, the Jacobian matrix A evaluated at the equilibrium point (x*, y*) is called the community matrix. By the stable manifold theorem, if one or both eigenvalues of A have positive real part then the equilibrium is unstable, but if all eigenvalues have negative real part then it is stable.
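As a sketch in plain Python (parameter values are illustrative), the community matrix of the predator–prey model above at the coexistence equilibrium (x*, y*) = (γ/δ, α/β) can be written out by hand, and its eigenvalues come out purely imaginary, the familiar neutrally stable center:

```python
import cmath

def community_matrix(alpha, beta, gamma, delta):
    # Jacobian of dx/dt = x(alpha - beta*y), dy/dt = -y(gamma - delta*x)
    # evaluated at the coexistence equilibrium (gamma/delta, alpha/beta):
    #   [[alpha - beta*y*, -beta*x*], [delta*y*, -(gamma - delta*x*)]]
    xs, ys = gamma / delta, alpha / beta
    return [[alpha - beta * ys, -beta * xs],
            [delta * ys, -(gamma - delta * xs)]]

def eigenvalues_2x2(m):
    # Roots of lambda^2 - tr*lambda + det = 0, returned as complex numbers.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

A = community_matrix(1.0, 0.5, 0.8, 0.2)
lam1, lam2 = eigenvalues_2x2(A)
# Eigenvalues are +- i*sqrt(alpha*gamma): zero real part, a center.
```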
c_j9khdlunmqoa
Pauline van den Driessche
Contributions
Pauline_van_den_Driessche > Contributions
In mathematical biology, van den Driessche's contributions include important work on delay differential equations and on Hopf bifurcations, and the effects of changing population size and immigration on epidemics. She has also done more fundamental research in linear algebra, motivated by applications in mathematical biology. Her work in this area includes pioneering contributions to combinatorial matrix theory, in which she proved connections between the sign pattern of a matrix and its stability, as well as results on matrix decomposition.
c_lgw8pphziufq
Transylvania lottery
Summary
Transylvania_lottery
In mathematical combinatorics, the Transylvania lottery is a lottery in which players select three numbers from 1 to 14 for each ticket, and then three numbers are chosen randomly. A ticket wins if two of its numbers match the random ones. The problem asks how many tickets the player must buy in order to be certain of winning. (Javier Martínez, Gloria Gutiérrez & Pablo Cordero et al. 2008, p. 85) (Mazur 2010, p. 280, problem 15) An upper bound can be given using the Fano plane, with a collection of 14 tickets in two sets of seven.
c_xskmiar6tw57
Transylvania lottery
Summary
Transylvania_lottery
Each set of seven uses every line of a Fano plane, labelled with the numbers 1 to 7, and 8 to 14. At least two of the three randomly chosen numbers must be in one Fano plane set, and any two points on a Fano plane are on a line, so there will be a ticket in the collection containing those two numbers. There is a (6/13)·(5/12) = 5/26 chance that all three randomly chosen numbers are in the same Fano plane set. In this case, there is a 1/5 chance that they are on a line, and hence all three numbers are on one ticket; otherwise each of the three pairs is on a different ticket.
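The Fano-plane construction is easy to verify exhaustively. A sketch in plain Python (the particular line set below is one standard labelling of the Fano plane, chosen for illustration):

```python
from itertools import combinations

# The 7 lines of a Fano plane on points 1..7; the second set of tickets
# uses the same lines shifted up by 7 (points 8..14).
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
        {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
tickets = fano + [{p + 7 for p in line} for line in fano]  # 14 tickets

def wins(draw):
    # A draw of three numbers from 1..14 is covered if some ticket
    # shares at least two numbers with it.
    return any(len(t & set(draw)) >= 2 for t in tickets)

# Pigeonhole: at least two of the three drawn numbers fall in the same
# half {1..7} or {8..14}, and every pair in a half lies on some line.
all_covered = all(wins(d) for d in combinations(range(1, 15), 3))
```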
c_en4nfi6mvxym
Radó's theorem (Riemann surfaces)
Summary
Radó's_theorem_(Riemann_surfaces)
In mathematical complex analysis, Radó's theorem, proved by Tibor Radó (1925), states that every connected Riemann surface is second-countable (has a countable base for its topology). The Prüfer surface is an example of a surface with no countable base for the topology, so cannot have the structure of a Riemann surface. The obvious analogue of Radó's theorem in higher dimensions is false: there are 2-dimensional connected complex manifolds that are not second-countable.
c_l4ypjjqbm0vh
Schottky theorem
Summary
Schottky_theorem
In mathematical complex analysis, Schottky's theorem, introduced by Schottky (1904), is a quantitative version of Picard's theorem. It states that for a holomorphic function f in the open unit disk that does not take the values 0 or 1, the value of |f(z)| can be bounded in terms of z and f(0). Schottky's original theorem did not give an explicit bound for f. Ostrowski (1931, 1933) gave some weak explicit bounds. Ahlfors (1938, theorem B) gave a strong explicit bound, showing that if f is holomorphic in the open unit disk and does not take the values 0 or 1, then log|f(z)| ≤ ((1 + |z|)/(1 − |z|)) (7 + max(0, log|f(0)|)). Several authors, such as Jenkins (1955), have given variations of Ahlfors's bound with better constants: in particular Hempel (1980) gave some bounds whose constants are in some sense the best possible.
c_9a1xwybe7kpk
Quasiconformal map
Summary
Quasi-conformal_mapping
In mathematical complex analysis, a quasiconformal mapping, introduced by Grötzsch (1928) and named by Ahlfors (1935), is a homeomorphism between plane domains which to first order takes small circles to small ellipses of bounded eccentricity. Intuitively, let f: D → D′ be an orientation-preserving homeomorphism between open sets in the plane. If f is continuously differentiable, then it is K-quasiconformal if the derivative of f at every point maps circles to ellipses with eccentricity bounded by K.
c_86vmrmh0psr7
Geometric function theory
Quasiconformal maps
Geometric_function_theory > Topics in geometric function theory > Quasiconformal maps
In mathematical complex analysis, a quasiconformal mapping, introduced by Grötzsch (1928) and named by Ahlfors (1935), is a homeomorphism between plane domains which to first order takes small circles to small ellipses of bounded eccentricity. Intuitively, let f: D → D′ be an orientation-preserving homeomorphism between open sets in the plane. If f is continuously differentiable, then it is K-quasiconformal if the derivative of f at every point maps circles to ellipses with eccentricity bounded by K. If K is 0, then the function is conformal.
c_1r5r4mmylhxi
Universal Teichmüller space
Summary
Universal_Teichmüller_space
In mathematical complex analysis, universal Teichmüller space T(1) is a Teichmüller space containing the Teichmüller space T(G) of every Fuchsian group G. It was introduced by Bers (1965) as the set of boundary values of quasiconformal maps of the upper half-plane that fix 0, 1, and ∞.
c_pm1faufk8xvv
Kleinian integer
Summary
Kleinian_integer
In mathematical cryptography, a Kleinian integer is a complex number of the form m + n(1 + √−7)/2, with m and n rational integers. They are named after Felix Klein. The Kleinian integers form a ring called the Kleinian ring, which is the ring of integers in the imaginary quadratic field ℚ(√−7). This ring is a unique factorization domain.
c_97loffc8uqrj
Semi colon
Mathematics
Semi_colon > Mathematics
In the calculus of relations, the semicolon is used in infix notation for the composition of relations: A ; B = {(x, z) : ∃y (xAy ∧ yBz)}. The semicolon, called the Humphrey point in this usage, is sometimes used as the "decimal point" in duodecimal numbers: 54;6₁₂ equals 64.5₁₀.
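The definition of composition translates directly into a set comprehension. A minimal sketch in Python, with made-up example relations:

```python
def compose(A, B):
    # A ; B = {(x, z) : there exists y with (x, y) in A and (y, z) in B}
    return {(x, z) for (x, y1) in A for (y2, z) in B if y1 == y2}

# Illustrative relations: A relates numbers to letters, B letters to numbers.
A = {(1, "a"), (2, "b")}
B = {("a", 10), ("a", 11), ("c", 12)}
print(compose(A, B))  # {(1, 10), (1, 11)}
```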
c_we71p1ov834n
Continuous-time signal
Summary
Continuous_time
In mathematical dynamics, discrete time and continuous time are two alternative frameworks within which variables that evolve over time are modeled.
c_15sm4lpugkb1
Topkis's Theorem
Summary
Topkis's_Theorem
In mathematical economics, Topkis's theorem is a result that is useful for establishing comparative statics. The theorem allows researchers to understand how the optimal value of a choice variable changes when a feature of the environment changes. The result states that if f is supermodular in (x, θ) and D is a lattice, then x*(θ) = argmax_{x∈D} f(x, θ) is nondecreasing in θ. The result is especially helpful for establishing comparative statics results when the objective function is not differentiable. The result is named after Donald M. Topkis.
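A minimal numerical illustration (not from the source; the objective f(x, θ) = θx − x² and the grid D are illustrative choices): f has increasing differences in (x, θ), so it is supermodular on the chain D × ℝ, and the computed maximizer is nondecreasing in θ, as Topkis's theorem predicts.

```python
def argmax_over(D, f, theta):
    # x*(theta) = argmax over x in D of f(x, theta); D is a finite chain,
    # hence a lattice.
    return max(D, key=lambda x: f(x, theta))

# f(x, theta) = theta*x - x**2 has increasing differences in (x, theta):
# the cross difference f(x', t') - f(x, t') - f(x', t) + f(x, t) equals
# (t' - t)(x' - x) >= 0 for x' >= x, t' >= t.
f = lambda x, theta: theta * x - x**2
D = [k / 2 for k in range(11)]  # {0, 0.5, ..., 5}
opts = [argmax_over(D, f, theta) for theta in range(0, 11)]
print(opts)  # a nondecreasing sequence
```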
c_4t2139uerihn
Isoelastic function
Summary
Isoelastic_function
In mathematical economics, an isoelastic function, sometimes called a constant elasticity function, is a function that exhibits a constant elasticity, i.e. has a constant elasticity coefficient. The elasticity is the ratio of the percentage change in the dependent variable to the percentage causative change in the independent variable, in the limit as the changes approach zero in magnitude. For an elasticity coefficient r (which can take on any real value), the function's general form is given by f(x) = kx^r, where k and r are constants. The elasticity is by definition elasticity = (df(x)/dx)·(x/f(x)) = d ln f(x) / d ln x, which for this function simply equals r.
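A quick numerical check in plain Python (k = 3 and r = −1.7 are arbitrary illustrative constants): the elasticity of f(x) = kx^r, estimated by a central difference, is r at every x.

```python
def isoelastic(k, r):
    return lambda x: k * x**r

def elasticity(f, x, h=1e-6):
    # Numerical elasticity: (df/dx) * (x / f(x)), with the derivative
    # approximated by a central difference.
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return dfdx * x / f(x)

f = isoelastic(3.0, -1.7)
for x in (0.5, 1.0, 10.0):
    print(elasticity(f, x))  # approximately -1.7 at every x
```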
c_gdc9wu7g0agl
Applied general equilibrium
Summary
Applied_general_equilibrium
In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide "a general method for the explicit numerical solution of the neoclassical model" (Scarf with Hansen 1973: 1). Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market. In Scarf's words: "Brouwer's fixed point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping" (Scarf 1967a: 1326).
c_lwrrej6yijxf
Applied general equilibrium
Summary
Applied_general_equilibrium
Scarf never built an AGE model, but hinted that "these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment" (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a toolbox, where the price vector could be solved for any changes in policies (or exogenous shocks), giving the equilibrium "adjustments" needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and then was developed through the 1970s by Scarf's students and others. Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s.
c_wyuqdqiuiw8w
Applied general equilibrium
Summary
Applied_general_equilibrium
Earlier analytic work with these models has examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models, including those discussed here, provide numerical estimates of efficiency and distributional effects within the same framework.
c_low86vcaqmj5
Applied general equilibrium
Summary
Applied_general_equilibrium
Scarf's fixed-point method was a breakthrough in the mathematics of computation generally, and specifically in optimization and computational economics. Later researchers continued to develop iterative methods for computing fixed points, both for topological models like Scarf's and for models described by functions with continuous second derivatives or convexity or both. Of course, "global Newton methods" for essentially convex and smooth functions and path-following methods for diffeomorphisms converged faster than robust algorithms for continuous functions, when the smooth methods are applicable.
c_s82u7aynzbj9
Arrow–Debreu model
Summary
Arrow–Debreu_model
In mathematical economics, the Arrow–Debreu model is a theoretical general equilibrium model. It posits that under certain economic assumptions (convex preferences, perfect competition, and demand independence) there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy. The model is central to the theory of general (economic) equilibrium, and it is often used as a general reference for other microeconomic models. It was proposed by Kenneth Arrow and Gérard Debreu in 1954, and independently by Lionel W. McKenzie in the same year, with later improvements in 1959. The A-D model is one of the most general models of a competitive economy and is a crucial part of general equilibrium theory, as it can be used to prove the existence of a general equilibrium (or Walrasian equilibrium) of an economy.
c_lle53npqq5gd
Arrow–Debreu model
Summary
Arrow–Debreu_model
In general, there may be many equilibria. Arrow (1972) and Debreu (1983) were separately awarded the Nobel Prize in Economics for their development of the model; McKenzie, however, was not included in the award.
c_rxbsw9kaicuv
Solomon Mikhlin
Elasticity theory and boundary value problems
Solomon_Mikhlin > Work > Research activity > Elasticity theory and boundary value problems
In mathematical elasticity theory, Mikhlin was concerned with three themes: the plane problem (mainly from 1932 to 1935), the theory of shells (from 1954) and the Cosserat spectrum (from 1967 to 1973). Dealing with the plane elasticity problem, he proposed two methods for its solution in multiply connected domains. The first one is based upon the so-called complex Green's function and the reduction of the related boundary value problem to integral equations. The second method is a certain generalization of the classical Schwarz algorithm for the solution of the Dirichlet problem in a given domain by splitting it into simpler problems in smaller domains whose union is the original one.
c_44x0hmcbltls
Solomon Mikhlin
Elasticity theory and boundary value problems
Solomon_Mikhlin > Work > Research activity > Elasticity theory and boundary value problems
Mikhlin studied its convergence and gave applications to special applied problems. He proved existence theorems for the fundamental problems of plane elasticity involving inhomogeneous anisotropic media: these results are collected in the book (Mikhlin 1957). Concerning the theory of shells, Mikhlin wrote several articles dealing with it.
c_eiico05tbyt6
Solomon Mikhlin
Elasticity theory and boundary value problems
Solomon_Mikhlin > Work > Research activity > Elasticity theory and boundary value problems
He studied the error of the approximate solution for shells, similar to plane plates, and found that this error is small for the so-called purely rotational state of stress. As a result of his study of this problem, Mikhlin also gave a new (invariant) form of the basic equations of the theory. He also proved a theorem on perturbations of positive operators in a Hilbert space which allowed him to obtain an error estimate for the problem of approximating a sloping shell by a plane plate.
c_87lj9xecycqc
Solomon Mikhlin
Elasticity theory and boundary value problems
Solomon_Mikhlin > Work > Research activity > Elasticity theory and boundary value problems
Mikhlin also studied the spectrum of the operator pencil of the classical linear elastostatic operator, or Navier–Cauchy operator, A(ω)u = Δ₂u + ω∇(∇⋅u), where u is the displacement vector, Δ₂ is the vector Laplacian, ∇ is the gradient, ∇⋅ is the divergence, and ω is a Cosserat eigenvalue. The full description of the spectrum and the proof of the completeness of the system of eigenfunctions are also due to Mikhlin, and partly to V.G. Maz'ya in their only joint work.
c_grtlw71cz2dq
Corner angle
Identifying angles
Reflex_angle > Identifying angles
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, …)
c_ppp4c0c5126h
Corner angle
Identifying angles
Reflex_angle > Identifying angles
as variables denoting the size of some angle (to avoid confusion with its other meaning, the symbol π is typically not used for this purpose). Lower case Roman letters (a, b, c, . .
c_71nr3a5ao448
Corner angle
Identifying angles
Reflex_angle > Identifying angles
. ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex.
c_puw2gp2za5z6
Corner angle
Identifying angles
Reflex_angle > Identifying angles
See the figures in this article for examples. The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or \widehat{BAC}.
c_k5shlm9w5z0z
Corner angle
Identifying angles
Reflex_angle > Identifying angles
Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A"). Potentially, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see § Signed angles). However, in many geometrical situations, it is evident from the context that the positive angle less than or equal to 180 degrees is meant, and in these cases, no ambiguity arises. Otherwise, to avoid ambiguity, specific conventions may be adopted so that, for instance, ∠BAC always refers to the anticlockwise (positive) angle from B to C about A and ∠CAB the anticlockwise (positive) angle from C to B about A.
c_8eji27x2uq6w
Littlewood–Offord problem
Summary
Littlewood–Offord_problem
In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose summation is in A. The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord. This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than (c log n / √n)·2^n of the 2^n possible subsums of S fall into the disc. In 1945 Paul Erdős improved the upper bound for d = 1 to the binomial coefficient C(n, ⌊n/2⌋) ≈ 2^n·(1/√n), using Sperner's theorem.
c_3c4p52ybxjgh
Littlewood–Offord problem
Summary
Littlewood–Offord_problem
This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space. Suppose S = {v₁, …, v_n}.
c_kinaq5tehtgj
Littlewood–Offord problem
Summary
Littlewood–Offord_problem
By subtracting (1/2)·∑_{i=1}^{n} v_i from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form ∑_{i=1}^{n} ε_i v_i that fall in the target set A, where ε_i takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the v_i.
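The ±1 reformulation is easy to explore by brute force for small n. A sketch in plain Python (the sample vectors are illustrative; for d = 1 the disc of radius one becomes an open interval of length two):

```python
from itertools import product
from math import comb

def max_subsums_in_open_interval(v):
    # Count signed sums sum(eps_i * v_i), eps_i in {+1, -1}, and return
    # the largest number landing in any open interval of length 2.
    sums = sorted(sum(e * x for e, x in zip(eps, v))
                  for eps in product((1, -1), repeat=len(v)))
    # Slide a window of width < 2 over the sorted sums, take the max count.
    best, j = 0, 0
    for i in range(len(sums)):
        while j < len(sums) and sums[j] - sums[i] < 2:
            j += 1
        best = max(best, j - i)
    return best

n = 10
# Erdos's bound for d = 1: at most C(n, floor(n/2)) of the 2^n signed sums
# can lie in any open interval of length 2 when every |v_i| >= 1, with
# equality when all the v_i are equal.
assert max_subsums_in_open_interval([1.0] * n) == comb(n, n // 2)
```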
c_jis98rhm1t02
Pseudoreal representation
Summary
Pseudoreal_representation
In the mathematical field of representation theory, a quaternionic representation is a representation on a complex vector space V with an invariant quaternionic structure, i.e., an antilinear equivariant map j: V → V which satisfies j² = −1. Together with the imaginary unit i and the antilinear map k := ij, j equips V with the structure of a quaternionic vector space (i.e., V becomes a module over the division algebra of quaternions).
c_3uju06rykpib
Pseudoreal representation
Summary
Pseudoreal_representation
From this point of view, a quaternionic representation of a group G is a group homomorphism φ: G → GL(V, H), the group of invertible quaternion-linear transformations of V. In particular, a quaternionic matrix representation of G assigns a square matrix of quaternions ρ(g) to each element g of G such that ρ(e) is the identity matrix and ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G. Quaternionic representations of associative and Lie algebras can be defined in a similar way.
c_s7r6gstcus2a
Symplectic representation
Summary
Symplectic_representation
In the mathematical field of representation theory, a symplectic representation is a representation of a group or a Lie algebra on a symplectic vector space (V, ω) which preserves the symplectic form ω. Here ω is a nondegenerate skew-symmetric bilinear form ω: V × V → F, where F is the field of scalars. A representation of a group G preserves ω if ω(g⋅v, g⋅w) = ω(v, w) for all g in G and v, w in V, whereas a representation of a Lie algebra g preserves ω if ω(ξ⋅v, w) + ω(v, ξ⋅w) = 0 for all ξ in g and v, w in V. Thus a representation of G or g is equivalently a group or Lie algebra homomorphism from G or g to the symplectic group Sp(V, ω) or its Lie algebra sp(V, ω). If G is a compact group (for example, a finite group) and F is the field of complex numbers, then by introducing a compatible unitary structure (which exists by an averaging argument), one can show that any complex symplectic representation is a quaternionic representation. Quaternionic representations of finite or compact groups are often called symplectic representations, and may be identified using the Frobenius–Schur indicator.
c_9iqr2fdvmuad
Smoluchowski equation
Particular cases with known solution and inversion
Smoluchowski_equation > Particular cases with known solution and inversion
In mathematical finance, for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient σ(X_t, t) consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: given the density f(x, t) of the option underlying X deduced from the option market, one aims at finding the local volatility σ(X_t, t) consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility σ(X_t, t) consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is available in Fengler (2008), Gatheral (2008), and Musiela and Rutkowski (2008).
c_rijutjngt4bo
Margrabe's formula
Summary
Margrabe's_formula
In mathematical finance, Margrabe's formula is an option pricing formula applicable to an option to exchange one risky asset for another risky asset at maturity. It was derived by William Margrabe (PhD Chicago) in 1978. Margrabe's paper has been cited by over 2000 subsequent articles.
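The closed form is simple enough to sketch directly. Below is a minimal Python implementation of the exchange-option price in its standard no-dividend form, where only the volatility of the ratio of the two assets enters; the function and parameter names are illustrative, not from any particular library.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, t, sigma1, sigma2, rho):
    """Margrabe price of the option to exchange asset 2 for asset 1 at t.

    Taking asset 2 as numeraire reduces the problem to a Black-Scholes
    call on the ratio S1/S2, so only the volatility of that ratio enters.
    """
    sigma = sqrt(sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(s1 / s2) + 0.5 * sigma ** 2 * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)
```

As a sanity check, the price exceeds the intrinsic value max(S1 − S2, 0) for positive volatility, and a lower correlation between the two assets makes the exchange option more valuable.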
c_g9l46102t0bx
Monte Carlo option model
Summary
Monte_Carlo_methods_for_option_pricing
In mathematical finance, a Monte Carlo option model uses Monte Carlo methods to calculate the value of an option with multiple sources of uncertainty or with complicated features. The first application to option pricing was by Phelim Boyle in 1977 (for European options). In 1996, M. Broadie and P. Glasserman showed how to price Asian options by Monte Carlo. An important development was the introduction in 1996 by Carriere of Monte Carlo methods for options with early exercise features.
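As a minimal illustration of the method, the sketch below prices a plain European call by Monte Carlo under geometric Brownian motion, sampling the terminal price exactly in one step and averaging discounted payoffs. The parameter values and path count are arbitrary choices for the example.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths=100_000, seed=42):
    """Monte Carlo price of a European call under geometric Brownian motion.

    A European payoff depends only on the terminal price, so each path is
    sampled exactly in one step; the discounted average payoff estimates
    the risk-neutral expectation.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths
```

With s0 = k = 100, r = 5%, σ = 20%, and t = 1, the estimate should land near the Black–Scholes value of about 10.45, up to Monte Carlo noise.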
c_a03w2o1mz77b
Replicating portfolio
Summary
Replicating_portfolio
In mathematical finance, a replicating portfolio for a given asset or series of cash flows is a portfolio of assets with the same properties (especially cash flows). This is meant in two distinct senses: static replication, where the portfolio has the same cash flows as the reference asset (and no changes need to be made to maintain this), and dynamic replication, where the portfolio does not have the same cash flows, but has the same "Greeks" as the reference asset, meaning that for small (properly, infinitesimal) changes to underlying market parameters, the price of the asset and the price of the portfolio change in the same way. Dynamic replication requires continual adjustment, as the asset and portfolio are only assumed to behave similarly at a single point (mathematically, their partial derivatives are equal at a single point). Given an asset or liability, an offsetting replicating portfolio (a "hedge") is called a static hedge or dynamic hedge, and constructing such a portfolio (by selling or purchasing) is called static hedging or dynamic hedging.
c_grnhiladhd66
Replicating portfolio
Summary
Replicating_portfolio
The notion of a replicating portfolio is fundamental to rational pricing, which assumes that market prices are arbitrage-free – concretely, arbitrage opportunities are exploited by constructing a replicating portfolio. In practice, replicating portfolios are seldom, if ever, exact replications. Most significantly, unless they are claims against the same counterparties, there is credit risk. Further, dynamic replication is invariably imperfect, since actual price movements are not infinitesimal – they may in fact be large – and transaction costs to change the hedge are not zero.
c_7z495wkn9rdn
Equivalent Martingale Measure
Summary
Martingale_measure
In mathematical finance, a risk-neutral measure (also called an equilibrium measure, or equivalent martingale measure) is a probability measure such that each share price is exactly equal to the discounted expectation of the share price under this measure. This is heavily used in the pricing of financial derivatives due to the fundamental theorem of asset pricing, which implies that in a complete market, a derivative's price is the discounted expected value of the future payoff under the unique risk-neutral measure. Such a measure exists if and only if the market is arbitrage-free.
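The defining property, that each discounted share price equals its expectation under the measure, can be made concrete in a one-period binomial model, where the risk-neutral probability has a closed form. The helper below is a minimal sketch of that computation; the names are illustrative.

```python
import math

def risk_neutral_prob(u, d, r, dt):
    """One-period binomial risk-neutral (martingale) probability q.

    q is defined so the discounted stock price is a martingale:
    S0 = exp(-r*dt) * (q*u*S0 + (1-q)*d*S0).  It exists, and lies in
    (0, 1), exactly when d < exp(r*dt) < u, i.e. when there is no
    arbitrage between the stock and the riskless account.
    """
    growth = math.exp(r * dt)
    if not d < growth < u:
        raise ValueError("arbitrage: no equivalent martingale measure")
    return (growth - d) / (u - d)
```

By construction, q·u + (1 − q)·d equals the riskless growth factor, which is exactly the martingale condition on the discounted share price.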
c_gr5zveolqzuy
Convexity (finance)
Summary
Convexity_correction
In mathematical finance, convexity refers to non-linearities in a financial model. In other words, if the price of an underlying variable changes, the price of an output does not change linearly, but depends on the second derivative (or, loosely speaking, higher-order terms) of the modeling function. Geometrically, the model is no longer flat but curved, and the degree of curvature is called the convexity.
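Since convexity refers to the second derivative of the pricing function, it can be estimated numerically with a central finite difference. The sketch below does this for a toy zero-coupon bond pricer; both functions are illustrative, not from any particular library.

```python
def bond_price(y, cashflows):
    # Present value of [(time, cashflow), ...] at a flat yield y.
    return sum(cf / (1.0 + y) ** t for t, cf in cashflows)

def second_derivative(f, x, h=1e-5):
    # Central finite-difference estimate of f''(x): the curvature
    # ("convexity") of the pricing function at x.
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2
```

For a 5-year zero-coupon bond paying 100, the pricing function is P(y) = 100(1 + y)^(-5), so the exact second derivative at y = 0.05 is 3000/(1.05)^7, which the finite-difference estimate should reproduce closely.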
c_1r01xh9b2kir
Kelly criterion
Application to the stock market
Kelly_criterion > Application to the stock market
In mathematical finance, if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth), then a portfolio is growth optimal. Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of assets, but these parameters are at best estimates or models that have significant uncertainty.
c_98tishu6utuq
Kelly criterion
Application to the stock market
Kelly_criterion > Application to the stock market
If portfolio weights are largely a function of estimation errors, then the ex-post performance of a growth-optimal portfolio may differ dramatically from the ex-ante prediction. Parameter uncertainty and estimation errors are a large topic in portfolio theory. An approach to counteract the unknown risk is to invest less than the Kelly criterion (e.g., half).
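For a single risky asset with lognormal dynamics, the growth-optimal ("full Kelly") fraction has a well-known closed form, and the fractional-Kelly hedge against estimation error is just a scaling of it. A minimal sketch, with illustrative parameter names:

```python
def kelly_fraction(mu, r, sigma, shrink=1.0):
    """Growth-optimal fraction of wealth in one risky asset.

    For lognormal dynamics the log-wealth growth rate
    g(f) = r + f*(mu - r) - 0.5*f**2*sigma**2 is maximized at
    f* = (mu - r)/sigma**2.  shrink < 1 (e.g. 0.5 for "half Kelly")
    is the usual hedge against estimation error in mu and sigma.
    """
    return shrink * (mu - r) / sigma ** 2

def growth_rate(f, mu, r, sigma):
    # Expected log-wealth growth rate when fraction f is invested.
    return r + f * (mu - r) - 0.5 * f ** 2 * sigma ** 2
```

With mu = 8%, r = 2%, and sigma = 20%, full Kelly is f* = 0.06/0.04 = 1.5 (a leveraged position), and the growth rate at f* dominates any other fraction.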
c_5x4pb7efh1g0
Black-Scholes equation
Summary
Black–Scholes_equation
In mathematical finance, the Black–Scholes equation is a partial differential equation (PDE) governing the price evolution of a European call or European put under the Black–Scholes model. Broadly speaking, the term may refer to a similar PDE that can be derived for a variety of options, or more generally, derivatives. For a European call or put on an underlying stock paying no dividends, the equation is ∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0, where V is the price of the option as a function of stock price S and time t, r is the risk-free interest rate, and σ is the volatility of the stock. The key financial insight behind the equation is that, under the model assumption of a frictionless market, one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula.
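One way to make the PDE tangible is to price a call with the closed-form Black–Scholes formula and then verify numerically, by finite differences, that the price satisfies the equation. The sketch below does exactly that; the step size h is an arbitrary choice for the example.

```python
from math import log, sqrt, exp, erf

def ncdf(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, tau):
    # Black-Scholes European call price; tau is time to maturity.
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s * ncdf(d1) - k * exp(-r * tau) * ncdf(d2)

def pde_residual(s, k, r, sigma, tau, h=1e-3):
    """Finite-difference residual of the Black-Scholes PDE at (s, tau).

    Since tau = T - t is time to maturity, dV/dt = -dV/dtau.  The
    residual should be ~0 if bs_call solves the PDE.
    """
    v = bs_call(s, k, r, sigma, tau)
    v_t = -(bs_call(s, k, r, sigma, tau + h) - bs_call(s, k, r, sigma, tau - h)) / (2.0 * h)
    v_s = (bs_call(s + h, k, r, sigma, tau) - bs_call(s - h, k, r, sigma, tau)) / (2.0 * h)
    v_ss = (bs_call(s + h, k, r, sigma, tau) - 2.0 * v + bs_call(s - h, k, r, sigma, tau)) / h ** 2
    return v_t + 0.5 * sigma ** 2 * s ** 2 * v_ss + r * s * v_s - r * v
```

For an at-the-money call with S = K = 100, r = 5%, σ = 20%, and one year to maturity, the formula gives about 10.45, and the PDE residual is numerically zero at any interior point.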
c_kdboewthdmkt
Constant elasticity of variance model
Summary
Constant_elasticity_of_variance_model
In mathematical finance, the CEV or constant elasticity of variance model is a stochastic volatility model that attempts to capture stochastic volatility and the leverage effect. The model is widely used by practitioners in the financial industry, especially for modelling equities and commodities. It was developed by John Cox in 1975.
c_xyikywt681bu
Cox–Ingersoll–Ross model
Summary
CIR_process
In mathematical finance, the Cox–Ingersoll–Ross (CIR) model describes the evolution of interest rates. It is a type of "one factor model" (short-rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives. It was introduced in 1985 by John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross as an extension of the Vasicek model.
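A common way to work with the CIR dynamics dr = κ(θ − r)dt + σ√r dW when a closed form is inconvenient is Euler discretization with "full truncation" to keep the square root well defined. A minimal sketch, with illustrative parameter values:

```python
import math
import random

def simulate_cir(r0, kappa, theta, sigma, t, n_steps=1000, seed=1):
    """One Euler path of the CIR model dr = kappa*(theta - r)dt + sigma*sqrt(r)dW.

    'Full truncation' (clamping r at 0 inside the drift and diffusion
    terms) is a standard fix for the discretized scheme: the continuous
    model keeps rates non-negative, but a raw Euler step need not.
    """
    rng = random.Random(seed)
    dt = t / n_steps
    r = r0
    path = [r]
    for _ in range(n_steps):
        r_plus = max(r, 0.0)
        r += kappa * (theta - r_plus) * dt \
             + sigma * math.sqrt(r_plus * dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path
```

Mean reversion is visible directly: starting at r0 = 3% with long-run level θ = 5% and strong reversion κ = 2, the terminal rate averaged over a handful of independent paths sits close to θ.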
c_3ny8vto8xe7s
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
In mathematical finance, the Doob decomposition theorem can be used to determine the largest optimal exercise time of an American option. Let X = (X0, X1, . . .
c_fxen789pge0c
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
, XN) denote the non-negative, discounted payoffs of an American option in a N-period financial market model, adapted to a filtration (F0, F1, . . .
c_w383hudh7kf4
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
, FN), and let Q {\displaystyle \mathbb {Q} } denote an equivalent martingale measure. Let U = (U0, U1, . .
c_frhb30ed5khc
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
. , UN) denote the Snell envelope of X with respect to Q {\displaystyle \mathbb {Q} } . The Snell envelope is the smallest Q {\displaystyle \mathbb {Q} } -supermartingale dominating X and in a complete financial market it represents the minimal amount of capital necessary to hedge the American option up to maturity.
c_hs02nnt30i63
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
Let U = M + A denote the Doob decomposition with respect to Q {\displaystyle \mathbb {Q} } of the Snell envelope U into a martingale M = (M0, M1, . . .
c_mjtvpji14lpl
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
, MN) and a decreasing predictable process A = (A0, A1, . . .
c_yy6l52ugbn0e
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
, AN) with A0 = 0. Then the largest stopping time to exercise the American option in an optimal way is τmax := N if AN = 0, and τmax := min{n ∈ {0, …, N − 1} | An+1 < 0} if AN < 0. Since A is predictable, the event {τmax = n} = {An = 0, An+1 < 0} is in Fn for every n ∈ {0, 1, .
c_ic0802uep4k3
Doob decomposition theorem
Application
Doob_decomposition_theorem > Application
. . , N − 1}, hence τmax is indeed a stopping time. It gives the last moment before the discounted value of the American option will drop in expectation; up to time τmax the discounted value process U is a martingale with respect to Q {\displaystyle \mathbb {Q} } .
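The construction above can be made concrete on a small recombining binomial tree: backward induction yields the Snell envelope, and along any path the predictable Doob increment E[U_{n+1} | F_n] − U_n identifies τmax as the step before A first drops. The sketch below is illustrative; the payoff tree in the usage note is a toy discounted American put.

```python
def snell_envelope(payoff, q):
    """Snell envelope on a recombining binomial tree.

    payoff[n][j] is the discounted payoff after n steps and j up-moves;
    q is the risk-neutral up-probability.  Backward induction:
    U_N = X_N,  U_n = max(X_n, q*U_{n+1}(up) + (1-q)*U_{n+1}(down)).
    """
    u = [row[:] for row in payoff]
    for n in range(len(payoff) - 2, -1, -1):
        for j in range(n + 1):
            cont = q * u[n + 1][j + 1] + (1.0 - q) * u[n + 1][j]
            u[n][j] = max(payoff[n][j], cont)
    return u

def tau_max(payoff, u, q, path):
    """Largest optimal exercise time along one path (list of 0/1 moves).

    The Doob increment A_{n+1} - A_n = E[U_{n+1} | F_n] - U_n is known
    at time n (A is predictable); tau_max is the first n at which it
    turns negative, or N if A_N = 0 along this path.
    """
    j = 0
    for n in range(len(u) - 1):
        cont = q * u[n + 1][j + 1] + (1.0 - q) * u[n + 1][j]
        if cont - u[n][j] < 0.0:
            return n
        j += path[n]
    return len(u) - 1
```

For the two-period discounted put payoffs [[1.0], [2.4, 0.0], [2.56, 0.64, 0.0]] with q = 0.5, the envelope gives U0 = 1.36; τmax is 1 on the down-down path (where A drops after step 1) but N = 2 on a path that starts with an up-move.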
c_um4lrlodki7h
Option delta
Summary
Greeks_(finance)
In mathematical finance, the Greeks are the quantities representing the sensitivity of the price of derivatives such as options to a change in underlying parameters on which the value of an instrument or portfolio of financial instruments is dependent. The name is used because the most common of these sensitivities are denoted by Greek letters (as are some other finance measures). Collectively these have also been called the risk sensitivities, risk measures, or hedge parameters.
c_sha0219b34u9
SABR volatility model
Summary
SABR_volatility_model
In mathematical finance, the SABR model is a stochastic volatility model, which attempts to capture the volatility smile in derivatives markets. The name stands for "stochastic alpha, beta, rho", referring to the parameters of the model. The SABR model is widely used by practitioners in the financial industry, especially in the interest rate derivative markets. It was developed by Patrick S. Hagan, Deep Kumar, Andrew Lesniewski, and Diana Woodward.
c_8rje2dvhpq81
Local volatility
Formulation
Local_volatility > Formulation
In mathematical finance, the asset St that underlies a financial derivative is typically assumed to follow a stochastic differential equation of the form dSt = (rt − dt)St dt + σt St dWt under the risk-neutral measure, where rt is the instantaneous risk-free rate, giving an average local direction to the dynamics, and Wt is a Wiener process, representing the inflow of randomness into the dynamics. The amplitude of this randomness is measured by the instant volatility σt. In the simplest model, i.e. the Black–Scholes model, σt is assumed to be constant, or at most a deterministic function of time; in reality, the realised volatility of an underlying actually varies with time and with the underlying itself. When such volatility has a randomness of its own (often described by a different equation driven by a different W), the model above is called a stochastic volatility model.
c_asq6ef548j4t
Local volatility
Formulation
Local_volatility > Formulation
And when such volatility is merely a function of the current underlying asset level St and of time t, we have a local volatility model. The local volatility model is a useful simplification of the stochastic volatility model.
c_7z8duzb4yhp5
Local volatility
Formulation
Local_volatility > Formulation
"Local volatility" is thus a term used in quantitative finance to denote the set of diffusion coefficients σt = σ(St, t) that are consistent with market prices for all options on a given underlying, yielding an asset price model of the type dSt = (rt − dt)St dt + σ(St, t)St dWt. This model is used to calculate exotic option valuations which are consistent with observed prices of vanilla options.
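Given any candidate local-volatility function σ(S, t), the model can be simulated directly with an Euler–Maruyama scheme. The sketch below is a minimal illustration; the CEV-style surface is purely hypothetical, not calibrated to any market.

```python
import math
import random

def euler_local_vol(s0, r, d, sigma_fn, t, n_steps=252, seed=0):
    """Euler-Maruyama simulation of one terminal value of
    dS = (r - d)*S*dt + sigma_fn(S, t)*S*dW."""
    rng = random.Random(seed)
    dt = t / n_steps
    s = s0
    for i in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s += (r - d) * s * dt + sigma_fn(s, i * dt) * s * dw
    return s

def cev_style_vol(s, t):
    # A hypothetical CEV-style local-volatility surface, for illustration
    # only: volatility rises as the spot falls (a leverage-like effect).
    return 0.2 * (max(s, 1e-8) / 100.0) ** -0.5
```

A quick consistency check: with a flat σ the scheme reduces to GBM, so the terminal mean over many paths should sit near the forward s0·e^((r−d)t).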
c_n7fzesqqb9ua
Stochastic volatility jump
Summary
Stochastic_volatility_jump
In mathematical finance, the stochastic volatility jump (SVJ) model is suggested by Bates. This model fits the observed implied volatility surface well. The model is a Heston process for stochastic volatility with an added Merton log-normal jump. It assumes the following correlated processes: dS = μS dt + √ν S dZ1 + (e^(α+δε) − 1)S dq, dν = λ(ν̄ − ν) dt + η√ν dZ2, corr(dZ1, dZ2) = ρ, prob(dq = 1) = λ dt, where S is the price of the security, μ is the constant drift (i.e. expected return), t represents time, Z1 is a standard Brownian motion, and q is a Poisson counter with intensity λ.
c_0z5tr4y2ylud
No such thing as a free lunch
Finance
No_such_thing_as_a_free_lunch > History and usage > Meanings > Finance
In mathematical finance, the term is also used as an informal synonym for the principle of no-arbitrage. This principle states that a combination of securities that has the same cash-flows as another security must have the same net price in equilibrium.
c_c8wxfu1gdhcj
Volatility risk premium
Summary
Volatility_risk_premium
In mathematical finance, the volatility risk premium is a measure of the extra amount investors demand in order to hold a volatile security, above what can be computed based on expected returns. It can be defined as the compensation for inherent volatility risk divided by the volatility beta.
c_gbbl07o8q2oq
Thompson uniqueness theorem
Summary
Thompson_uniqueness_theorem
In mathematical finite group theory, Thompson's original uniqueness theorem (Feit & Thompson 1963, theorems 24.5 and 25.2) states that in a minimal simple finite group of odd order there is a unique maximal subgroup containing a given elementary abelian subgroup of rank 3. Bender (1970) gave a shorter proof of the uniqueness theorem.
c_4quex9pcq49c
Thompson factorization
Summary
Thompson_factorization
In mathematical finite group theory, a Thompson factorization, introduced by Thompson (1966), is an expression of some finite groups as a product of two subgroups, usually normalizers or centralizers of p-subgroups for some prime p.
c_am4zgsvsscj7
Aschbacher block
Summary
Aschbacher_block
In mathematical finite group theory, a block, sometimes called Aschbacher block, is a subgroup giving an obstruction to Thompson factorization and pushing up. Blocks were introduced by Michael Aschbacher.
c_193d4kj18vc1
Groups of GF(2) type
Summary
Groups_of_GF(2)_type
In mathematical finite group theory, a group of GF(2)-type is a group with an involution centralizer whose generalized Fitting subgroup is a group of symplectic type (Gorenstein 1982, definition 1.45). As the name suggests, many of the groups of Lie type over the field with 2 elements are groups of GF(2)-type. Also 16 of the 26 sporadic groups are of GF(2)-type, suggesting that in some sense sporadic groups are somehow related to special properties of the field with 2 elements. Timmesfeld (1978) showed roughly that groups of GF(2)-type can be subdivided into 8 types.
c_3jhwzv6p6i0q
Groups of GF(2) type
Summary
Groups_of_GF(2)_type
The groups of each of these 8 types were classified by various authors. They consist mainly of groups of Lie type with all roots of the same length over the field with 2 elements, but also include many exceptional cases, including the majority of the sporadic simple groups. Smith (1980) gave a survey of this work. Smith (1979, p.279) gives a table of simple groups containing a large extraspecial 2-group.
c_r3p8ap0s6uyd
Group of symplectic type
Summary
Group_of_symplectic_type
In mathematical finite group theory, a p-group of symplectic type is a p-group such that all characteristic abelian subgroups are cyclic. According to Thompson (1968, p.386), the p-groups of symplectic type were classified by P. Hall in unpublished lecture notes, who showed that they are all a central product of an extraspecial group with a group that is cyclic, dihedral, quasidihedral, or quaternion. Gorenstein (1980, 5.4.9) gives a proof of this result. The width n of a group G of symplectic type is the largest integer n such that the group contains an extraspecial subgroup H of order p^(1+2n) such that G = H.CG(H), or 0 if G contains no such subgroup. Groups of symplectic type appear in centralizers of involutions of groups of GF(2)-type.
c_vv046821bx6t
Quadratic pair
Summary
Quadratic_pair
In mathematical finite group theory, a quadratic pair for the odd prime p, introduced by Thompson (1971), is a finite group G together with a quadratic module, a faithful representation M on a vector space over the finite field with p elements such that G is generated by elements with minimal polynomial (x − 1)2. Thompson classified the quadratic pairs for p ≥ 5. Chermak (2004) classified the quadratic pairs for p = 3. With a few exceptions, especially for p = 3, groups with a quadratic pair for the prime p tend to be more or less groups of Lie type in characteristic p.
c_y58rpzmf3125
Rank 3 permutation group
Summary
Rank_3_permutation_group
In mathematical finite group theory, a rank 3 permutation group acts transitively on a set such that the stabilizer of a point has 3 orbits. The study of these groups was started by Higman (1964, 1971). Several of the sporadic simple groups were discovered as rank 3 permutation groups.
c_2gw04ibdo00t
N-group (finite group theory)
Summary
N-group_(finite_group_theory)
In mathematical finite group theory, an N-group is a group all of whose local subgroups (that is, the normalizers of nontrivial p-subgroups) are solvable groups. The non-solvable ones were classified by Thompson during his work on finding all the minimal finite simple groups.
c_u4haf00j4u93
Exceptional character
Summary
Exceptional_character
In mathematical finite group theory, an exceptional character of a group is a character related in a certain way to a character of a subgroup. They were introduced by Suzuki (1955, p. 663), based on ideas due to Brauer in (Brauer & Nesbitt 1941).
c_vvp24ggdjzn2
Baer–Suzuki theorem
Summary
Baer–Suzuki_theorem
In mathematical finite group theory, the Baer–Suzuki theorem, proved by Baer (1957) and Suzuki (1965), states that if any two elements of a conjugacy class C of a finite group generate a nilpotent subgroup, then all elements of the conjugacy class C are contained in a nilpotent subgroup. Alperin & Lyons (1971) gave a short elementary proof.
c_bb5u2511w2vx
Brauer–Fowler theorem
Summary
Brauer–Fowler_theorem
In mathematical finite group theory, the Brauer–Fowler theorem, proved by Brauer & Fowler (1955), states that if a group G has even order g > 2 then it has a proper subgroup of order greater than g^(1/3). The technique of the proof is to count involutions (elements of order 2) in G. Perhaps more important is another result that the authors derive from the same count of involutions, namely that up to isomorphism there are only a finite number of finite simple groups with a given centralizer of an involution. This suggested that finite simple groups could be classified by studying their centralizers of involutions, and it led to the discovery of several sporadic groups. Later it motivated a part of the classification of finite simple groups.
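The involution count at the heart of the proof is easy to reproduce by brute force for tiny groups. The sketch below counts order-2 elements of a symmetric group; for example S4 has order 24 > 2 and indeed contains A4, of order 12 > 24^(1/3).

```python
from itertools import permutations

def count_involutions(n):
    """Brute-force count of involutions (order-2 elements) in S_n.

    An involution satisfies p(p(i)) = i for all i and is not the identity.
    """
    identity = tuple(range(n))
    return sum(
        1
        for p in permutations(range(n))
        if p != identity and all(p[p[i]] == i for i in range(n))
    )
```

S3 has 3 involutions (the transpositions), and S4 has 9 (six transpositions plus three products of two disjoint transpositions).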
c_zl2kb29xoimr
Dade isometry
Summary
Dade_isometry
In mathematical finite group theory, the Dade isometry is an isometry from class functions on a subgroup H with support on a subset K of H to class functions on a group G (Collins 1990, 6.1). It was introduced by Dade (1964) as a generalization and simplification of an isometry used by Feit & Thompson (1963) in their proof of the odd order theorem, and was used by Peterfalvi (2000) in his revision of the character theory of the odd order theorem.
c_5su8uoydpfj7
Dempwolff group
Summary
Dempwolff_group
In mathematical finite group theory, the Dempwolff group is a finite group of order 319979520 = 2^15·3^2·5·7·31, that is the unique nonsplit extension 2^5.GL5(F2) of GL5(F2) by its natural module of order 2^5. The uniqueness of such a nonsplit extension was shown by Dempwolff (1972), and the existence by Thompson (1976), who showed using some computer calculations of Smith (1976) that the Dempwolff group is contained in the compact Lie group E8 as the subgroup fixing a certain lattice in the Lie algebra of E8, and is also contained in the Thompson sporadic group (the full automorphism group of this lattice) as a maximal subgroup.
c_wdxkphv4if9n
Dempwolff group
Summary
Dempwolff_group
Huppert (1967, p.124) showed that any extension of GLn(Fq) by its natural module Fq^n splits if q > 2, and Dempwolff (1973) showed that it also splits if n is not 3, 4, or 5, and in each of these three cases there is just one non-split extension. These three nonsplit extensions can be constructed as follows: The nonsplit extension 2^3.
c_y4pc94tor16d
Dempwolff group
Summary
Dempwolff_group
GL3(F2) is a maximal subgroup of the Chevalley group G2(F3). The nonsplit extension 2^4.
c_sthfrp1lzpns
Dempwolff group
Summary
Dempwolff_group
GL4(F2) is a maximal subgroup of the sporadic Conway group Co3. The nonsplit extension 2^5.GL5(F2) is a maximal subgroup of the Thompson sporadic group Th.
c_o8qptg0xez53
Gorenstein–Harada theorem
Summary
Gorenstein–Harada_theorem
In mathematical finite group theory, the Gorenstein–Harada theorem, proved by Gorenstein and Harada (1973, 1974) in a 464-page paper, classifies the simple finite groups of sectional 2-rank at most 4. It is part of the classification of finite simple groups. Finite simple groups of sectional 2-rank at least 5 have Sylow 2-subgroups with a self-centralizing normal subgroup of rank at least 3, which implies that they have to be of either component type or of characteristic 2 type. Therefore, the Gorenstein–Harada theorem splits the problem of classifying finite simple groups into these two sub-cases.
c_yapyxqd3rsrv
L-balance theorem
Summary
L-balance_theorem
In mathematical finite group theory, the L-balance theorem was proved by Gorenstein & Walter (1975). The letter L stands for the layer of a group, and "balance" refers to the property discussed below.
c_cv8qqt1io5y4
Puig subgroup
Summary
Puig_subgroup
In mathematical finite group theory, the Puig subgroup, introduced by Puig (1976), is a characteristic subgroup of a p-group analogous to the Thompson subgroup.
c_3af4p7rlowj4
Thompson order formula
Summary
Thompson_order_formula
In mathematical finite group theory, the Thompson order formula, introduced by John Griggs Thompson (Held 1969, p.279), gives a formula for the order of a finite group in terms of the centralizers of involutions, extending the results of Brauer & Fowler (1955).
c_dh6da0mp1p1w
Thompson subgroup
Summary
Thompson_subgroup
In mathematical finite group theory, the Thompson subgroup J(P) of a finite p-group P refers to one of several characteristic subgroups of P. John G. Thompson (1964) originally defined J(P) to be the subgroup generated by the abelian subgroups of P of maximal rank. More often the Thompson subgroup J(P) is defined to be the subgroup generated by the abelian subgroups of P of maximal order or the subgroup generated by the elementary abelian subgroups of P of maximal rank. In general these three subgroups can be different, though they are all called the Thompson subgroup and denoted by J(P).
c_16srhpgg7rf6
Thompson transitivity theorem
Summary
Thompson_transitivity_theorem
In mathematical finite group theory, the Thompson transitivity theorem gives conditions under which the centralizer of an abelian subgroup A acts transitively on certain subgroups normalized by A. It originated in the proof of the odd order theorem by Feit and Thompson (1963), where it was used to prove the Thompson uniqueness theorem.
c_6vpxb3tvbtvd
Classical involution theorem
Summary
Classical_involution_theorem
In mathematical finite group theory, the classical involution theorem of Aschbacher (1977a, 1977b, 1980) classifies simple groups with a classical involution and satisfying some other conditions, showing that they are mostly groups of Lie type over a field of odd characteristic. Berkman (2001) extended the classical involution theorem to groups of finite Morley rank. A classical involution t of a finite group G is an involution whose centralizer has a subnormal subgroup containing t with quaternion Sylow 2-subgroups.
c_rji790yn9m4y
Regular p-group
Summary
Regular_p-group
In mathematical finite group theory, the concept of regular p-group captures some of the more important properties of abelian p-groups, but is general enough to include most "small" p-groups. Regular p-groups were introduced by Philip Hall (1934).